edges represent similarities between samples, and the nodes represent data samples. Sparse representations have recently been studied extensively because they describe the intrinsic relationships and structures between data samples. The hypothesis of sparse representation is that a few of the most important coefficients describe the whole signal, conveying the discriminative information in a very compact form. The coefficients in a sparse representation graph are sparse, and the connections between the data sample nodes are correspondingly rare. When a transformation is performed, the direction of each data sample is kept consistent with that of its surrounding neighbors, so that the difference between classes becomes larger and a sparse representation in the low-dimensional space is realized. However, sparse representation performs poorly in special situations where training data is scarce [6].
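For illustration, the following is a minimal sketch of how such a sparse representation graph can be built, assuming NumPy and scikit-learn; the Lasso coder and the `alpha` value are illustrative choices, not the construction used in any cited work.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_representation_graph(X, alpha=0.05):
    """Code each sample as a sparse linear combination of all other
    samples; the nonzero coefficients become (directed) edge weights."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=0)      # dictionary: every sample except x_i
        coder = Lasso(alpha=alpha, max_iter=5000)
        coder.fit(others.T, X[i])             # min ||x_i - D w||^2 + alpha ||w||_1
        W[i, np.arange(n) != i] = np.abs(coder.coef_)
    return W

# Toy usage: most entries of W come out exactly zero (sparse edges)
X = np.random.randn(40, 16)
W = sparse_representation_graph(X)
print("nonzero edges:", np.count_nonzero(W))
```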
There are still some difficulties with the above classification methods. For example, it is difficult for the classifier to conform to a multi-level graph model. In addition, data in real scenes rarely carry valid labels, so the performance of semi-supervised graph classifiers under real conditions is not ideal [7].
The discrete cosine transform (DCT), a method similar to the discrete Fourier transform, transforms data from the spatial domain to a frequency representation and divides the image into regions of different importance. The principle is that when the image is converted to the frequency domain, most of the energy is concentrated in the low frequencies and the high frequencies become unimportant, so this approach reduces the amount of data without degrading the visual quality of the image [10].
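The idea can be sketched in a few lines, assuming SciPy; the block size `k` is an illustrative parameter.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(img):
    # 2-D type-II DCT as two separable 1-D transforms (orthonormal)
    return dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_features(img, k=8):
    """Keep only the top-left k x k low-frequency coefficients,
    where most of the image energy is concentrated."""
    return dct2(img.astype(float))[:k, :k].ravel()

img = np.random.rand(64, 64)      # stand-in for a grayscale face image
feat = dct_features(img)          # 64 coefficients instead of 4096 pixels
print(feat.shape)
```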
Using these extracted features together with machine learning techniques allows faces to be classified. For example, one can use the simplest machine learning method, the K-nearest neighbor classifier, to find the training data around the target data from coarse to fine; use the well-known support vector machine together with elastic bunch graph matching to achieve face classification; or use sparse representation-based classification, in which it is assumed that a sample can be linearly combined from data of the same identity, and the residual between the sample and its reconstruction is calculated.
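A minimal sketch of the sparse representation-based classification just described, assuming NumPy and scikit-learn (the Lasso coder and `alpha` are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(x, D, labels, alpha=0.01):
    """Code the test sample over all training samples, then assign the
    class whose samples give the smallest reconstruction residual."""
    coder = Lasso(alpha=alpha, max_iter=5000)
    coder.fit(D.T, x)                            # columns of D.T are training atoms
    w = coder.coef_
    residuals = {c: np.linalg.norm(x - D.T @ np.where(labels == c, w, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy usage: two identities, 10 samples each, 32-dimensional features
D = np.random.randn(20, 32)
labels = np.array([0] * 10 + [1] * 10)
x = D[3] + 0.05 * np.random.randn(32)            # noisy copy of a class-0 sample
print(src_classify(x, D, labels))                # expected: 0
```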
This paper proposes a semi-supervised structured sparse graph data classification method for face feature extraction. First, the graph data are transformed into vectors of uniform length and a class-probability classifier is generated; then a semi-supervised sparse graph data classifier is constructed. Through structured sparse learning, the connection relationships between the labeled and unlabeled data vectors, together with their weights, are obtained automatically, realizing a sparse representation of the semi-supervised data samples. Experiments show that this semi-supervised sparse graph data classification method has clear advantages, including good robustness and adaptability to noise.
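The following sketch conveys only the general idea of semi-supervised classification over a sparse affinity graph, in which labels diffuse from labeled to unlabeled nodes; it is a generic label-propagation scheme, not the structured sparse learning algorithm proposed in this paper.

```python
import numpy as np

def propagate_labels(W, y_seed, n_labeled, n_iter=100, mu=0.9):
    """Diffuse class probabilities from the first n_labeled nodes to the
    rest along the (sparse) graph W, re-clamping the seeds each step."""
    n = W.shape[0]
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # row-normalize
    n_classes = int(y_seed[:n_labeled].max()) + 1
    Y = np.zeros((n, n_classes))
    Y[np.arange(n_labeled), y_seed[:n_labeled]] = 1.0          # one-hot seeds
    F = Y.copy()
    for _ in range(n_iter):
        F = mu * (S @ F) + (1 - mu) * Y                        # diffuse + clamp
    return F.argmax(axis=1)                                    # label per node
```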
The organizational structure of this paper is as follows: Section II introduces the basic face feature extraction technology; Section III details the semi-supervised structured sparse graph data classification; Section IV gives the test results on several face datasets; finally, the conclusions are given.
Figure 1. Model of a face classification system based on DCT and HOG.
HOG is a feature commonly used in image recognition tasks such as object detection, and it performs well on many datasets with different spatial scales, orientations, and normalizations. The image is cut into smaller connected regions, and a histogram of gradient directions or edge directions is computed for each such region; combining these histograms generates the final HOG descriptor. HOG has two main parameters: the row and column cell size of the histogram patch, and the number of orientation bins used to partition the gradient angles. HOG is robust to geometric and photometric transformations. A basic model for extracting facial features using DCT and HOG as the feature extraction methods is shown in Fig. 1.
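A minimal sketch of HOG extraction, assuming scikit-image; the cell size and bin count shown are illustrative values of the two main parameters.

```python
import numpy as np
from skimage.feature import hog

img = np.random.rand(64, 64)           # stand-in for a grayscale face crop
features = hog(img,
               orientations=9,         # number of gradient-angle bins
               pixels_per_cell=(8, 8), # row/column cell size of the patch
               cells_per_block=(2, 2), # local contrast-normalization blocks
               block_norm='L2-Hys')
print(features.shape)                  # one long vector of cell histograms
```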
II. FACE FEATURE EXTRACTION TECHNOLOGY
Face classification and recognition is one of the most successful commercial applications in the field of artificial intelligence. As a biometric identification technology, the face is convenient, non-invasive, and accurate in practical applications, so it is widely used in education, security, finance, and other fields [8]. Face classification requires the generation of LBP histograms by algorithms such as the discrete cosine transform, local binary patterns, and spatial local binary patterns, and can combine LBP and HOG to form a descriptor [9].
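A hedged sketch of such a combined LBP-plus-HOG descriptor, assuming NumPy and scikit-image; the neighborhood size, radius, and histogram binning are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_histogram(img, P=8, R=1):
    """Pool uniform LBP codes into a normalized histogram."""
    codes = local_binary_pattern(img, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in face crop
descriptor = np.concatenate([lbp_histogram(img),
                             hog(img, pixels_per_cell=(8, 8))])
print(descriptor.shape)
```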
Face recognition from standard databases, real data, and sensor data has become a challenging problem due to the wide variety of faces and the wide range of applications. Although face recognition technology has advanced by leaps and bounds in recent years, rapid changes in lighting, facial expression, and pose, together with the very limited training data available to most real-world recognition systems, result in poor performance of face recognition systems under certain application conditions. Face recognition based on local features such as Gabor and LBP, and their multi-level and high-dimensional extended versions, achieves relatively robust performance under some invariance requirements. However, these hand-designed features still have limitations.
The new scale-invariant feature transform can handle large-scale and diverse face data. Experiments have shown that effective face features are compact and adequately describe the features of the data samples. The principal component analysis (PCA) algorithm for face recognition generalizes well because the high-dimensional space in which the face lies is reduced to a low-dimensional subspace without discarding the energy of the main features. However, PCA retains only the directions of largest variance and cannot control the most discriminative directions. The subsequent LDA method made up for this shortcoming of PCA, and the good result of face classification was obtained by making heterogeneous samples as separated as possible and samples of the same class as compact as possible. Manifold learning algorithms can likewise preserve the internal structure of the data well when extracting features.
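A minimal sketch of this PCA-then-LDA pipeline on synthetic data, assuming scikit-learn; the dimensions and component counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy data: 100 "face" vectors of dimension 1024 from 5 identities
X = np.random.randn(100, 1024)
y = np.repeat(np.arange(5), 20)

# PCA keeps the directions of largest variance (the main feature energy)...
X_pca = PCA(n_components=50).fit_transform(X)

# ...and LDA then seeks directions that separate the classes,
# compensating for PCA's lack of discriminative information.
X_lda = LinearDiscriminantAnalysis().fit(X_pca, y).transform(X_pca)
print(X_lda.shape)                # at most (n_classes - 1) = 4 dimensions
```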