International Core Journal of Engineering 2020-26 | Page 124
lack specificity and compactness. Learning-based local descriptor methods learn better representations from data, and the subsequent coding step makes the features more compact.

III. SEMI-SUPERVISED STRUCTURED SPARSE GRAPH CLASSIFICATION

We propose a semi-supervised structured sparse graph data classification method. The main steps are to first regularize the projection columns with the L2 norm in a graph embedding framework, and then reduce the complexity with a general manifold regularization method based on a semi-supervised learning algorithm for sparse graphs. This complements the emphasis on spatial complexity in traditional theory. The high-dimensional face data X and the low-dimensional representation Y are related by a transformation T; formula (3) is the optimization objective of this method.
Shallow representations still have inevitable limitations: they are not robust to complex nonlinear changes in face appearance. In addition, face recognition generally requires the user's full cooperation to keep the image quality at a good level, but this premise is not satisfied in many cases, so the available training data may be scarce; under special conditions, only a single image corresponds to an ID. The case where the number of data samples is smaller than the sample dimension is called the small sample size problem.
Y^* = \arg\min_{y = X^T t} \frac{y^T L y}{y^T D y}    (3)

where L = D - W is the graph Laplacian of the affinity matrix W, D is its degree matrix, and t is a projection column.
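As an illustrative sketch (not the paper's implementation; the data, sizes, and variable names below are made up), a ratio objective of this form can be minimized by solving a generalized eigenvalue problem after substituting y = X^T t:

```python
import numpy as np
from scipy.linalg import eigh

# Toy data: d-dimensional samples X (d x n) and a made-up symmetric
# non-negative affinity matrix W.
rng = np.random.default_rng(0)
d, n = 5, 8
X = rng.normal(size=(d, n))
A = rng.random((n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # graph Laplacian

# Substituting y = X^T t turns min (y^T L y)/(y^T D y) into the
# generalized eigenvalue problem (X L X^T) t = lam (X D X^T) t.
eigvals, eigvecs = eigh(X @ L @ X.T, X @ D @ X.T)
t = eigvecs[:, 0]            # eigenvector of the smallest eigenvalue
y = X.T @ t
ratio = (y @ L @ y) / (y @ D @ y)
```

The achieved ratio equals the smallest generalized eigenvalue, which is what makes the eigen-decomposition an exact solver for this objective.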
The regularized risk function in the manifold structure is the sum of the empirical risk and a regularization term. Minimizing the regularized risk over a reproducing kernel Hilbert space H with kernel K gives [13]:

f^* = \arg\min_{f \in H} \frac{1}{m} \sum_{i=1}^{m} V(x_i, y_i, f(x_i)) + \lambda \|f\|_H^2    (4)
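A minimal sketch of minimizing a regularized risk of this form with the square loss, via kernel ridge regression in a Gaussian-kernel RKHS (the data, kernel width, and lambda are illustrative choices, not values from the paper):

```python
import numpy as np

# Kernel ridge regression sketch: minimize
#   (1/m) * sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2
# over a Gaussian-kernel RKHS. By the representer theorem,
# f(x) = sum_i c_i K(x_i, x) with c = (K + lam*m*I)^{-1} y.
rng = np.random.default_rng(1)
m = 30
x = rng.uniform(-3, 3, size=m)
y = np.sin(x) + 0.1 * rng.normal(size=m)   # noisy targets (made up)

def gaussian_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

lam = 1e-2
K = gaussian_kernel(x, x)
c = np.linalg.solve(K + lam * m * np.eye(m), y)

x_test = np.linspace(-3, 3, 7)
f_test = gaussian_kernel(x_test, x) @ c    # evaluate the fitted f
err = float(np.max(np.abs(f_test - np.sin(x_test))))
```

The L2 penalty on the RKHS norm is what keeps the system well-conditioned and the fitted function smooth.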
Laplacian regularized least squares classification and Laplacian SVM methods perform semi-supervised learning well with both labeled and unlabeled data. Gaussian fields and harmonic functions form another semi-supervised learning method, based on Gaussian random fields. A similar semi-supervised learning algorithm has been proposed based on local linear reconstruction coefficients. The semi-supervised classification problem can also be addressed by establishing a penalty term on the manifold and unifying localized discriminant analysis with geometrically regularized least squares classification.
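As a toy illustration of the Gaussian field and harmonic function approach (the graph and labels below are made up), the unlabeled nodes receive the harmonic solution f_u = L_uu^{-1} W_ul f_l, so each unlabeled node's value is the average of its neighbors:

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge 2-3.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0

labeled, unlabeled = [0, 5], [1, 2, 3, 4]
f_l = np.array([1.0, -1.0])      # node 0 labeled +1, node 5 labeled -1

D = np.diag(W.sum(axis=1))
L = D - W                        # graph Laplacian
L_uu = L[np.ix_(unlabeled, unlabeled)]
W_ul = W[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, W_ul @ f_l)   # harmonic solution
# Nodes 1, 2 inherit +1 from node 0; nodes 3, 4 inherit -1 from node 5.
```

The sign of f_u assigns each unlabeled node to the cluster of its nearest labeled node, which is exactly the label-propagation behavior described above.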
Graph embedding represents each node of a graph in vector form; the vectors describe the nodes, and edge weights describe the similarity relationships between nodes. The general approach in face recognition is to map the high-dimensional face data to low-dimensional vectors y = (y_1, y_2, \dots, y_m)^T. Data that are adjacent in the original space should also be adjacent in the low-dimensional space. Once the nodes are represented in vector form, class output can be performed by this method. The method minimizes formula (1) and transforms it into formula (2), which preserves the local spatial structure of the data.
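The equivalence between formula (1) and the Laplacian form of formula (2) can be checked numerically; the Gaussian-kernel weights below are an illustrative choice, not the paper's construction:

```python
import numpy as np

# Ten 4-dimensional points; Gaussian-kernel affinities (illustrative).
rng = np.random.default_rng(2)
X = rng.normal(size=(4, 10))
sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
W = np.exp(-sq / 2.0)
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(axis=1))
L = D - W                       # graph Laplacian
y = rng.normal(size=10)         # an arbitrary 1-D embedding

# sum_ij (y_i - y_j)^2 w_ij  ==  2 * y^T (D - W) y  for symmetric W
lhs = sum((y[i] - y[j]) ** 2 * W[i, j]
          for i in range(10) for j in range(10))
rhs = 2.0 * (y @ L @ y)
```

This identity is why minimizing the pairwise objective reduces to a quadratic form in the Laplacian, solvable by eigen-decomposition.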
The L2 norm is used to restrict the solution space, controlling model complexity and reducing the structural risk.
Semi-supervised learning is used to learn from data that contains both labeled and unlabeled samples [11]. The goal is to change the learning style so that labeled and unlabeled data are mixed: supervised algorithms such as classification and unsupervised algorithms such as clustering are combined in algorithm design. Semi-supervised learning can also serve as a quantitative tool for understanding how humans learn class attributes. In machine learning and data mining, semi-supervised learning is very effective: it uses unlabeled data to further improve performance when labeled data is scarce.
Further, using a graph-based semi-supervised classifier, the hypothesis graph is represented by a matrix W of size n \times n, where w_{ij} is the non-negative weight between nodes i and j. This knowledge can be expressed as a penalty term in the regularized risk function described above; in particular, a large weight between similar nodes indicates that the nodes are likely to share a similar label:

\min \sum_{i,j} (y_i - y_j)^2 w_{ij}    (1)

The discriminant function is as follows:

\min y^T L y \quad \text{s.t. } y^T D y = 1    (2)

where D is the diagonal degree matrix with entries D_{ii} = \sum_j w_{ij} and L = D - W is the graph Laplacian. The optimal solution for y can be obtained by solving the corresponding generalized eigenvalue problem.
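A small sketch of this eigenvalue solution on a made-up toy graph: the second generalized eigenvector of L y = \lambda D y minimizes y^T L y subject to y^T D y = 1 while avoiding the trivial constant solution:

```python
import numpy as np
from scipy.linalg import eigh

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge 2-3.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0

D = np.diag(W.sum(axis=1))
L = D - W                          # graph Laplacian

# Generalized problem L y = lam D y; eigh returns ascending eigenvalues
# with eigenvectors normalized so that y^T D y = 1.
eigvals, eigvecs = eigh(L, D)
y = eigvecs[:, 1]                  # skip the trivial constant vector
s = np.sign(y)
# The embedding coordinate separates the two triangles by sign.
```

Using the second-smallest eigenvector is the standard way to discard the constant vector, which has eigenvalue zero and carries no discriminative information.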
The general manifold regularization framework can be used to solve a variety of learning problems, from unsupervised and semi-supervised to supervised [12]. A new regularized representation can be generated by using a reproducing kernel Hilbert space together with priors on the inherent low-dimensional manifold. An additional penalty term measuring smoothness along the manifold enhances the ability to represent data through its inherent structure:

\min_{f \in H} \frac{1}{m} \sum_{i=1}^{m} (y_i - f(x_i))^2 + \lambda_1 \|f\|_H^2 + \lambda_2 f^T L f    (5)

where f(x) = \sum_i c_i K(x_i, x) and (y_i - f(x_i))^2 is the square loss function.
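A minimal sketch of solving an objective of the form (5) in closed form under the expansion f = Kc, in the style of Laplacian regularized least squares (the data, kernel, and regularization weights are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Two 1-D Gaussian clusters; the first 3 points of each are labeled.
rng = np.random.default_rng(3)
n = 20
X = np.concatenate([rng.normal(-2, 0.5, 10), rng.normal(2, 0.5, 10)])
y = np.zeros(n)
y[:3], y[10:13] = 1.0, -1.0
J = np.diag((y != 0).astype(float))    # indicator of labeled points
m = 6                                   # number of labeled points

K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)  # Gaussian kernel
W = K.copy()
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian

# With f = K c, the objective has the closed-form solution
#   c = (J K + lam1*m*I + lam2*m*L K)^{-1} J y
lam1, lam2 = 1e-2, 1e-2
c = np.linalg.solve(J @ K + lam1 * m * np.eye(n) + lam2 * m * (L @ K),
                    J @ y)
f = K @ c
```

The manifold term \lambda_2 f^T L f is what propagates the six labels across the unlabeled points: points in the same cluster are forced to take similar values of f.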
IV. EXPERIMENT RESULTS

In order to compare and validate the method presented in this paper, the standard face dataset CASIA-FaceV5 was used. It contains photos of 500 people, 5 per person, for 2,500 photos in total; each face image is a 16-bit color BMP file of 640 x 480 pixels. Here we divide the data into