International Core Journal of Engineering 2020-26 | Page 100
as a device-independent model to be used as a reference. It
expresses color as three values: L for the lightness from
black to white, a from green to red, and b from blue to
yellow. Therefore, in order to realize a color-based difference method, Lab
features are also applied to the inter-frame difference method in the
proposed approach.
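Before the Lab features can be differenced, each RGB frame must be converted to the Lab color space. A minimal numpy sketch of the standard sRGB-to-CIE-Lab conversion is given below; it assumes the D65 white point and float inputs in [0, 1], and is a generic reference implementation, not necessarily the exact conversion used by the authors.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIE Lab.

    Assumes the D65 white point (an assumption; the paper does not state
    which conversion was used).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # 1. Undo the sRGB gamma to get linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)

    # 2. Linear RGB -> XYZ (sRGB primaries, D65).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ M.T

    # 3. Normalize by the D65 reference white, apply the Lab nonlinearity.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    delta = 6.0 / 29.0
    f = np.where(xyz > delta ** 3, np.cbrt(xyz),
                 xyz / (3 * delta ** 2) + 4.0 / 29.0)

    L = 116.0 * f[..., 1] - 16.0          # lightness, black to white
    a = 500.0 * (f[..., 0] - f[..., 1])   # green to red
    b = 200.0 * (f[..., 1] - f[..., 2])   # blue to yellow
    return np.stack([L, a, b], axis=-1)
```

As a sanity check, a pure white pixel maps to L close to 100 with a and b close to 0, and a black pixel to L = 0.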
Fig. 3. The Gabor function with different parameter combinations.
III. THE PROPOSED METHOD
In the adopted data set, each video lasts two seconds at 30 frames per
second. The texture of the background is assumed to be relatively fixed. A
Gabor filter is used in our experiment. The important parameters of the
Gabor filter are the wavelength and the orientation. The experiments are
conducted with wavelengths 2 and 4 and orientations 0° and 90°, so that
there are four Gabor filter combinations: {2, 0°}, {2, 90°}, {4, 0°}, and
{4, 90°}.
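The four filter combinations can be generated as a small bank of real Gabor kernels. The sketch below uses a common parameterization in which the envelope width sigma is tied to the wavelength, with aspect ratio gamma = 0.5 and phase psi = 0; these settings are assumptions, since the paper does not state the full kernel parameters.

```python
import numpy as np

def gabor_kernel(wavelength, theta, sigma=None, gamma=0.5, psi=0.0, size=15):
    """Real part of a Gabor kernel.

    sigma defaults to 0.5 * wavelength (an assumption; the paper does not
    state how the envelope width was chosen).
    """
    if sigma is None:
        sigma = 0.5 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotate the coordinate frame by the filter orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope modulated by a cosine carrier along x_t.
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

# The four {wavelength, orientation} combinations used in the experiments:
# {2, 0°}, {2, 90°}, {4, 0°}, {4, 90°}.
bank = [gabor_kernel(w, t)
        for w in (2, 4)
        for t in (0.0, np.pi / 2)]
```

With psi = 0, each kernel has value 1 at its center (the envelope and cosine both evaluate to 1 there), which is a quick way to sanity-check the bank.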
The proposed method includes two main steps: detection using color
features and detection using texture features. An example of detection
using the color features is shown in Fig. 4.

Fig. 4. The color-based inter-frame difference. (a): an original video
frame; (b): the difference image; (c): the difference image after
de-noising; (d): the detection result, Bitmap_Lab.
A. Inter-frame difference detection based on color for the moving target
The color features of adjacent frames are extracted to detect the moving
target by the following steps:
1) The RGB image is converted to the Lab color space. The L, a, and b
images of the two adjacent frames are subtracted pixel by pixel. The
absolute values of the differences are obtained as difference_L,
difference_a, and difference_b, and the three difference images are then
superimposed to obtain the total difference of the color features,
Difference_Lab.
2) In the de-noising processing, a threshold parameter is applied to
Difference_Lab to obtain the Difference_Lab_Threshold image.
3) Similarly, the Difference_Lab_Threshold image is converted into a
binary image by setting non-zero pixel values to 1 and all others to 0.
The method described in the post-processing part is then used to process
the binary image to obtain Bitmap_Lab.

B. Inter-frame difference detection based on texture for the moving target
The texture features are extracted from adjacent frames and used to
detect the moving target.
1) The RGB image is converted to grayscale, because the texture features
are independent of the RGB color information. A group of Gabor filters is
designed (there are N = a*b Gabor filters in each group, where a is the
number of angle parameters and b is the number of wavelength parameters).
These N Gabor filters are then used to process the frame images.
2) The images of two adjacent frames are denoted F1 and F2. F1_i and F2_i
are obtained by processing them with the i-th (1 <= i <= N) Gabor filter.
F1_i and F2_i are subtracted and the absolute value is taken to obtain
the difference_i image. All the difference_i (1 <= i <= N) images are
then added together to obtain the total difference of the texture
features, Difference_Gabor.
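The texture-difference steps can be sketched end to end as follows. This assumes the Gabor bank is supplied as a list of 2-D kernels with odd side lengths, and uses a plain zero-padded, same-size cross-correlation; the filtering details (padding, normalization) are assumptions rather than choices taken from the paper.

```python
import numpy as np

def filter2d(image, kernel):
    """Naive same-size 2-D cross-correlation with zero padding.

    Assumes an odd-sized kernel; a stand-in for whatever filtering
    routine the authors actually used.
    """
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def texture_difference(f1, f2, bank):
    """Steps 1)-2) of the texture branch: filter both grayscale frames
    with every Gabor kernel, take per-filter absolute differences, and
    sum them into Difference_Gabor."""
    diff = np.zeros(f1.shape, dtype=np.float64)
    for kernel in bank:
        f1_i = filter2d(f1, kernel)
        f2_i = filter2d(f2, kernel)
        diff += np.abs(f1_i - f2_i)   # difference_i, accumulated
    return diff
```

A quick check of the logic: two identical frames produce an all-zero Difference_Gabor, while any change between the frames yields a positive total difference. The color branch (Difference_Lab) follows the same absolute-difference-and-sum pattern, applied to the L, a, and b channels instead of the N filter responses.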