International Core Journal of Engineering 2020-26 | Page 102
V. CONCLUSION
In this paper, to improve moving-target detection in video frames, an inter-frame difference method is proposed that combines texture features with color features. The experimental results show that texture features are crucial for moving-target detection, while color features serve as a good supplement. Extensive experiments demonstrate that the proposed method successfully detects moving targets in the video datasets. A likely reason is that the color features and the texture features capture two different aspects of the images, so the color features compensate where the texture features extracted by the Gabor filter are susceptible to environmental changes. Our future work is to improve the proposed method by making full use of the temporal continuity between video frames.
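The pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the kernel parameters, thresholds, function names, the use of plain RGB differences in place of the Lab color space, and the logical-OR combination of the two bitmaps are all assumptions made for the sketch.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    # Real part of a Gabor filter: Gaussian envelope times a cosine carrier.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    # Naive zero-padded 'same' convolution; enough for a small sketch.
    half = k.shape[0] // 2
    padded = np.pad(img, half)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k[::-1, ::-1])
    return out

def detect_moving_targets(prev_rgb, curr_rgb, t_texture=0.5, t_color=20.0):
    # Texture cue: inter-frame difference of Gabor responses on grayscale frames.
    k = gabor_kernel()
    gray_prev = prev_rgb.mean(axis=2)
    gray_curr = curr_rgb.mean(axis=2)
    bitmap_gabor = np.abs(convolve2d(gray_curr, k) - convolve2d(gray_prev, k)) > t_texture
    # Color cue: per-pixel inter-frame difference (RGB stands in for Lab here).
    bitmap_color = np.abs(curr_rgb.astype(float) - prev_rgb.astype(float)).sum(axis=2) > t_color
    # Combine the two cues into the final binary detection mask.
    return bitmap_gabor | bitmap_color
```

For example, two synthetic frames in which a bright square moves from one position to another yield a mask that is true over both the vacated and the newly occupied regions, with the static background suppressed.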
Fig. 7. Experimental results for the Highway dataset. (a) The original video frame; (b) the extracted moving-targets frame; (c) detection result Bitmap_Gabor; (d) detection result Bitmap_Lab; (e) combined detection result Binary_image.
ACKNOWLEDGMENT
This work is supported by the National Key R&D Program of China (Grant Nos. 2017YFE0111900 and 2018YFB1003205).
REFERENCES
Fig. 8. Experimental results for the Office dataset. (a) The original video frame; (b) the extracted moving-targets frame; (c) detection result Bitmap_Gabor; (d) detection result Bitmap_Lab; (e) combined detection result Binary_image.
Fig. 9. Experimental results for the Pedestrian dataset. (a) The original video frame; (b) the extracted moving-targets frame; (c) detection result Bitmap_Gabor; (d) detection result Bitmap_Lab; (e) combined detection result Binary_image.
Fig. 10. Experimental results for the PETS2006 dataset. (a) The original video frame; (b) the extracted moving-targets frame; (c) detection result Bitmap_Gabor; (d) detection result Bitmap_Lab; (e) combined detection result Binary_image.
[1] R. Szeliski, "Computer Vision: Algorithms and Applications", Springer Science & Business Media, pp. 27-86, 2010.
[2] J. S. Kulchandani, K. J. Dangarwala, "Moving object detection: Review of recent research trends", International Conference on Pervasive Computing (ICPC), Pune, India, IEEE, 2015.
[3] A. Krizhevsky, I. Sutskever, G. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks", Advances in Neural Information Processing Systems, vol. 25, 2012.
[4] A. J. Lipton, H. Fujiyoshi, R. S. Patil, "Moving target classification and tracking from real-time video", IEEE Workshop on Applications of Computer Vision (WACV), 1998.
[5] B. Lucas, T. Kanade, "An iterative image registration technique with an application to stereo vision", International Joint Conference on Artificial Intelligence, 1981.
[6] B. Horn, B. Schunck, "Determining Optical Flow", Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203, 1981.
[7] C. Stauffer, W. Grimson, "Adaptive background mixture models for real-time tracking", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
[8] S. Ren, K. He, R. Girshick, J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", IEEE Transactions on Pattern Analysis & Machine Intelligence, pp. 1137-1149, 2015.
[9] O. Barnich, M. Van Droogenbroeck, "ViBe: A universal background subtraction algorithm for video sequences", IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1709-1724, 2011.
[10] M. Van Droogenbroeck, O. Barnich, "Visual Background Extractor", World Intellectual Property Organization, WO 2009/007198, 2009.
[11] J. Jacques Jr., C. Jung, S. Musse, "A background subtraction model adapted to illumination changes", IEEE International Conference on Image Processing, 2007.
[12] A. Zeileis, K. Hornik, P. Murrell, "Escaping RGBland: Selecting Colors for Statistical Graphics", Computational Statistics & Data Analysis, vol. 53, no. 9, pp. 3259-3270, 2009.
[13] K. Trambitsky, K. Anding, G. Polte, D. Garten, V. Musalimov, "Out-of-focus region segmentation of 2D surface images with the use of texture features", Scientific and Technical Journal of Information Technologies, Mechanics and Optics, vol. 15, no. 5, pp. 796-802, 2015.
[14] A. Jain, N. Ratha, S. Lakshmanan, "Object Detection Using Gabor Filters", Pattern Recognition, vol. 30, no. 2, pp. 295-309, 1997.
[15] G. Jemilda, S. Baulkani, "Capturing Moving Objects in Video Using Gabor and Local Spatial Context Model", Asian Journal of Information Technology, vol. 15, no. 5, pp. 846-850, 2016.
[16] A. G. Ramakrishnan, S. Kumar Raja, H. V. Raghu Ram, "Neural