2019 International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM)
FPGA-based Moving Object Detection with Interferences
Lih-Jen Kau, Member, IEEE, Guo-Ting Jhao, Wei-Xiang Lai, and You-Ran Liu
Department of Electronic Engineering, National Taipei University of Technology
No.1, Sec. 3, Chung-Hsiao E. Rd., Taipei 10608, Taiwan
Email: [email protected], [email protected], [email protected], [email protected]
Abstract— Visual interference caused by bad weather conditions can have a negative impact on the performance of a visual surveillance system. Detecting moving objects quickly and accurately is a very important step for further analysis, e.g., object recognition, object tracking, event detection, or behavior analysis, in a visual surveillance system. However, most moving object detection systems are based on a high-end microprocessor because of their highly complex algorithms. Moreover, the detection accuracy is often affected by the environment in which the system is used. In this paper, we propose using an FPGA (Field Programmable Gate Array) to realize a visual surveillance system so that the real-time requirement can be achieved. In addition, the proposed system can adapt itself to a variety of environmental situations, especially light changes such as those caused by shadows and raindrops. As the experiments will show, the proposed system achieves very good performance in terms of power consumption, memory usage, and throughput, as well as in its capability to adapt to interference.

Index Terms— FPGA, Pipeline, Moving object detection, Multi-interference, Rain drop removal

I. INTRODUCTION

Harsh environments tend to degrade the performance of a visual surveillance system. In general, harsh environments can be classified into two categories: (i) the stable type, such as fog and haze, and (ii) the dynamic type, such as rain, hail, and snow. Among these, dynamic raindrops are the most common interference across various weather conditions. Moreover, most of the aforementioned interferences result in changes in ambient brightness and cause performance degradation of a moving object detection system. For this reason, many approaches have been proposed to handle the effect of an instant light change, e.g., shadows and light reflections, in a visual surveillance system [1], [2], [6]-[8].

In this paper, we propose an FPGA-based moving object detection system that can adapt to the environmental interference caused by light intensity changes due to shadows and raindrops. We first try to find and recover the values of those pixels affected by an increase in light intensity, e.g., the light reflection due to raindrops. Secondly, a series of video processing techniques including shadow detection is applied to remove salt-and-pepper noise and determine the area affected by shadow. Thirdly, time-sequence subtraction on consecutive video frames is used to find the contours of moving objects. After that, we apply morphological operations. Finally, we perform segmentation on the detected moving objects. As we will see in the experiments, the proposed system is very useful and can achieve a throughput as high as 54.4 MPixels/sec.
II. PROPOSED PIPELINE ARCHITECTURE
The proposed system is a fourteen-stage pipeline architecture, as shown in Fig. 1. The details of the individual stages are explained in this section.
• Stage 1. CCD Capture:
This stage captures the video frame via the CMOS sensor.
• Stage 2. Raw data to RGB color space:
In this stage, we convert the raw data from the CMOS sensor to the RGB color model.
• Stage 3. Color quantization:
To fit the SDRAM memory as well as the width of the data bus, the RGB resolution is quantized: the quantized R, G, and B components are 5, 6, and 5 bits wide, respectively (an illustrative C sketch of this packing is given after the stage descriptions).
• Stage 4. SDRAM storage:
This stage stores four video frames in the four SDRAM memory banks of the ALTERA DE2-70 module (a behavioral model of this frame buffering is sketched after the stage descriptions).
• Stage 5. Multi-interference removal:
The multi-interference removal algorithm can be divided into two parts: “ambient brightness change detection” and “reconstruction”. We first examine whether R_n, G_n, and B_n are all greater than a predefined threshold T_1. If so, the pixel is regarded as a high-brightness pixel (e.g., caused by the light reflection due to a raindrop), and we try to recover its value with (1).
P_n =
\begin{cases}
P_{n-1}, & \text{if } |R_n - R_{n-1}| > T_2, \; |G_n - G_{n-1}| > T_2, \; |B_n - B_{n-1}| > T_2 \\
P_{n-2}, & \text{if } |R_n - R_{n-2}| > T_2, \; |G_n - G_{n-2}| > T_2, \; |B_n - B_{n-2}| > T_2 \\
P_n, & \text{otherwise}
\end{cases}
\qquad (1)
where T_2 in (1) is a predefined threshold. That is, we try to recover the value of those pixels affected by the raindrop using three consecutive video frames (a behavioral C sketch of this rule is given below).
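As a concrete illustration of the Stage-3 color quantization, the following C sketch packs 8-bit R, G, and B samples into the 5-6-5 format described above. The function name and the assumption of 8-bit input samples are ours and are not taken from the paper.

#include <stdint.h>

/* Pack 8-bit R, G, B into one 16-bit RGB565 word (5 bits red,
 * 6 bits green, 5 bits blue), as described for Stage 3. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) |   /* 5 most significant bits of red   */
                      ((g >> 2) << 5)  |   /* 6 most significant bits of green */
                       (b >> 3));          /* 5 most significant bits of blue  */
}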
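The next sketch is a purely behavioral software model of the Stage-4 frame storage, with four in-memory buffers standing in for the four SDRAM banks of the DE2-70 board. The frame size, the round-robin bank assignment, and all identifiers are illustrative assumptions rather than details taken from the paper.

#include <stdint.h>
#include <string.h>

#define FRAME_W   640u            /* hypothetical frame width  */
#define FRAME_H   480u            /* hypothetical frame height */
#define NUM_BANKS 4u              /* four SDRAM banks assumed, one frame per bank */

/* One RGB565 frame buffer per bank. */
static uint16_t frame_bank[NUM_BANKS][FRAME_W * FRAME_H];

/* Store the n-th frame into bank (n mod 4) so that the most recent
 * frames n, n-1, and n-2 needed by Stage 5 remain available. */
static void store_frame(uint32_t n, const uint16_t *pixels)
{
    memcpy(frame_bank[n % NUM_BANKS], pixels, sizeof(frame_bank[0]));
}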
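Finally, the Stage-5 recovery rule of (1) can be modeled in C as shown below. The sketch assumes that the three per-channel conditions of each case in (1) must hold simultaneously and that the thresholds T_1 and T_2 are applied identically to all channels; the data types and function names are illustrative.

#include <stdint.h>
#include <stdlib.h>

typedef struct { uint8_t r, g, b; } rgb_t;

/* True when the current pixel differs from a reference pixel by more
 * than T2 in every color channel, as in the conditions of (1). */
static int differs(rgb_t a, rgb_t ref, int t2)
{
    return abs(a.r - ref.r) > t2 &&
           abs(a.g - ref.g) > t2 &&
           abs(a.b - ref.b) > t2;
}

/* pn, pn1, pn2: the pixel at the same position in frames n, n-1, and n-2. */
static rgb_t recover_pixel(rgb_t pn, rgb_t pn1, rgb_t pn2, int t1, int t2)
{
    /* Only high-brightness pixels (all channels above T1) are recovered. */
    if (pn.r > t1 && pn.g > t1 && pn.b > t1) {
        if (differs(pn, pn1, t2)) return pn1;   /* first case of (1)  */
        if (differs(pn, pn2, t2)) return pn2;   /* second case of (1) */
    }
    return pn;                                  /* otherwise keep P_n */
}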