Fig. 1. Proposed pipeline system.
Stage 6. Median filtering:
In this stage, we apply a median filter to remove salt-and-pepper noise. As can be seen in Fig. 1, we process the current frame (R_1, G_1, B_1) and the previous frame (R_2, G_2, B_2) simultaneously, because there are two memory banks, each with its own data bus. Therefore, we can read video data from both memory banks at the same time, which accelerates the run-time performance.
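For illustration, the following C++ sketch models the 3×3 median filtering of this stage on a single 8-bit channel; it is only a software model with illustrative names, not the hardware implementation, and in the pipeline it would be applied to each channel of the current frame (R_1, G_1, B_1) and the previous frame (R_2, G_2, B_2).

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// 3x3 median filter over one 8-bit channel stored row-major.
// Border pixels are copied unchanged for simplicity.
std::vector<uint8_t> medianFilter3x3(const std::vector<uint8_t>& src,
                                     int width, int height) {
    std::vector<uint8_t> dst(src);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            std::array<uint8_t, 9> window;
            int i = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    window[i++] = src[(y + dy) * width + (x + dx)];
            // Replacing the centre pixel by the median of its 3x3
            // neighbourhood suppresses isolated salt-and-pepper outliers.
            std::nth_element(window.begin(), window.begin() + 4, window.end());
            dst[y * width + x] = window[4];
        }
    }
    return dst;
}
```

A 3×3 window is the usual choice here because it removes isolated impulse noise while preserving edges better than linear smoothing.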
Stage 7. Color space conversion: We find that the characteristics of a raindrop and a shadow can be better recognized in the HSV color model. Therefore, the RGB color space is converted to the HSV color model in this stage. This stage is the most complex part of the system.
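As a reference for this conversion, the sketch below gives the standard hexcone RGB-to-HSV formulas in C++ (H in degrees, S and V normalized to [0, 1]); the struct and function names are illustrative, and a hardware realization would typically use fixed-point arithmetic rather than this floating-point model.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Hsv { float h; float s; float v; };  // h in degrees [0, 360), s and v in [0, 1]

// Standard hexcone RGB-to-HSV conversion for one 8-bit pixel.
Hsv rgbToHsv(uint8_t r8, uint8_t g8, uint8_t b8) {
    const float r = r8 / 255.0f, g = g8 / 255.0f, b = b8 / 255.0f;
    const float maxc = std::max({r, g, b});
    const float minc = std::min({r, g, b});
    const float delta = maxc - minc;

    Hsv out{0.0f, 0.0f, maxc};              // V is the largest channel
    if (maxc > 0.0f) out.s = delta / maxc;  // S is the chroma normalised by V
    if (delta > 0.0f) {                     // H depends on which channel dominates
        if (maxc == r)      out.h = 60.0f * std::fmod((g - b) / delta, 6.0f);
        else if (maxc == g) out.h = 60.0f * ((b - r) / delta + 2.0f);
        else                out.h = 60.0f * ((r - g) / delta + 4.0f);
        if (out.h < 0.0f)   out.h += 360.0f;
    }
    return out;
}
```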
Stage 8. Shadow detection:
A pixel value can be represented as in (2),

P_k(x, y) = R_k(x, y) · L_k(x, y),    (2)

where k ∈ {R, G, B}, and L_k(x, y) and R_k(x, y) denote the luminance and the reflectance of the pixel at position (x, y), respectively. The luminance can be further represented as in (3),

L_k(x, y) = L̂_k(x, y) · C_k(x, y),    (3)

where L̂_k(x, y) is the mean luminance and C_k(x, y) is the shadow rate. Combining (2) and (3), the pixel value P_k(x, y) can be represented as in (4).

P_k(x, y) = R_k(x, y) · L̂_k(x, y) · C_k(x, y).    (4)

According to the research by Cucchiara et al. in 2001 and 2003, a detected shadow does not change its hue H, but its saturation S and value V can be decreased [4][6]. Therefore, we can determine whether the pixel (x, y) under consideration is in a shadow area by examining whether the following inequalities hold.

|P_n^H − P_{n−1}^H| < T_3   or   |P_n^H − P_{n−1}^H| > (360 − T_3)    (5)

|P_n^S − P_{n−1}^S| > T_4    (6)

|P_n^V − P_{n−1}^V| > T_5    (7)

We decide that a pixel P_n is in a shadow area if inequalities (5), (6), and (7) hold simultaneously, and such a point is not considered as a candidate for a moving object.
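A minimal software sketch of the shadow test defined by inequalities (5)-(7) follows; the threshold values T_3, T_4, and T_5 used here are placeholders, since their concrete settings are not restated in this excerpt, and the function name is illustrative.

```cpp
#include <cmath>

// Placeholder thresholds; the concrete values of T3 (degrees), T4 and T5
// are not restated in this excerpt.
constexpr float kT3 = 10.0f;
constexpr float kT4 = 0.1f;
constexpr float kT5 = 0.1f;

// Returns true when inequalities (5), (6) and (7) hold simultaneously for
// the current pixel P_n (hCur, sCur, vCur) and the co-located pixel P_{n-1}
// of the previous frame (hPrev, sPrev, vPrev); H in degrees, S and V in [0, 1].
bool isShadowPixel(float hCur, float sCur, float vCur,
                   float hPrev, float sPrev, float vPrev) {
    const float dh = std::fabs(hCur - hPrev);
    const bool hueStable = (dh < kT3) || (dh > 360.0f - kT3);  // (5): hue essentially unchanged
    const bool satChange = std::fabs(sCur - sPrev) > kT4;      // (6)
    const bool valChange = std::fabs(vCur - vPrev) > kT5;      // (7)
    return hueStable && satChange && valChange;
}
```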
Stage 9. Temporal subtraction: Background subtraction, temporal sequence subtraction, and optical flow are commonly used methods for moving object detection. Considering the computational complexity and the limited resources of the FPGA module, the background subtraction and optical flow methods are not used because of their high computational cost. In this paper, temporal sequence subtraction is applied instead, which also suits the limited FPGA memory size constraint. We use two consecutive video frames for temporal subtraction to obtain the contours of
moving objects. This method is relatively simple.
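A simple software model of this step is sketched below, assuming one 8-bit channel per frame buffer and an illustrative threshold; in the hardware pipeline the two frames come from the two memory banks mentioned in Stage 6.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Temporal subtraction on one 8-bit channel: pixels whose value differs by
// more than `threshold` between two consecutive frames are marked as part of
// a moving-object contour (255); all other pixels are marked background (0).
// The default threshold is an illustrative placeholder.
std::vector<uint8_t> temporalSubtraction(const std::vector<uint8_t>& prev,
                                         const std::vector<uint8_t>& curr,
                                         uint8_t threshold = 30) {
    std::vector<uint8_t> mask(curr.size(), 0);
    for (std::size_t i = 0; i < curr.size(); ++i) {
        const int diff = std::abs(static_cast<int>(curr[i]) -
                                  static_cast<int>(prev[i]));
        mask[i] = (diff > threshold) ? 255 : 0;
    }
    return mask;
}
```

The resulting binary mask marks the pixels that changed between frame n−1 and frame n, which is taken as the contour of the moving objects.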