Pedestrian Counting in Video Sequences based on Optical Flow Clustering
Sizuka Fujisawa, Go Hasegawa, Yoshiaki Taniguchi, Hirotaka Nakano
Pages - 1-16   |   Revised - 15-01-2013   |   Published - 28-02-2013
Volume - 7   |   Issue - 1   |   Publication Date - February 2013
KEYWORDS
Pedestrian Counting, Video Processing, Optical Flow, Clustering.
ABSTRACT
The demand for automatic counting of pedestrians at event sites, in buildings, and on streets has increased. Existing systems for counting pedestrians in video sequences suffer from degraded counting accuracy when many pedestrians coexist and occlusion occurs frequently. In this paper, we introduce a method that clusters optical flows extracted from pedestrians in video frames to improve counting accuracy. The proposed method counts pedestrians by using pre-learned statistics, exploiting the strong correlation between the number of optical flow clusters and the actual number of pedestrians. We evaluate the accuracy of the proposed method using several video sequences, focusing in particular on the effect of the parameters used for optical flow clustering. We find that the proposed method improves counting accuracy by up to 25% compared with a non-clustering method. We also report that a clustering threshold of less than 1 degree on flow angles is effective for enhancing counting accuracy. Furthermore, we compare the performance of two algorithms that detect optical flows at feature points and at lattice points, respectively. We confirm that counting accuracy using feature points is higher than that using lattice points, especially when the number of occluded pedestrians increases.
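The sketch below illustrates the general idea described in the abstract: extract sparse optical flows at feature points, group flows whose directions (and positions) are close, and map the number of clusters to a pedestrian count through a pre-learned relation. It is a minimal, hypothetical illustration using OpenCV's Lucas-Kanade tracker; the thresholds, the spatial-proximity rule, and the linear clusters-to-pedestrians ratio are assumptions for demonstration, not the authors' trained parameters.

```python
# Illustrative sketch of flow extraction and angle-based clustering.
# Assumed values (not from the paper): DIST_THRESH_PX, CLUSTERS_PER_PEDESTRIAN.
import math
import cv2
import numpy as np

ANGLE_THRESH_DEG = 1.0        # cluster flows whose directions differ by < 1 degree
DIST_THRESH_PX = 40.0         # assumed spatial proximity threshold (pixels)
CLUSTERS_PER_PEDESTRIAN = 3.0 # assumed pre-learned clusters-to-pedestrians ratio

def extract_flows(prev_gray, curr_gray):
    """Detect feature points in the previous frame and track them with Lucas-Kanade."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return []
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    flows = []
    for p, q, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.ravel()):
        if ok and np.linalg.norm(q - p) > 1.0:  # drop near-static points (background)
            flows.append((p, q))
    return flows

def cluster_flows(flows):
    """Greedy clustering: a flow joins a cluster if its direction and start point
    are close to those of the cluster's first member."""
    clusters = []
    for p, q in flows:
        ang = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
        for c in clusters:
            cp, _, cang = c[0]
            if (abs((ang - cang + 180.0) % 360.0 - 180.0) < ANGLE_THRESH_DEG
                    and np.linalg.norm(p - cp) < DIST_THRESH_PX):
                c.append((p, q, ang))
                break
        else:
            clusters.append([(p, q, ang)])
    return clusters

def estimate_count(clusters):
    """Map the number of flow clusters to a pedestrian count via an assumed linear relation."""
    return int(round(len(clusters) / CLUSTERS_PER_PEDESTRIAN))
```

In practice the ratio between cluster count and pedestrian count would be learned from labeled training sequences, which is what the pre-learned statistics in the abstract refer to.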
CITED BY (10)  
1 Taniguchi, Y., Mizushima, M., Hasegawa, G., Nakano, H., & Matsuoka, M. (2016). Counting Pedestrians Passing Through a Line in Crowded Scenes by Extracting Optical Flows. International Information Institute (Tokyo). Information, 19(1), 303.
2 Yoshida, T., & Taniguchi, Y. Estimating the number of people using existing WiFi access point in indoor environment.
3 Kim, J., Jang, G., Kim, G., & Kim, M. H. (2015). Crowd Activity Recognition using Optical Flow Orientation Distribution. KSII Transactions on Internet and Information Systems (TIIS), 9(8), 2948-2963.
4 Taniguchi, Y., & Nakano, H. (2014). Modeling and evaluation of a ceiling-mounted compound-eye sensor. International Information Institute (Tokyo). Information, 17(2), 663.
5 Matsuoka, S., et al. (2014). Pedestrian count estimation using H.264/AVC coded video (in Japanese). The Journal of the Institute of Image Electronics Engineers of Japan, 43(4), 599-604.
6 Shi, T., Cheng, et al. (2014). A statistical method for pedestrian detection and tracking based on multi-feature fusion (in Chinese). Television Technology, 38(19), 179-183.
7 Mo, J., Zhao, P., & Yuan, H. (2014). A face recognition algorithm based on Gabor features and enhanced RSC (in Chinese). Television Technology, 38(19), 170-174.
8 Hashimoto, K., Taniguchi, Y., Hasegawa, G., Nakano, H., & Matsuoka, M. Pedestrian counting based on the number of salient points considering non-linear effect of occlusions.
9 Mizushima, M., Taniguchi, Y., Hasegawa, G., Nakano, H., & Matsuoka, M. (2013). Counting Pedestrians Passing through a Line in Video Sequences based on Optical Flow Extraction. Proceedings of CSECS 2013, 26-33.
10 Lin, D. T., & Huang, C. Robust detection for moving objects based on edge information and frame differences.
Miss Sizuka Fujisawa
Graduate School of Information Science and Technology, Osaka University, Suita, Osaka 565-0871, Japan

Associate Professor Go Hasegawa
Cybermedia Center, Osaka University, Toyonaka, Osaka 560-0043, Japan
hasegawa@cmc.osaka-u.ac.jp

Dr. Yoshiaki Taniguchi
Cybermedia Center, Osaka University, Toyonaka, Osaka 560-0043, Japan

Professor Hirotaka Nakano
Cybermedia Center, Osaka University, Toyonaka, Osaka 560-0043, Japan

