People and Object Tracking

Introduction
 
People and Object Tracking computer vision algorithms track objects (e.g., humans and vehicles) in camera images for visual surveillance purposes. They are widely used in sports video analysis, intelligent video surveillance, traffic crossroad monitoring, and pedestrian monitoring.
 
01.png
02.png
 
Person Tracking with Discriminative Appearance Modeling API
 
 
The Novelty
 
Our algorithm can handle several challenging cases in tracking applications, such as occlusion, pose changes, illumination changes, fast motion, background clutter, and distractors with similar appearance. It works well across different resolutions and scenes.
 
03.png 
04.png 
05.png 

 

Publications
 
This work has been published in the following journals:
1. Yuwei Wu, Mingtao Pei, Min Yang, Junsong Yuan, and Yunde Jia. Robust Discriminative Tracking via Landmark-based Label Propagation. IEEE Transactions on Image Processing (TIP), 2015
2. Min Yang, Yuwei Wu, Mingtao Pei, Bo Ma, and Yunde Jia. Online Discriminative Tracking with Active Example Selection. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2015
 
Techniques
 

We provide a powerful online discriminative tracking algorithm based on Laplacian Regularized Least Squares (LapRLS) [1]. The tracking algorithm consists of two steps. First, a manifold-regularized semi-supervised learning method (i.e., LapRLS) learns a robust classifier to detect the target object. Second, an active example selection approach automatically selects the most informative examples for LapRLS, keeping the classifier's confidence high. An overview of the approach is shown below:
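To make the first step concrete, here is a minimal sketch of a LapRLS classifier: it solves the standard manifold-regularized least-squares problem in closed form over labeled and unlabeled examples. This is an illustrative implementation, not the published code; the RBF kernel, kNN graph construction, function names, and hyperparameter values are our own assumptions.

```python
import numpy as np

def lap_rls(X, y, labeled_mask, gamma_A=1e-2, gamma_I=1e-2, sigma=1.0, knn=5):
    """Laplacian Regularized Least Squares (LapRLS) sketch.

    X: (n, d) array of labeled + unlabeled examples.
    y: (n,) labels (+1 target, -1 background); entries for unlabeled rows
       are ignored.
    labeled_mask: boolean (n,) marking which rows are labeled.
    Returns a scoring function f(Z) for new examples.
    """
    n = X.shape[0]
    l = int(labeled_mask.sum())
    # Pairwise squared distances and RBF kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    # kNN similarity graph -> unnormalized graph Laplacian L = D - W.
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(sq[i])[1:knn + 1]       # skip self (distance 0)
        W[i, idx] = np.exp(-sq[i, idx] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                        # symmetrize
    Lap = np.diag(W.sum(1)) - W
    # Closed-form LapRLS solution:
    #   alpha = (J K + gamma_A * l * I + gamma_I * l / n^2 * L K)^{-1} J y
    # where J zeroes out the loss on unlabeled rows.
    J = np.diag(labeled_mask.astype(float))
    y_masked = np.where(labeled_mask, y, 0.0)
    A = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n ** 2) * (Lap @ K)
    alpha = np.linalg.solve(A, y_masked)
    # Score new examples by kernel expansion over the training set.
    return lambda Z: np.exp(
        -((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2)
    ) @ alpha
```

The unlabeled examples only enter through the Laplacian term, which pushes the decision function to be smooth along the data manifold; this is what lets the tracker exploit unlabeled candidate patches collected during tracking.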

06.png 

Performance
 
We evaluated this API against 11 state-of-the-art methods on a recent popular benchmark [2], in which each tracker is tested on 51 challenging videos (more than 29,000 frames). The state-of-the-art trackers are the TLD tracker [3], tracking with Multiple Instance Learning (MIL) [4], Visual Tracking Decomposition (VTD) [5], the Struck method [6], the Sparsity-based Collaborative Model (SCM) [7], Laplacian Ranking Support Vector Tracking (LRSVT) [8], Compressive Tracking (CT) [9], Structural Part-based Tracking (SPT) [10], Least Soft-threshold Squares Tracking (LSST) [11], Randomized Ensemble Tracking (RET) [12], and tracking with Online Non-negative Dictionary Learning (ONNDL) [13]. Comparison with these state-of-the-art trackers on the comprehensive benchmark demonstrates that our tracking algorithm is more effective and accurate.
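Such benchmark comparisons are typically scored with success plots: the per-frame bounding-box overlap (intersection-over-union) between the tracker output and ground truth is thresholded, and the area under the resulting curve summarizes accuracy. The sketch below illustrates that scoring; the function names and the threshold grid are our own illustrative choices, not part of the benchmark's code.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred, gt, thresholds=np.linspace(0, 1, 21)):
    """Area under the success plot: the fraction of frames whose overlap
    exceeds each threshold, averaged over all thresholds."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred, gt)])
    return np.mean([(overlaps > t).mean() for t in thresholds])
```

A precision plot (fraction of frames whose center-location error falls below a pixel threshold) is computed analogously from box centers.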
 
07.png

Video of Demos
 
The following videos show the performance of our approach (bold, red box) against the other approaches mentioned above.
 


 
Pose Variations
Occlusion
Motion Blur
Background Clutter
 


 
 
 
Real Time Visual Object Tracking
 
We also provide a high-performance real-time algorithm for fast person or object tracking. It runs at 0.031 seconds per frame (about 32 fps). The algorithm uses histograms to model the feature distribution, together with a feature selection mechanism that discards less discriminative features and improves feature quality. These two steps significantly accelerate feature matching while improving tracking accuracy and robustness.
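The exact features and selection rule are not specified above, so the sketch below illustrates the general idea with intensity histograms, a log-likelihood-ratio feature (bin) selection step, and Bhattacharyya-coefficient matching. All function names, the bin count, and the selection criterion are illustrative assumptions, not the shipped implementation.

```python
import numpy as np

def histogram(patch, bins=16):
    """Normalized intensity histogram of an image patch (values in [0, 256))."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def select_features(target_hist, background_hist, keep=8):
    """Keep the `keep` most discriminative histogram bins, ranked by the
    magnitude of the log-likelihood ratio between target and background."""
    eps = 1e-6
    ratio = np.abs(np.log((target_hist + eps) / (background_hist + eps)))
    return np.argsort(ratio)[::-1][:keep]

def bhattacharyya(p, q):
    """Bhattacharyya coefficient: similarity of two normalized histograms."""
    return np.sum(np.sqrt(p * q))

def match_score(target_hist, candidate_patch, selected, bins=16):
    """Score a candidate patch against the target, using only selected bins."""
    c = histogram(candidate_patch, bins)
    p, q = target_hist[selected], c[selected]
    sp, sq = p.sum(), q.sum()
    if sp == 0 or sq == 0:
        return 0.0
    return bhattacharyya(p / sp, q / sq)
```

Restricting matching to the selected bins is what yields the speedup: fewer bins mean cheaper comparisons per candidate window, and dropping bins that the background shares with the target also reduces distraction from similar-looking clutter.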
 
 
Video of Demos
 
 
The following videos show the performance of our approach (bold, red box) against other approaches.
 
 
Pose Variations
Motion Blur
Illumination Changes
Background Clutter
 
 
 
References
 
[1] M. Belkin, P. Niyogi, and V. Sindhwani, “Manifold regularization: A geometric framework for learning from labeled and unlabeled examples,” Journal of Machine Learning Research, vol. 7, pp. 2399-2434, 2006.
 
[2] Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2411-2418.
 
[3] Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: Bootstrapping binary classifiers by structural constraints,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 49-56.
 
[4] B. Babenko, M.-H. Yang, and S. Belongie, “Robust object tracking with online multiple instance learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1619-1632, 2011.
 
[5] J. Kwon and K. Lee, “Visual tracking decomposition,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 1269-1276.
 
[6] S. Hare, A. Saffari, and P. H. Torr, “Struck: Structured output tracking with kernels,” in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 263-270.
 
[7] W. Zhong, H. Lu, and M.-H. Yang, “Robust object tracking via sparsity-based collaborative model,” IEEE Transactions on Image Processing, vol. 23, no. 5, pp. 2356-2368, 2014.
 
[8] Y. Bai and M. Tang, “Robust tracking via weakly supervised ranking SVM,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 1854-1861.
 
[9] K. Zhang, L. Zhang, and M.-H. Yang, “Fast compressive tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 10, pp. 2002-2015, 2014.
 
[10] R. Yao, Q. Shi, C. Shen, Y. Zhang, and A. van den Hengel, “Part-based visual tracking with online latent structural learning,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2363-2370.
 
[11] D. Wang, H. Lu, and M.-H. Yang, “Least soft-threshold squares tracking,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2371-2378.
 
[12] Q. Bai, Z. Wu, S. Sclaroff, M. Betke, and C. Monnier, “Randomized ensemble tracking,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2040-2047.
 
[13] N. Wang, J. Wang, and D.-Y. Yeung, “Online robust non-negative dictionary learning for visual tracking,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 657-664.