International Journal of Chromatography and Separation Techniques (ISSN: 2577-218X)

Research Article

"Object Tracing Based On Exponential Function"

Abdulla Mohamed1*, Phil F Culverhouse1, Angelo Cangelosi1, Chenguang Yang2

1Centre for Robotics & Neural Systems, Plymouth University, UK

2Zienkiewicz Centre for Computational Engineering, Swansea University, UK

*Corresponding author: Abdulla Mohamed, Centre for Robotics & Neural Systems, Plymouth University, UK. Tel: +447761624073; Email:

Received Date: 27 October, 2018; Accepted Date: 20 November, 2018; Published Date: 29 November, 2018

Abstract

In football broadcasting and other sports, cameras are required to track the ball or the players at all times. This paper presents work on object tracking with a movable camera. The work is intended for integration into an active stereo vision platform that uses vergence to retrieve the 3D position of a target. This requires a smooth tracking controller that keeps the object within the field of view and avoids generating large motion blur, which degrades image quality or causes features in the image to be lost. The controller is based on an exponential function that generates a smooth trajectory whose speed decreases as the camera approaches the centroid of the object. The exponential function keeps the target at the centre of the image with an accuracy of ±5 pixels.

Keywords: Control System; Exponential Function; Object Tracking; Vision

1.                   Introduction

Visual tracking is a classical problem in computer vision with many applications. Classical visual tracking uses a static camera, with the object tracked within the camera's field of view; this setup is common in industry, especially on conveyor belts. Many algorithms have been developed for the static-camera case, such as background subtraction, which assumes the background is static and only the foreground changes [1,2]. Background subtraction has several disadvantages, such as sensitivity to illumination and lighting changes [3] and to varying backgrounds, for example moving trees or a slowly moving foreground; these two issues were studied in [4]. Another tracking approach is optical flow. Optical flow depends on extracting features of the target, for example with corner detection or the Scale-Invariant Feature Transform (SIFT) [5], and then tracking these features in subsequent frames [6]. Much work has been done on this approach to improve the quality and the speed of the tracking [7-9]. In sport, the camera tracks the players or the ball, so the camera itself is moving; similarly, a humanoid's head moves while tracking a moving object. The tracking problem becomes more complicated once a moving camera is introduced, because it also becomes a control problem: a controller must be designed to suit the camera's specifications. Much work has been done on this problem using different techniques, depending on the required task. Kim and Kweon (2011) [10] implemented tracking of multiple targets using homography-based motion detection [11] to detect each individual target, with an online boosting tracker integrated to associate the separate targets.
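To make the static-camera baseline concrete, the following is a minimal running-average background-subtraction sketch in pure NumPy. The learning rate, threshold, and the toy 8x8 scene are illustrative choices, not parameters from the literature cited above.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: slowly adapts to illumination changes."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels that differ from the background by more than `thresh` are foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy 8x8 greyscale scene: static background at intensity 100,
# with a 2x2 moving "object" at intensity 200.
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[3:5, 3:5] = 200.0

mask = foreground_mask(bg, frame)
print(mask.sum())  # -> 4 (the 2x2 object)
```

The running average illustrates the weakness noted above: a foreground that moves slowly is gradually absorbed into the background model.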

In [12], a detection algorithm for tracking an object with a moving camera was studied. The algorithm is based on feature correspondences between frames; the motion properties are then computed from the feature matches. Hu et al. (2015) [13] studied multiple-object detection with a moving camera; their algorithm detects features in the frames and classifies them into background and foreground, where the foreground represents the targets. In [14], an algorithm for object tracking in 3D coordinates was studied. The controller was based on fuzzy logic, which allowed the object to be tracked in 3D coordinates relative to the robot; the focus was on controlling the motors attached to the camera so as to keep the target within the field of view and at the centre of the camera. In this paper, the work focuses on the control system that drives the camera during tracking, with the primary goal of keeping the target within the field of view. We are interested in keeping the centroid of the target aligned with the centre of the image throughout the tracking process. The paper therefore concentrates on the control side of the system, integrating an exponential function with the motor controller to provide smooth object tracking and to limit the blur generated by camera motion. The paper is organized as follows: the next section introduces the methodology of the control system; the experimental setup is described in section three; the results and discussion are presented in section four; and, finally, the conclusions and future developments close the work.
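The exponential control law itself is not reproduced in this extract, but the abstract and Figures 7-10 indicate a velocity command that decays exponentially to zero as the target's centroid approaches the image centre, with gains lambda between 0.0010 and 0.0020. A minimal sketch of one such law follows; the specific form w = w_max * (1 - exp(-lambda * |e|)) and the 60 rpm ceiling are assumptions for illustration, not the authors' published equation.

```python
import math

def track_velocity(error_px, lam=0.0015, w_max=60.0):
    """Map the pixel error between target centroid and image centre to a
    motor velocity command (rpm). The commanded speed decays exponentially
    to zero near the centre, which smooths the motion and limits blur."""
    speed = w_max * (1.0 - math.exp(-lam * abs(error_px)))
    return math.copysign(speed, error_px)

print(track_velocity(0))     # 0.0: target centred, no motion
print(track_velocity(500))   # large error: high (but bounded) speed
print(track_velocity(-500))  # same magnitude, opposite direction
```

A larger lambda makes the speed rise more steeply with error, which tracks faster but brakes more abruptly near the centre; this is the trade-off swept in Figures 7-9.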

1.1                        Background and Preliminaries

In this paper, object tracking was implemented on an active stereo vision platform. The platform is used to study dynamic vergence vision, which depends on tracking the object. The platform has 5 DOF: each camera pans and tilts independently, and both share the same active baseline. The platform is shown in (Figure 1).

The cameras used in the rig are colour Point Grey Flea3 (FL3-U3-88S2C-C) cameras with 8.8 MP resolution and a frame rate of 21 FPS. The sensor is a Sony IMX121 with a resolution of 4096 x 2160, a 12-bit ADC, and a pixel size of 1.55 µm. The camera has a global reset shutter with speeds from 0.021 ms to 1 s.

Dynamixel XL430-W250-T motors were used. These motors have a 12-bit absolute magnetic encoder, which gives a resolution of 0.088°. The maximum speed of the motor is 60 rpm. The motors communicate with the PC through a USB2Dynamixel dongle, with a 12 V power supply. The control system was built with the Robot Operating System (ROS) [15] on a desktop PC running Ubuntu 16.04. The PC has an Intel Core i7-7700K 4.2 GHz quad-core processor with DDR4 3200 MHz RAM.
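As a quick consistency check, the quoted 0.088° step size corresponds to 4096 encoder counts per revolution, i.e. a 12-bit position count:

```python
# The Dynamixel XL430-W250-T reports absolute position over 4096 counts
# per revolution (a 12-bit count), giving the 0.088 deg/step quoted above.
positions = 2 ** 12              # 4096 encoder counts per revolution
resolution_deg = 360.0 / positions
print(round(resolution_deg, 3))  # -> 0.088
```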

1.2      Camera Model

The pinhole camera model is the standard model used to describe a point in space relative to the camera origin (Figure 2). Point P is a world point in front of the camera with coordinates [X, Y, Z] in the camera frame; its perspective projection onto the image plane follows the standard pinhole equations [16].
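The pinhole projection can be sketched as follows. The intrinsics fx, fy, cx, cy below are illustrative values, not a calibration of the Flea3 cameras used in this work.

```python
# Pinhole projection of a camera-frame point P = (X, Y, Z), Z > 0,
# onto the image plane: u = fx*X/Z + cx, v = fy*Y/Z + cy.
fx, fy = 800.0, 800.0   # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0   # principal point in pixels (assumed)

def project(P):
    X, Y, Z = P
    return (fx * X / Z + cx, fy * Y / Z + cy)

print(project((0.1, 0.05, 2.0)))  # -> (360.0, 260.0)
print(project((0.0, 0.0, 1.0)))   # -> (320.0, 240.0), the image centre
```

The controller described above works entirely in this pixel space: the error it drives to zero is the offset of the target's projected centroid from (cx, cy).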

Figure 1: Active stereo vision platform with 5 DOF.

Figure 2: Single camera model.

Figure 3: Motion blur generated due to the fast camera motion.

Figure 4: The block diagram of the object tracking.

Figure 5: The experiment setup with the static target.

Figure 6: Moving object tracking experiment setup.

Figure 7: Angular velocity of the motor at lambda 0.0010.

Figure 8: Angular velocity of the motor at lambda 0.0015.

Figure 9: Angular velocity of the motor at lambda 0.0020.

Figure 10: Exponential function at different lambda values.

Figure 11: Object tracking using cantilever length 200 mm.

Figure 12: Object tracking using cantilever length 400 mm.

Figure 13: Object tracking using cantilever length 500 mm.

1.                   Barnich O, Van Droogenbroeck M (2011) ViBe: A Universal Background Subtraction Algorithm for Video Sequences. IEEE Transactions on Image Processing 20: 1709-1724.

2.                   Ren X, Wang Y (2016) Design of a FPGA hardware architecture to detect real-time moving objects using the background subtraction algorithm. in 2016 5th International Conference on Computer Science and Network Technology (ICCSNT) 428-433.

3.                   Yong X, Jixiang D, Bob Z, Daoyun X (2016) Background modeling methods in video analysis: A review and comparative evaluation. CAAI Transactions on Intelligence Technology 1: 43-60.

4.                   Yuewei L, Yan T, Yu C, Youjie Z, Song W (2017) Visual-Attention-Based Background Modeling for Detecting Infrequently Moving Objects. IEEE Transactions on Circuits and Systems for Video Technology 27: 1208-1221.

5.                   Lowe DG (1999) Object Recognition from Local Scale- Invariant Features. in Proceedings of the International Conference on Computer Vision.

6.                   Kale K, Pawar S, Dhulekar P (2015) Moving object tracking using optical flow and motion vector estimation. in 2015 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions) 1-6.

7.                   Denman S, Fookes C, Sridharan S (2010) Group Segmentation During Object Tracking Using Optical Flow Discontinuities. in 2010 Fourth Pacific-Rim Symposium on Image and Video Technology 270-275.

8.                   Bota S, Nedevschi S (2011) Tracking multiple objects in urban traffic environments using dense stereo and optical flow. in 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC) IEEE 791-796.

9.                   Salmane H, Ruichek Y, Khoudour L (2011) Object tracking using Harris corner points based optical flow propagation and Kalman filter. in 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC).

10.                Won Jin Kim and In-So Kweon (2011) Moving object detection and tracking from moving camera. in 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI).

11.                Loan TTK, Pham XQ, Nguyen HQ, Tan Tri ND, Thai NQ, et al. (2015) Homography-Based Motion Detection in Screen Content. Advances in Computer Science and Ubiquitous Computing 373: 875-881.

12.                Chen Y, Zhang R, Shang L (2014) A Novel Method of Object Detection from a Moving Camera Based on Image Matching and Frame Coupling. PLoS ONE 9.

13.                Hu WC, Chen CH, Chen TY, Huang DY, Wu ZC (2015) Moving object detection and tracking from video captured by moving camera. Journal of Visual Communication and Image Representation 30: 164-180.

14.                Mohamed A, Yang C, Cangelosi A (2016) Stereo Vision based Object Tracking Control for a Movable Robot Head. IFAC-PapersOnLine 49: 155-162.

15.                Quigley M, Ken C, Brian PG, Josh F, Tully F, et al. (2009) ROS: an open-source Robot Operating System. in ICRA Workshop on Open Source Software.

16.                Szeliski R (2009) Computer Vision: Algorithms and Applications.

17.                Garrido-Jurado S, Muñoz-Salinas R, Madrid-Cuevas FJ, Marín-Jiménez MJ (2014) Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition 47: 2280-2292.

Citation: Mohamed A, Culverhouse PF, Cangelosi A, Yang C (2018) Object Tracing Based On Exponential Function. Int J Chromatogr Sep Tech: IJCST-119. DOI: 10.29011/2577-218X.000019
