Poster
Do Different Tracking Tasks Require Different Appearance Models?
Zhongdao Wang · Hengshuang Zhao · Ya-Li Li · Shengjin Wang · Philip Torr · Luca Bertinetto

Thu Dec 09 12:30 AM -- 02:00 AM (PST)

Tracking objects of interest in a video is one of the most popular and widely applicable problems in computer vision. However, over the years, a Cambrian explosion of use cases and benchmarks has fragmented the problem into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and novel approaches proposed by the community are now usually specialised to fit only one specific setup. To understand to what extent this specialisation is necessary, in this work we present UniTrack, a solution that addresses five different tasks within the same framework. UniTrack consists of a single, task-agnostic appearance model, which can be learned in a supervised or self-supervised fashion, and multiple "heads" that address individual tasks and do not require training. We show how most tracking tasks can be solved within this framework, and that the same appearance model can be used to obtain results that are competitive against specialised methods for most of the tasks considered. The framework also allows us to analyse appearance models obtained with the most recent self-supervised methods, thus extending their evaluation and comparison to a larger variety of important problems.
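The design described above (one shared appearance model feeding several training-free task heads) can be sketched in a few lines. This is purely illustrative and not the paper's implementation: the fixed random projection standing in for the learned feature extractor, and the cosine-similarity association head, are assumptions made for the sake of a runnable example.

```python
import numpy as np

class AppearanceModel:
    """Stand-in for the shared, task-agnostic appearance model.
    In UniTrack this would be a learned (supervised or self-supervised)
    feature extractor; here a fixed random projection is used purely
    for illustration."""

    def __init__(self, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((3, dim))

    def embed(self, patch):
        # patch: (H, W, 3) array -> pooled, L2-normalised feature vector
        feat = patch.reshape(-1, 3).mean(axis=0) @ self.proj
        return feat / (np.linalg.norm(feat) + 1e-8)

def matching_head(query_feat, candidate_feats):
    """A training-free 'head': associate a query with candidates by
    cosine similarity of the shared features (features are unit-norm,
    so a dot product suffices)."""
    sims = candidate_feats @ query_feat
    return int(np.argmax(sims))
```

The point of the sketch is that the head contains no learned parameters: swapping in a different head (e.g. feature correlation for mask propagation instead of similarity-based association) reuses the same `embed` output unchanged.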

#### Author Information

##### Luca Bertinetto (University of Oxford)

Luca Bertinetto is a PhD candidate in the Torr Vision Group at the University of Oxford. The main focus of his doctorate is the problem of agnostic object tracking, which he likes to tackle using simple and effective approaches. Before getting lost among the spires of Oxford, he obtained a joint MSc in Computer Engineering from the Polytechnic University of Turin and Télécom ParisTech. He has published at CVPR and NIPS and reviewed for PAMI.