
Workshop: Data Centric AI

Data Agnostic Image Annotation


Visual identification of objects using cameras requires precise detection, localization, and recognition of the items in the field of view. Visual identification is challenging when the objects look identical and the features of distinct entities are indistinguishable, even with state-of-the-art computer vision techniques, and it becomes significantly harder when the objects themselves carry few geometric and photometric features. To address this issue, we design and evaluate a novel visual sensing system that uses optical beacons (LEDs, in our case) to promptly locate each of the tightly spaced objects and track them across scenes. Such techniques can help create large, precisely annotated data sets that improve the performance of deep learning models on tasks such as localization and segmentation. As one use case, we localize LEDs using a classical communication algorithm to serve as an automated annotation tool; to verify it, we created a data set of 11,000 images and micro-benchmarked the localization task against the state-of-the-art (SOTA) object detection model YOLO v3.
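The abstract does not specify the beacon-detection algorithm, but the core idea of using a modulated LED as an annotation cue can be illustrated with a minimal sketch: difference a frame where the LED is on against one where it is off, threshold the residual, and emit a bounding box around the bright pixels. The function name, threshold value, and padding below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def locate_beacon(frame_on, frame_off, threshold=50, box_pad=2):
    """Locate a blinking LED by differencing an 'on' and an 'off' frame.

    Hypothetical sketch: returns an (x_min, y_min, x_max, y_max) box
    around pixels that brightened by more than `threshold`, or None
    if no pixel changed enough.
    """
    # Signed difference so a darkening pixel cannot trigger detection.
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(xs.min()) - box_pad, int(ys.min()) - box_pad,
            int(xs.max()) + box_pad, int(ys.max()) + box_pad)

# Synthetic example: a dark 64x64 scene with one LED that lights up.
off = np.full((64, 64), 10, dtype=np.uint8)
on = off.copy()
on[30:33, 40:43] = 255              # LED "on" pixels
print(locate_beacon(on, off))       # -> (38, 28, 44, 34)
```

In a real pipeline, the LED would blink a known temporal code, so boxes detected this way could be matched to object identities and written out as annotations without manual labeling.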