Benchmarking Scene Understanding and Active & Continuous Learning for Robotic Vision
Powered by the Australian Centre for Robotic Vision, these challenges are designed to evaluate and benchmark critical aspects of robotic vision, pushing the boundaries of what's possible in the field.
We develop a set of new benchmark challenges specifically for robotic vision, combining the variety and complexity of real-world data with the flexibility of synthetic graphics and physics engines.
The Robotic Vision Scene Understanding Challenge evaluates how well a robotic vision system can understand the semantic and geometric aspects of its environment. The challenge comprises two tasks.
We recently concluded an iteration of the challenge as a CVPR2023 workshop. Please see our challenge page for further details. Details on past challenges can be found at the following links: CVPR2022, CVPR2021.
Our probabilistic object detection challenge requires participants to detect objects in video data from high-fidelity simulation. Uniquely, our evaluation metric rewards accurate estimates of spatial and semantic uncertainty using probabilistic bounding boxes. To participate and for more information about the dataset, read more here.
There is currently no active PrOD challenge with prize money, but we do maintain a continuous evaluation server with its own test set to promote research in the field of probabilistic object detection. The continuous evaluation server is running here.
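As a rough illustration of what a probabilistic bounding box can look like, the sketch below represents a detection by Gaussian-distributed box corners (spatial uncertainty) plus a full probability distribution over class labels (semantic uncertainty). The class and field names here are illustrative assumptions, not the official submission format of the challenge.

```python
import numpy as np

class ProbabilisticDetection:
    """Illustrative sketch of a detection with spatial and semantic uncertainty.

    This structure and its names are assumptions for explanation only; consult
    the challenge page for the actual required submission format.
    """

    def __init__(self, corner_means, corner_covs, class_probs):
        # corner_means: (2, 2) array -- (x, y) means of the top-left and
        #               bottom-right box corners
        # corner_covs:  (2, 2, 2) array -- one 2x2 covariance matrix per
        #               corner, expressing spatial uncertainty
        # class_probs:  (num_classes,) array -- semantic uncertainty as a
        #               full probability distribution over labels
        self.corner_means = np.asarray(corner_means, dtype=float)
        self.corner_covs = np.asarray(corner_covs, dtype=float)
        self.class_probs = np.asarray(class_probs, dtype=float)
        assert np.isclose(self.class_probs.sum(), 1.0), "label distribution must sum to 1"

    def label_entropy(self):
        # Shannon entropy of the label distribution (higher = less certain)
        p = self.class_probs[self.class_probs > 0]
        return float(-(p * np.log(p)).sum())

# Example: a detection that is fairly sure of its location but hedges its label
det = ProbabilisticDetection(
    corner_means=[[10.0, 20.0], [50.0, 80.0]],
    corner_covs=[np.eye(2) * 4.0, np.eye(2) * 9.0],
    class_probs=[0.7, 0.2, 0.1],
)
print(f"label entropy: {det.label_entropy():.3f}")
```

A detector that reports well-calibrated corner covariances and label distributions, rather than a single box and a single confidence score, is exactly what a probabilistic evaluation metric can reward.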
Large computer vision challenges and competitions such as ILSVRC and COCO have had a significant influence on recent advances in object recognition, object detection, semantic segmentation, image captioning, and visual question answering. These challenges posed motivating problems to the research community and provided datasets and evaluation metrics that allowed different approaches to be compared in a standardised way.
However, visual perception for robotics faces challenges that are not well covered or evaluated by existing benchmarks: calibrated uncertainty estimation, continuous learning for domain adaptation and the incorporation of novel classes, active learning, and active vision.
There is currently a lack of meaningful, standardised evaluation protocols and benchmarks for these research challenges. This is a significant roadblock for the evolution of robotic vision and impedes reproducible and comparable research.
We believe that by posing a new robotic vision challenge to the research community, we can motivate computer vision and robotic vision researchers around the world to develop solutions that lead to more capable, more robust, and more widely applicable robotic vision systems.