AI for Competitive Robotics

Gain a decisive edge in competitions like FIRST and VEX. This bootcamp focuses on applying Computer Vision to solve common robotics challenges like object detection and navigation, perfect for teams in Bangalore.

Level

Intermediate

For

Grades 8-12

Duration

3 or 5 Days

What You Will Master

Object Detection with YOLO

Learn the theory behind You Only Look Once (YOLO) and train a model on a custom dataset.
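To preview the core idea behind YOLO: the image is divided into an S×S grid, and the grid cell containing an object's center is responsible for predicting that object's box. A minimal sketch of that assignment (the function name is ours, for illustration only):

```python
def responsible_cell(x_center, y_center, img_w, img_h, grid_size=7):
    """Return the (row, col) of the grid cell responsible for an object
    whose center lies at (x_center, y_center) in pixel coordinates.
    In YOLO, that single cell predicts the object's bounding box."""
    col = int(x_center / img_w * grid_size)
    row = int(y_center / img_h * grid_size)
    # Clamp centers that land exactly on the right/bottom image edge.
    return min(row, grid_size - 1), min(col, grid_size - 1)

# An object centered at (320, 240) in a 640x480 image with a 7x7 grid:
print(responsible_cell(320, 240, 640, 480))  # (3, 3)
```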

Robotics-Specific Datasets

Learn to collect, annotate, and augment image data for game pieces and field elements to build a robust model.

Model Optimization for Edge Devices

Understand techniques like quantization to make your model run efficiently on hardware like a Raspberry Pi or Jetson Nano.
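As a rough illustration of what quantization does: replace 32-bit floats with 8-bit integers plus a scale and zero point, trading a small amount of precision for much less memory and faster inference. A toy sketch in plain Python (real toolchains such as TensorRT or TFLite do this per-tensor or per-channel with calibration data):

```python
def quantize_int8(values):
    """Affine (asymmetric) quantization of floats to int8: map the
    observed [min, max] range onto [-128, 127].  This is the basic
    idea behind post-training quantization for edge devices."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # guard against a constant input
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale + zero_point))) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
approx = dequantize(q, scale, zp)  # close to the originals, within one scale step
```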

Integrating CV with Control Loops

Learn the logic of how to use the output of a vision model (bounding boxes) to guide motor commands and robot behavior.
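The simplest version of this logic is a proportional controller: the horizontal offset of the detected box's center from the frame center becomes the error term that drives steering. A minimal sketch (the function and gain `kp` are illustrative, not a specific robot API):

```python
def steering_from_bbox(bbox, frame_width, kp=0.005):
    """Turn a detection's bounding box into a proportional steering value.

    bbox is (x_min, y_min, x_max, y_max) in pixels.  The horizontal
    offset of the box center from the frame center is the error term
    of a simple P-controller: negative output steers left, positive
    steers right, and zero means the target is dead ahead.
    """
    x_min, _, x_max, _ = bbox
    box_center = (x_min + x_max) / 2.0
    error = box_center - frame_width / 2.0  # pixels off-center
    return kp * error                       # steering command

# A game piece to the right of center yields a positive (turn right) value:
print(steering_from_bbox((440, 100, 520, 200), 640))
```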

The Capstone Project

Custom Game Piece Detector

Students will create a custom dataset of images for a specific robotics game piece (e.g., balls, cubes). They will then train, evaluate, and test a YOLOv8 object detection model that can identify and locate these pieces in real-time from a camera feed, simulating a core task for an autonomous robot.

Key Transformation

Train and deploy a custom object detection model capable of identifying game pieces, and understand how to integrate its output into a robotics control loop.

Course Syllabus

Session 1: Computer Vision for Robotics

Explore how CV is used in autonomous systems, from object detection to SLAM. Set up your development environment and learn the basics of OpenCV for image manipulation.
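To show the pixel math underneath such image manipulation, here is grayscale conversion written out by hand in plain Python; in the session itself you would use OpenCV's `cv2.cvtColor`, which applies the same luminance weights far faster:

```python
def to_grayscale(image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to
    grayscale using the standard luminance weights -- the same math
    cv2.cvtColor performs with COLOR_RGB2GRAY."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

# A 1x2 image: one pure-red pixel and one white pixel.
tiny = [[(255, 0, 0), (255, 255, 255)]]
print(to_grayscale(tiny))  # [[76, 255]]
```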

Session 2: Building Your Custom Dataset

Learn the critical skill of data annotation. Use tools like Roboflow to label images of your game pieces and prepare them for training your object detection model.
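Annotation tools such as Roboflow can export labels in YOLO format: one text line per object containing a class id and a normalized center/size. A small sketch of reading that format back into pixel coordinates (the helper name is ours):

```python
def parse_yolo_label(line, img_w, img_h):
    """Parse one line of a YOLO-format label file.
    Format: '<class_id> <x_center> <y_center> <width> <height>', with
    all four coordinates normalized to [0, 1].  Returns the class id
    and the pixel-space (x_min, y_min, x_max, y_max) box."""
    cls, xc, yc, w, h = line.split()
    xc, w = float(xc) * img_w, float(w) * img_w
    yc, h = float(yc) * img_h, float(h) * img_h
    return int(cls), (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

# A "ball" (class 0) centered in a 640x480 image, a quarter of each
# dimension wide and tall:
print(parse_yolo_label("0 0.5 0.5 0.25 0.25", 640, 480))
# (0, (240.0, 180.0, 400.0, 300.0))
```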

Session 3: Training a YOLOv8 Model

Dive into the practical steps of training a state-of-the-art object detection model on your custom dataset using Google Colab and the Ultralytics library. Evaluate model performance with mAP.
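The metric mAP is built on Intersection over Union (IoU): a prediction typically counts as correct when its overlap with a ground-truth box exceeds a threshold such as 0.5. A minimal IoU sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x_min, y_min, x_max, y_max)
    boxes -- the overlap measure underlying mAP."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 corner: 25 / 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```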

Session 4: From Pixels to Action

Write Python code to process the output of your YOLO model (bounding box coordinates) and translate it into simple directional commands (e.g., "turn left", "move forward") for a simulated robot or a real one via serial communication.
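The decision logic described above can be sketched as a function that maps a bounding box to a discrete command string, with a dead band so the robot drives straight when the piece is roughly centered (the function name and dead-band width are illustrative):

```python
def command_from_detection(bbox, frame_width, dead_band=40):
    """Map a detection's bounding box to a discrete drive command.
    If the box center is within +/- dead_band pixels of the frame
    center, the piece is straight ahead; otherwise turn toward it.
    In the session, the returned string would be sent to the robot
    controller over a serial link."""
    x_min, _, x_max, _ = bbox
    offset = (x_min + x_max) / 2.0 - frame_width / 2.0
    if offset < -dead_band:
        return "turn left"
    if offset > dead_band:
        return "turn right"
    return "move forward"

print(command_from_detection((500, 120, 600, 240), 640))  # turn right
print(command_from_detection((280, 120, 360, 240), 640))  # move forward
```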

Explore More Tracks

View All Workshops

Build Your Advantage

Our project-based workshops are designed to give you a tangible, verifiable edge. Enroll now to secure your spot and start building your future.

Contact Us