RoboScout Research Highlights
The RoboScout team has three main research thrusts.
Category 1, AI & Perception, is led by Manocha's team, which has been using large vision-language models to create synthetic images for data augmentation (a sketch of this approach follows below).
Category 2, Medical Trauma & Sensors, has been performing benchtop testing of Time-of-Flight (i.e., depth) cameras and collecting sensor data on trauma manikins with the aid of Lynch and his C-STARS lab.
Category 3, Robotics & Autonomy, focuses on hardware for our UAV (Chimera) and UGV (Spot) and is currently developing an autonomy stack for UAV aerial detection in outdoor environments.
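To make the Category 1 augmentation idea concrete, the sketch below generates synthetic casualty-scene images with an off-the-shelf text-to-image diffusion pipeline. This is a minimal illustration only: the Hugging Face diffusers library, the checkpoint name, and the prompts are our assumptions, not the team's actual vision-language tooling.

```python
# Minimal sketch: synthetic image generation for data augmentation.
# Library, checkpoint, and prompts are assumptions for illustration;
# RoboScout's actual generation pipeline is not described in this summary.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical checkpoint choice
    torch_dtype=torch.float16,
).to("cuda")

prompts = [
    "aerial view of a training manikin lying in an open grass field",
    "overhead view of a person lying prone on pavement, late afternoon light",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"synthetic_{i:03d}.png")  # later mixed into the training set
```

Synthetic frames like these would be blended with real camera data to broaden the training distribution beyond what field collections alone provide.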
Category 1: AI & Perception
Develop, train, and implement onboard perception algorithms for automated detection and localization of isolated, unoccluded casualties.
Utilize camera-based training data of unoccluded, isolated manikins and human actors to train machine learning models and demonstrate detection performance.
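As a hedged sketch of how such training might look, the snippet below fine-tunes a pretrained object detector on manikin and actor imagery. The Ultralytics YOLO API is one common choice; the dataset config and hyperparameters here are illustrative assumptions, not the team's published pipeline.

```python
# Sketch: fine-tune an off-the-shelf detector for casualty detection and
# localization. Model choice, dataset layout, and settings are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # COCO-pretrained weights as a starting point
model.train(
    data="casualty.yaml",       # hypothetical config pointing at images of
    epochs=50,                  # unoccluded manikins and human actors
    imgsz=640,
)

results = model("frame_0001.jpg")   # inference on a held-out frame
for box in results[0].boxes:
    print(box.xyxy, box.conf)       # pixel bounding box and confidence
```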
Category 2: Medical Trauma & Sensors
Identify, test, and validate a multi-modal, non-contact sensing paradigm for assessment and labeling of individual, unoccluded casualties at short ranges. Sensing modalities include optical, thermal, and infrared imaging for injury detection, with data collected on unoccluded, solitary trauma manikins.
Participate in data collection by providing trauma manikins and operating their injury-simulation functions.
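A benchtop range check on a Time-of-Flight camera might look like the sketch below, which samples the center-pixel depth and reports its mean and spread against a target at a known distance. It assumes a librealsense-compatible device and the pyrealsense2 SDK; the specific ToF units under test are not named in this summary.

```python
# Sketch: benchtop ToF depth check against a target at a known distance.
# Assumes a librealsense-compatible camera; actual test hardware may differ.
import statistics
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    samples = []
    for _ in range(100):                     # ~3 s of frames at 30 fps
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        samples.append(depth.get_distance(320, 240))  # meters at center pixel
    print(f"mean range: {statistics.mean(samples):.3f} m, "
          f"stdev: {statistics.stdev(samples):.4f} m")
finally:
    pipeline.stop()
```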
Category 3: Robotics & Autonomy
Assemble, configure, and deploy an autonomous fleet of sensor-equipped robotic air (UAV) and ground (UGV) platforms for mass casualty incidents.
Provide robotic sensor platforms, sensor data collection, a data repository, and additional support as needed.
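At the flight-control layer, the outermost loop of such an autonomy stack could be as simple as the sketch below: connect to the vehicle, arm, climb to a survey altitude, and hand camera frames to the detector. It uses MAVSDK-Python against a simulated PX4 vehicle; the team's actual stack and the Chimera flight hardware are not described here, so treat everything below as illustrative.

```python
# Sketch: minimal UAV mission loop with MAVSDK-Python (assumed tooling).
# The connection address is the PX4 SITL default, not Chimera's.
import asyncio
from mavsdk import System

async def run():
    drone = System()
    await drone.connect(system_address="udp://:14540")  # PX4 SITL default

    async for state in drone.core.connection_state():
        if state.is_connected:                  # wait until a vehicle is found
            break

    await drone.action.arm()
    await drone.action.set_takeoff_altitude(20.0)   # survey altitude, meters
    await drone.action.takeoff()
    await asyncio.sleep(15)      # placeholder for the aerial-detection loop
    await drone.action.land()

asyncio.run(run())
```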
C-STARS Simulation Center @ University of Maryland School of Medicine
Get in Touch with Us
Have questions or want to get involved? Please fill out the contact form.