Dragoon Disaster Response Robot

Dragoon in its intended operating environment

Overview

Dates: August 2020 to Present

The cornerstone of the Masters of Robotic Systems Development program is the MRSD Project. My group, HOWD-E Robotics (Hazardous Operations Within Disaster Environments), worked with Draper Laboratory to develop Dragoon, a room-mapping and human-detection robot intended for deployment in collapsed-building scenarios.

Performance Requirements of Dragoon

The system:

  • Will detect people in the room in real time with less than 500 ms latency
  • Will localise the detected people in the robot’s field of view up to 8 m away with no obfuscation and 3 m away with obfuscation
  • Will work in low-visibility lighting of as little as 150 lux, in smoke obfuscation, and with up to 25% partial occlusion
  • Will visualise the room geometry, obstructions, humans, and robot pose in 2D for the user, up to 10 m from the robot, with an update rate of 0.5 Hz
  • Will provide a live feed from the robot to the user for teleoperation at a 10 Hz data rate
  • Will be controlled remotely by the user in teleoperation with a tactile controller at a range of up to 40 m with line of sight
  • Will be powered by a self-contained energy source and run for at least 20 minutes per full charge
  • Will record mission and critical data logs covering at least the last 20 minutes
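
One convenient way to keep these targets in view during testing is to encode them as plain data that validation scripts can check against. The sketch below is only an illustration of that idea; the structure and names are assumptions made for this example rather than anything from the Dragoon codebase, but the values are the requirements listed above.

    # Illustrative only: the performance targets above encoded as data so that
    # validation scripts could assert against them. Names are invented for this
    # sketch and are not from the Dragoon codebase.
    PERFORMANCE_TARGETS = {
        "detection_latency_s":        0.5,   # real-time human detection
        "detection_range_clear_m":    8.0,   # no obfuscation
        "detection_range_obscured_m": 3.0,   # with obfuscation
        "min_lighting_lux":           150,   # low-visibility lighting
        "max_occlusion_fraction":     0.25,  # partial occlusion
        "map_range_m":                10.0,  # 2D visualisation distance
        "map_update_rate_hz":         0.5,
        "live_feed_rate_hz":          10.0,  # teleoperation feed
        "teleop_range_m":             40.0,  # line-of-sight control
        "min_runtime_min":            20,    # per full charge
        "log_window_min":             20,    # mission / critical data logs
    }

    def latency_ok(measured_latency_s):
        """Example check of a measured detection latency against the target."""
        return measured_latency_s < PERFORMANCE_TARGETS["detection_latency_s"]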
Functional architecture showing the relationships between Dragoon’s functional requirements.
Dragoon’s cyber-physical architecture, describing the relationships between the physical and electronic aspects of the system.

My Role

All four members of the team work across many aspects of the Dragoon project. At the time of writing, my main areas of focus are electrical hardware design and validation, autonomy components, and software architecture. I also developed the systems engineering architectures for the overall system, which are shown above.

Process

As of the end of the Spring 2021 semester, the system is fully constructed and functioning. The video below describes our Spring Validation Demonstration and details the performance and structure of the system.

Here are some pictures from the Gazebo simulation I have been working on to develop autonomous mobility capabilities.

After the Gazebo physics and controllers were developed, I could start mapping the environment using the LIDAR scans.
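
As a rough illustration of this step, the sketch below reads the simulated LIDAR output in ROS and unpacks each cloud into Cartesian points. The /velodyne_points topic name and PointCloud2 message type are assumptions about the simulation setup made for this example, not a description of our actual nodes.

    #!/usr/bin/env python
    # Minimal sketch (assumed ROS 1 setup): subscribe to the simulated LIDAR
    # and unpack each cloud into (x, y, z) points. Topic name is an assumption.
    import rospy
    import sensor_msgs.point_cloud2 as pc2
    from sensor_msgs.msg import PointCloud2

    def on_cloud(msg):
        # Skip NaN returns and keep only the Cartesian fields.
        points = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
        rospy.loginfo("Received %d LIDAR points", len(points))

    if __name__ == "__main__":
        rospy.init_node("sim_lidar_listener")
        rospy.Subscriber("/velodyne_points", PointCloud2, on_cloud, queue_size=1)
        rospy.spin()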

Point clouds processed from the simulated LIDAR are used to create an occupancy grid. Note that on the real robot these grids are produced by our online SLAM, which I am not running in the simulation for performance reasons.
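
For a sense of what that conversion involves, here is a minimal sketch that projects a LIDAR point cloud onto a 2D occupancy grid by binning points into cells. The 0.1 m resolution, 10 m extent, and height filter are illustrative values chosen for this example; it stands in for the grids our online SLAM produces on the real robot rather than reproducing that pipeline.

    import numpy as np

    def points_to_occupancy(points_xyz, resolution=0.1, extent=10.0):
        """Bin LIDAR points into a 2D occupancy grid centred on the robot.

        points_xyz: (N, 3) array of points in the robot frame.
        resolution: cell size in metres (illustrative value).
        extent:     half-width of the grid in metres (illustrative value).
        """
        cells = int(2 * extent / resolution)
        grid = np.zeros((cells, cells), dtype=np.uint8)

        # Keep points within the grid extent and drop ground/ceiling returns.
        xy = points_xyz[:, :2]
        z = points_xyz[:, 2]
        mask = (np.abs(xy) < extent).all(axis=1) & (z > 0.05) & (z < 2.0)

        # Convert metric coordinates to cell indices and mark those cells occupied.
        idx = ((xy[mask] + extent) / resolution).astype(int)
        idx = np.clip(idx, 0, cells - 1)
        grid[idx[:, 1], idx[:, 0]] = 100  # common convention: 100 = occupied
        return grid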