08-26, 10:00–10:30 (Europe/Rome), General online
We discuss the "Diet Hadrade" codebase, which provides an open-source, lightweight mechanism for leveraging remote sensing imagery and machine learning techniques to aid in humanitarian assistance and disaster response (HADR) in austere environments.
In a disaster scenario (be it an earthquake or an invasion) where communications are unreliable, overhead imagery often provides the first glimpse into what is happening on the ground. Rapidly identifying both vehicles and road networks directly from overhead imagery allows a host of problems to be tackled, such as congestion mitigation, optimized logistics, and evacuation routing. Such challenges often arise in the aftermath of natural disasters, but are also present in crises like the current invasion of Ukraine, where roads are choked with civilians fleeing the fighting.
Automobiles provide an attractive proxy for human population due to their mobile nature and the necessity of population movement in many disaster scenarios. In this project, we deploy the YOLTv5 computer vision object detection codebase to rapidly identify and geolocate vehicles over large areas. Vehicle detections yield significantly greater utility when combined with road network data. We use the CRESI computer vision framework to extract up-to-date road networks with travel time estimates, thus permitting optimized routing. Because CRESI extracts roads using only overhead imagery, flooded areas or obstructed roadways appear as breaks in the extracted road graph; this is crucial in post-disaster scenarios where existing road maps may be out of date and routes suggested by cloud navigation services may be impassable or hazardous.
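To make the routing step concrete, the sketch below shows how a CRESI-style road graph with travel-time edge weights supports fastest-path queries. It is a minimal illustration, not the Diet Hadrade implementation: the file name "road_graph.graphml" and the edge attribute "travel_time_s" are assumptions standing in for whatever schema the exported graph actually uses.

```python
# Minimal routing sketch over an imagery-derived road graph (assumed schema).
import networkx as nx

# Assumption: the CRESI output has been exported to GraphML with a
# travel-time weight on each edge.
G = nx.read_graphml("road_graph.graphml")  # hypothetical output file

def fastest_route(G, source, target, weight="travel_time_s"):
    """Return the minimum-travel-time path between two graph nodes."""
    return nx.shortest_path(G, source, target, weight=weight)

# Because the graph is derived purely from imagery, a flooded or blocked
# segment is simply missing, so routing automatically detours around it.
```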
Diet Hadrade provides a number of graph theory analytics that combine the CRESI road graph with YOLTv5 vehicle locations. We combine the vehicle detections with the road network to infer how congested particular areas are. Congestion information matters in everyday life, but is crucial in disaster response scenarios when roads may become impassable due to both natural phenomena and traffic.
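One simple congestion heuristic of this kind is sketched below: snap each vehicle detection to its nearest road node and inflate travel times on adjacent edges in proportion to the local vehicle count. This is an illustrative simplification under assumed conventions (node attributes "x"/"y", edge attribute "travel_time_s", and the alpha scaling factor), not the exact Diet Hadrade analytic.

```python
# Illustrative congestion heuristic: vehicle counts inflate edge travel times.
import numpy as np
from scipy.spatial import cKDTree

def apply_congestion(G, detections, alpha=0.1):
    """Scale edge travel times by the number of vehicles near each endpoint.

    detections: array of (x, y) vehicle coordinates in the graph's CRS.
    alpha: assumed slowdown per nearby vehicle (10% each).
    """
    nodes = list(G.nodes)
    node_idx = {n: i for i, n in enumerate(nodes)}
    coords = np.array([(G.nodes[n]["x"], G.nodes[n]["y"]) for n in nodes])

    # Assign each detection to its nearest road node, then count per node.
    _, nearest = cKDTree(coords).query(detections)
    counts = np.bincount(nearest, minlength=len(nodes))

    for u, v, data in G.edges(data=True):
        n_vehicles = counts[node_idx[u]] + counts[node_idx[v]]
        data["congested_time_s"] = data["travel_time_s"] * (1 + alpha * n_vehicles)
    return G
```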
We leverage the detailed road graph and vehicle location information to illustrate a number of scenarios, such as bulk evacuation, optimal aid disbursement locations, critical intersections, and the detection and automated avoidance of dangerous locales. These capabilities are presented in an interactive dashboard that computes optimal routes on the fly based on user inputs.
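Two of these analytics can be sketched with standard graph operations on the congestion-weighted graph from above: ranking critical intersections by betweenness centrality, and routing evacuations around flagged hazard nodes. Function names, the hazard_nodes input, and the "congested_time_s" weight are assumptions for illustration, not the Diet Hadrade dashboard API.

```python
# Sketch of two graph analytics: critical intersections and hazard-avoiding routes.
import networkx as nx

def critical_intersections(G, k=10, weight="congested_time_s"):
    """Rank the k nodes whose removal would most disrupt travel-time-optimal routes."""
    centrality = nx.betweenness_centrality(G, weight=weight)
    return sorted(centrality, key=centrality.get, reverse=True)[:k]

def safe_evacuation_route(G, source, target, hazard_nodes, weight="congested_time_s"):
    """Compute a fastest route that avoids all flagged hazard nodes."""
    H = G.copy()
    H.remove_nodes_from(n for n in hazard_nodes if n not in (source, target))
    return nx.shortest_path(H, source, target, weight=weight)
```

In an interactive setting, the same pattern applies: user inputs (origin, destination, hazards) simply re-parameterize the weighted shortest-path query.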
Adam Van Etten is a machine learning researcher with a focus on remote sensing and computer vision. Adam helped found the SpaceNet initiative and ran the SpaceNet 3, 5, and 7 Challenges. Recent research foci include semi-automated dataset generation and the limitations and utility functions of machine learning techniques. Adam created Geodesic Labs in 2018 as a means to explore the interplay between computer vision and graph theory in a disaster response context.