FOSS4G NA 2023

Realtime edge processing of drone imagery for search-and-rescue

We propose a FOSS-based machine learning workflow for search-and-rescue operations, automating preliminary identification in drone footage. This streamlines human review, improves efficiency, expands coverage, and reduces response time.


This talk will discuss how the currently manual and inefficient process of searching drone footage for people in need of rescue can be optimized using free and open-source implementations of state-of-the-art machine learning (ML) models and other computer vision techniques, allowing rescuers to search a larger geographic area with greater accuracy in less time. Below, we describe current practices, the constraints of this problem space, and our proposed solution.

During search-and-rescue (SAR) missions, ground searchers and drone operators are assigned a geographic area in which to search for people. Currently, drone flights are typically flown “hero” style: an operator actively pilots the drone forward-facing so they can avoid obstacles while simultaneously watching the footage for signs of a person. The footage is also streamed or recorded and sent to base camp for other SAR members to review. This means that 1) flights are not as efficient as they could be, since some areas may be covered many times and others not at all, and 2) one minute of drone footage takes, at a minimum, one minute of staff time to review. In practice, a single minute of footage can translate to several minutes of review time.

We propose partially automating this process by using machine learning models to scan the drone footage in near real time. Edge devices are crucial here: running inference on board avoids excessive data transmission and allows for immediate action. These devices have limitations, however, including restricted memory and compute capabilities, and they may lack a GPU. Despite these constraints, processing at the edge offers significant advantages in speed and efficiency.

An additional important consideration with automation is that the cost of a false negative (i.e., missing a person entirely) is far higher than that of a false positive. Our approach is therefore designed to reduce the volume of footage requiring review, not to eliminate the need for human oversight.

Our proposed solution offers a promising approach to optimizing SAR operations using only free and open-source software. By leveraging machine learning and computer vision techniques, we can improve the efficiency and effectiveness of these critical missions, potentially saving lives and resources. We look forward to sharing our findings and discussing the potential of this technology in the field of SAR.