Raster processing on HPC without coding? Sure!
11-20, 16:30–16:55 (Pacific/Auckland), WG126

In this presentation I demonstrate how to use the LUMASS visual modelling environment to develop high-performance parallel raster processing models without writing a single line of code. LUMASS models are scalable and run on laptops as well as distributed-memory systems. https://manaakiwhenua.github.io/LUMASS


Wrangling large and/or many raster datasets on a laptop or small workstation is no fun! Unfortunately, parallelising models or workflows to run on distributed-memory compute clusters requires more than entry-level coding skills. In this presentation I introduce the parallel extension to LUMASS' high-level visual raster processing framework, which enables the development of complex models and workflows without writing a single line of code! Check out this short playlist to get a better idea: https://youtube.com/playlist?list=PL_CsDVZ4IPO-D87TgO0awddJGl2gUYIaD&si=siH6lT_3pwdN3Q-W
LUMASS provides a range of processing components, including map algebra, zonal summaries, terrain attributes, and more. It uses GDAL for 2D raster I/O and NetCDF for data cubes, and it supports SQLite-based raster attribute tables alongside any supported raster format. Furthermore, any external command-line program or script can be integrated into a pipeline to extend its processing capabilities. Programmers will find a Python interface for writing ‘moving window’ functions without having to worry about multi-threading or streaming, which come out of the box! LUMASS is free and open-source software ( https://github.com/manaakiwhenua/LUMASS ).
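To give a flavour of what a user-supplied ‘moving window’ function involves, here is a minimal sketch in plain NumPy. Note that this is NOT LUMASS's Python API: the function and helper names are hypothetical, and the naive loop below stands in for the streaming and multi-threading that LUMASS provides out of the box. The point is that the user only ever writes the per-window logic.

```python
import numpy as np

def window_mean(win):
    """Example per-window function: mean of a neighbourhood.

    This is the kind of function a user would supply; the framework
    (here emulated by apply_moving_window below) takes care of
    handing it one window at a time.
    """
    return float(np.nanmean(win))

def apply_moving_window(raster, func, size=3):
    """Naive reference implementation of a moving-window (focal) operation.

    Pads the raster by repeating edge values, then calls `func` on every
    size x size window. A real framework would stream tiles and run this
    in parallel; this loop only illustrates the semantics.
    """
    pad = size // 2
    padded = np.pad(raster.astype(float), pad, mode="edge")
    out = np.empty(raster.shape, dtype=float)
    for i in range(raster.shape[0]):
        for j in range(raster.shape[1]):
            out[i, j] = func(padded[i:i + size, j:j + size])
    return out

# Smooth a tiny demo raster with a 3x3 focal mean.
demo = np.arange(16, dtype=float).reshape(4, 4)
smoothed = apply_moving_window(demo, window_mean)
```

Because the per-window function is pure (it sees only its own neighbourhood), the surrounding framework is free to tile and parallelise the raster however it likes, which is what makes the no-coding-beyond-the-kernel promise workable.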