09:00
15min
Opening ceremony

Yes! The wait is finally over. It's been too long since the last FOSS4GE conference, held in Guimarães, Portugal, six years ago.

The FOSS4GE 2024 LOC team is honoured to welcome you to Tartu for this festival of free and open source for geospatial and the community that surrounds it.

Features welcome addresses by the FOSS4G Europe 2024 LOC, the city of Tartu, and the University of Tartu.

Plenary
Van46 ring
09:15
15min
Ideas about OSGeo, a European OSGeo and our conferences
Ilie Codrina, Jeroen Ticheler

We would like to briefly introduce OSGeo to you and trigger your interest in contributing to our mission. While our geospatial projects thrive, OSGeo also faces challenges. These include community involvement, our relationship with other FOSS foundations, the financial health of the foundation, professionalisation of both OSGeo as a foundation and FOSS4G conference organisation, stricter requirements around information and IT security, and the need for an OSGeo Europe foundation. These are all areas where OSGeo needs your involvement as a community member. We would like to kick-start a round of discussions during this conference and follow up on those topics during Birds of a Feather sessions and online through existing or ad hoc committees.

Plenary
Van46 ring
09:30
30min
Bridging Horizons: From Geoinformation to Meteorology and Beyond - A Journey of Cross-Disciplinary Synergy for Global Challenges
Athina Trakas

In this keynote I want to share my journey from the world of Geoinformation - specifically the FOSS4G and open standards community - to the Meteorology community: the lessons learned, the challenges, and the rich and manifold opportunities available at the intersection of these dynamic fields. I want to share with you my personal perspective on how collective efforts as an Earth Sciences community in fostering interdisciplinary bridges can lead to innovative solutions for our planet’s challenges.
In sharing my story, I intend to highlight the synergies between geoinformation and meteorology, illustrating how these interconnected disciplines can complement and enhance one another and how we as an Earth Sciences community can benefit as a whole.

Keynote
Van46 ring
10:00
30min
Coffee
Van46 ring
10:00
30min
Coffee
GEOCAT (301)
10:00
30min
Coffee
LAStools (327)
10:00
30min
Coffee
QFieldCloud (246)
10:00
30min
Coffee
Omicum
10:30
30min
GeoNetwork - State of the Art
Florent Gravin, Jeroen Ticheler

The GeoNetwork opensource project is a catalog application facilitating the discovery of data, services and applications within any local, regional, national or global "Spatial Data Infrastructure" (SDI). GeoNetwork is an established technology - recognized as an OSGeo Project and a member of the FOSS4G community since the early days.

The GeoNetwork team would love to share what we have been up to and talk about the different projects that have contributed functionality to the software during the last twelve months. Our rich ecosystem of schema plugins continues to improve; with national teams pouring fixes, improvements and new features into the core application.

GeoNetwork is the backend of the European INSPIRE Geoportal and of over 80% of national geospatial catalog endpoints for INSPIRE. We will discuss a number of developments that are foreseen to evolve easy access to geospatial open data and other open data. How do we work with expert communities to make sure GeoNetwork does what it is expected to do?

We will also talk about the UI revamp through the geonetwork-ui framework and the new perspectives it could bring to your catalogs, as well as the progress of our main branch (4.4.x) and the release schedule.

Attend this presentation for the latest from the GeoNetwork community and this vibrant technology platform.

State of software
GEOCAT (301)
10:30
30min
MapLibre Tiles: Introducing The Next Generation Vector Tiles Format
Markus Tremmel

This talk introduces a new vector tiles format called MapLibre Tiles (MLT), which offers a significant tile size reduction and accelerated decoding performance compared to the de facto standard Mapbox Vector Tiles (MVT). MLT also adds support for missing features like nested properties, linear referencing and M-values. The design of MLT is influenced by the latest research results on big data analytics formats and adapted for the map visualization use case.

Our evaluation against MVT on an OpenMapTiles-schema-based tileset shows a reduction in tile size of up to nearly 80%, with even faster decoding times. Moreover, on an already highly optimized Bing Maps tileset, MLT achieves reductions of up to 40% in size. Based on these results, Microsoft generously donated to MapLibre to integrate MLT into the mapping stack.

Additionally, we will explore how next-generation map rendering libraries can leverage SIMD and WebGPU compute shaders for processing to fully utilize the potential of MLT.
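As a flavour of the kind of lightweight column encodings such formats build on, here is a toy delta-plus-varint coder in Python. This is a simplified illustration of one widely used technique family, not MLT's actual encoding:

```python
def zigzag(n):
    # Map signed ints to unsigned so small magnitudes stay small
    return (n << 1) ^ (n >> 63)

def varint(n):
    # Encode an unsigned int as 7-bit groups, high bit = continuation flag
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def delta_varint_encode(coords):
    # Delta-encode a coordinate stream, then varint each delta:
    # nearby vertices produce tiny deltas that fit in one byte
    out = bytearray()
    prev = 0
    for c in coords:
        out += varint(zigzag(c - prev))
        prev = c
    return bytes(out)

# A ring of nearby vertices: plain 4-byte ints vs delta+varint
ring = [4096, 4100, 4107, 4105, 4098, 4096]
plain = 4 * len(ring)                     # 24 bytes as fixed-width ints
packed = len(delta_varint_encode(ring))   # 7 bytes after delta+varint
```

The size win comes entirely from spatial locality: deltas between adjacent vertices are small even when absolute tile coordinates are large.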

State of software
LAStools (327)
10:30
30min
Open source solutions for managing crowdsourced geospatial data
Natalia Räikkönen

Collecting geospatial data through crowdsourcing offers a rapid, cost-effective, and dynamic alternative to traditional methods. Despite facing challenges and limitations, integrating crowdsourced data with other available sources using open-source solutions effectively fills gaps in data sources. Southwest Finland’s regional open data platform, Lounaistieto, integrates crowdsourcing into the collection and management of open regional information. Lounaistieto maintains two key crowdsourced open geospatial datasets: Service Point data and Recreational Data (Virma data).

Knowledge of the locations of various services is vital for regional planning. Yet no single data source in Finland openly provides information about both public and private sector services. To overcome this deficiency, Lounaistieto has created an automated data pipeline that combines service points from two distinct sources into one database. Public sector data is sourced from the Finnish Service Catalogue, encompassing location and attribute information related to administration, rescue services, education, transportation, and well-being services. Private sector data is derived from OpenStreetMap, including information about tourism and cultural services. Daily automatic updates to the service point database through OGC web services ensure that the information remains up to date. The combined data is shared as an open API and visualized on a web map, enabling users to check attribute information and follow links to OSM for additional details.

Another dataset, the Virma recreation data, focuses on recreational and nature tourism routes in Southwest Finland, along with associated public services. Maintained through crowdsourcing using the Virma Maintenance tool, this process gives entrepreneurs, outdoor societies, and other bodies responsible for recreational infrastructure access to digital information about the recreational routes. Building on this data, Lounaistieto has published an open-source Oskari-based mobile web map application, Virma Map (kartta.virma.fi), which offers mobile route planning tools for outdoor activities. The platform emphasizes smartphone usability, dynamically visualizing routes and service points on the map, enriched with details such as route length, service locations and real-time location. The Virma data is also published in the national open data portal, freely usable by other service providers.

Open-source solutions not only offer cost-effective alternatives but also foster collaborative efforts, transparency, and innovation in managing and enhancing geospatial datasets.

Use cases & applications
QFieldCloud (246)
10:30
30min
State of GDAL: what's new in 3.8 and 3.9?
Even Rouault

We will give a status report on the GDAL software, focusing on recent developments and achievements in the 3.8 and 3.9 GDAL versions released during the last year.
The discussed topics will be as various as the scope of GDAL is, covering:
- new vector drivers: JSONFG for the in-development OGC Features and Geometries JSON specification, PMTiles (ProtoMap Tiles) v3 for tiled vector datasets
- new raster drivers for IHO standards: S-102, S-104 and S-111 for bathymetric surface products, water level information for surface navigation and Surface Current products
- a new command line utility: gdal_footprint
- a new raster driver, GDAL Tile Index, to manage mosaics of very large numbers of files
- enhancements in the Arrow interface and GeoParquet driver
- performance improvements in the GeoPackage driver
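As an illustration of the new command line utility, a minimal invocation might look like this (file names are placeholders; requires GDAL >= 3.8):

```shell
# Compute the footprint polygon of a raster and write it as GeoJSON
gdal_footprint input.tif footprint.geojson
```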

State of software
Van46 ring
10:30
30min
Threats related to open geospatial data in the current geopolitical environment
Jussi Nikander, Henrikki Tenkanen

Finland has been a strong proponent for open data for a long time. Since 2010, a significant amount of public sector data has been published openly, and much of this data is geospatial by nature. Accurate geospatial data with nation-wide coverage is highly valuable for many applications, including matters related to national security and military applications. When such information is provided as open data, it can also be used by other countries, including hostile nations. Furthermore, geospatial data can also be used by criminals and other malicious actors, and therefore there have always been possible threats related to open geospatial data.
Traditionally, threats related to open geospatial data have been divided into two categories: threats to privacy and threats to national security. Threats to privacy have typically been handled carefully, as there are numerous datasets that pose obvious threats to privacy, such as accurate census data. Therefore, the public sector has developed mature best practices on how to handle privacy concerns, and there are also international guidelines to assess risks related to open data (Open Data Institute, 2022). For example, census or population registry data should never be published at an individual level, but the data should be aggregated to minimize the privacy risks.
After the Balkan wars of the 90s, the majority of Europe has been in a state of deep peace. Therefore, the potential national security threats related to open geospatial data have been given relatively little attention. Potential threats from other nation states have been sidelined by other concerns, and often dismissed as irrelevant due to increased European integration. This is true even in Finland, which never downsized her army or dismantled the national preparedness organizations. The Russian invasion of Ukraine caused a rapid and radical change in the global geopolitical environment. In Finland this caused a radical shift in discussion about national security.
Here, we report the results of a work, where the security concerns related to open geospatial data in Finland were studied. The main research questions for this work are:
What kinds of threats related to open geospatial data exist?
How can the threats related to open geospatial data be mitigated and managed?
Before our project, open discussions regarding the need for threat assessment in the new geopolitical environment had already started within the Finnish geospatial ecosystem. This gave a useful basis for scoping our research, as well as provided an environment where the findings could be discussed.
In the study, we focused specifically on matters related to national security. Specifically, our focus was on national geospatial datasets maintained by the National Land Survey of Finland (NLS), including e.g. the Finnish topographic database. Even though our focus was on the data produced by NLS, our findings are applicable more generally, as our approach considered potential threats enabled by open geospatial data in general.
As the main research method for the study, we used semi-structured interviews. We interviewed approximately 20 individuals from 13 Finnish organizations. The majority of the interviewees were from public sector organizations. During the last few interviews, few new insights emerged; we therefore concluded that we had reached the saturation point in terms of new information and that no further interviews were needed.
Based on the interviews, we created a number of threat scenarios. The threat scenarios were used as examples of what sorts of threats might be related to open geospatial data. The scenarios were then discussed and further refined with a number of experts on national security and the Finnish geospatial ecosystem.
In our results, we assigned the threats into categories, and gave recommendations for mitigation strategies related to open geospatial data. The results of the work are closely related to earlier threat assessment work done on a national level. Our results include several insights about how open geospatial data could be used to threaten critical infrastructure, important infrastructure, soft targets, as well as the privacy of individuals. Similarly, our results list potential sources of threats including other nation states, terrorist organizations and lone wolf terrorists, criminals, and foreign companies. Both the targets and the threats are well-known already in national security work and are not unique to the geospatial ecosystem.
In most of the threat scenarios discussed, open geospatial data could help malicious actors plan and execute activities that can cause harm. Based on our analysis, the threat related to a specific dataset most often did not directly target the publisher of the dataset, nor affect the dataset itself. For example, detailed building data can be used to plan burglaries, and accurate road network and topographical data can be used to plan an armed invasion. Thus the targets of the malicious activity are elsewhere, and the data is used as a means to gain more information about these targets.
To balance the potential unwanted use scenarios, the benefits of open geospatial data were also discussed throughout our interviews. When considering the threats and mitigation strategies, it is crucial to remember the benefits of open data. The mere possibility of misusing a dataset is not by itself a reason to limit its use; limitations should be considered only if the threats are significant enough compared to the benefits gained from open data.
Our study brings an important new aspect to the narratives around open geospatial data, as there is not much open discussion or research related to the potential threats caused by spatial data, or the relationship between open data and potential threats. Furthermore, our study reveals that there is an urgent need for further developing the guidelines (such as the one by Open Data Institute (2022)) and risk assessment frameworks that would better consider the threats and risks related to opening and sharing geospatial data from the perspective of national security.
References
Open Data Institute. (2022). Assessing risk when sharing data: A guide (p. 21). Open Data Institute. https://www.theodi.org/article/assessing-risk-when-sharing-data-a-guide/

Academic track
Omicum
11:00
30min
A straightforward approach to field data collection with real world examples
Saber Razmjooei

This talk will showcase how Mergin Maps, powered by QGIS, helps you capture data faster and collaborate effectively, using real-world examples. We'll skip the technical jargon and focus on practical solutions for capturing:

  • Animals & plants: Track locations directly from your phone, even with volunteers!
  • Infrastructure: Streamline data collection for pipes, cables, and more, using the same maps as your office team.
  • And more! Ditch pen & paper, spreadsheets, and clunky apps.

Mergin Maps is open-source, free, and used by thousands for over 2 years. Easy-to-use mobile apps on Android & iOS require no training. A powerful server lets you store, version control, and collaborate on your QGIS projects seamlessly.

Join us and discover how Mergin Maps can solve your field data collection challenges!

Use cases & applications
QFieldCloud (246)
11:00
30min
Pan-European open building footprints: analysis and comparison in selected countries
Marco Minghini

Building footprints (hereinafter buildings) represent key geospatial datasets for several applications, including city planning, demographic analyses, modelling energy production and consumption, disaster preparedness and response, and digital twins. Traditionally, buildings are produced by governmental organisations as part of their cartographic databases, with coverage ranging from local to national and licensing conditions being heterogeneous and not always open. This makes it challenging to derive open building datasets with a continental or global scale. Over the last decade, however, the unparalleled developments in the resolution of satellite imagery, artificial intelligence techniques and citizen engagement in geospatial data collection have enabled the birth of several building datasets available at least at a continental scale under open licenses.
In this work, we analyse four such open building datasets. The first is the building dataset extracted from the well-known OpenStreetMap (OSM, https://www.openstreetmap.org) crowdsourcing project, which creates and maintains a database of the whole world released under the Open Database License (ODbL). OSM buildings are typically derived from the digitisation of high-resolution satellite imagery, and in some cases from imports of other databases with ODbL-compatible licenses. The second dataset is EUBUCCO (https://eubucco.com), a pan-European building database produced by a research team at the Technical University Berlin by merging different input sources: governmental datasets when available and open, and OSM otherwise [1]. EUBUCCO is mostly licensed under the ODbL, with the only exceptions being two regions in Italy and the Czech Republic. The third dataset is Microsoft Open Building Footprints (MS, https://github.com/microsoft/GlobalMLBuildingFootprints), extracted through the application of machine learning technology from high-resolution Bing Maps satellite imagery between 2014 and 2023, available at the global scale and also licensed under the ODbL. The fourth dataset, called Digital Building Stock Model (DBSM), was produced by the Joint Research Centre (JRC) of the European Commission to support studies on energy-related purposes. It is an ODbL-licensed pan-European dataset produced from the hierarchical conflation of three input datasets: OSM, MS and the European Settlement Map [2].
The objective of this work is to compare the four datasets – which derive from different approaches following heterogeneous processing steps and governance rules – in terms of their geometry (i.e. attributes are out of scope) in order to draw conclusions on their similarity and differences. It is known from literature that building completeness in OSM (which plays a key role in three out of the four datasets – OSM itself, EUBUCCO and DBSM) varies with the degree of urbanisation [3] and that machine learning applied to satellite imagery (used in MS) may have different performance depending on the urban or rural context [4]. In light of this, we analyse the building datasets according to the degree of urbanisation of their location using the administrative boundaries provided by Eurostat, which classifies each European province as urban, semi-urban or rural (https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units/countries).
We chose five European Union (EU) countries for the analysis: Malta (MT), Greece (EL), Belgium (BE), Denmark (DK) and Sweden (SE). The choice was motivated by the need to: i) select countries of different size and geographical location, which ensures that their national OSM communities are substantially different; ii) select countries having different portions of urban, semi-urban and rural areas; and iii) select two sets of countries for which the input source for EUBUCCO buildings was a governmental dataset (BE, DK) and OSM (MT, EL, SE) to detect possibly different behaviours.
From the methodological point of view, for each country and degree of urbanisation we first calculated and compared the total number and total area of buildings in all datasets and examined their statistics through box plots. This was followed by the calculation, for each pair of datasets and degree of urbanisation, of the building area of intersection and its fraction of the total building area of each of the two datasets. Finally, we intersected all four datasets and calculated the fraction of the area of each dataset that this intersection represents.
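The pairwise intersection metric can be sketched in miniature. The Python toy below uses axis-aligned, non-overlapping rectangles purely to illustrate the "fraction of A's building area also covered by B" computation; the actual analysis used Dask-GeoPandas on real footprint polygons:

```python
def rect_area(r):
    """Area of an axis-aligned rectangle (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = r
    return max(0.0, xmax - xmin) * max(0.0, ymax - ymin)

def rect_intersection(a, b):
    """Intersection rectangle of two rectangles (possibly degenerate)."""
    return (max(a[0], b[0]), max(a[1], b[1]),
            min(a[2], b[2]), min(a[3], b[3]))

def intersection_fraction(dataset_a, dataset_b):
    """Share of dataset_a's total footprint area also covered by dataset_b."""
    total = sum(rect_area(r) for r in dataset_a)
    overlap = sum(rect_area(rect_intersection(a, b))
                  for a in dataset_a for b in dataset_b)
    return overlap / total

# Two toy "datasets": one building matches exactly, one only partially
ds_a = [(0, 0, 2, 2), (3, 0, 5, 2)]
ds_b = [(1, 0, 2, 2), (3, 0, 5, 2)]
fraction = intersection_fraction(ds_a, ds_b)  # 6 of 8 area units overlap
```

Note the metric is asymmetric: swapping the arguments normalises by the other dataset's total area, which is why the paper reports the fraction for each dataset of a pair.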
Results show that in urban areas, while the datasets are overall similar in terms of total area of buildings, the total number of buildings is typically higher in EUBUCCO for DK and BE, where the information comes from governmental datasets. This suggests that such datasets outperform OSM in modelling the footprints of individual buildings in the most urbanised areas. In contrast, in semi-urban and rural areas, where OSM traditionally lacks completeness, MS (and as a consequence DBSM, which is also based on MS) captures more buildings. This is especially evident in SE, where 94% of the country area is not urban. When calculating the intersection between building areas for each pair of datasets in all countries and urban areas, the area of OSM buildings scores the lowest percentages of intersection when compared to the building areas of the other datasets. The lowest such percentages, equal to 25%, are scored when compared to MS in non-urban areas. EUBUCCO represents an obvious exception for the countries (MT, EL and SE) where it uses OSM. Finally, the dataset for which the area of intersection between the buildings of all four datasets represents the largest percentage of the area is OSM, with values even higher than 80% for urban areas. This proves that EUBUCCO and even more so DBSM can be considered a sort of ‘OSM extension’ improving its completeness. By contrast, the lowest values are scored by MS and result from its radically different generation process compared to the other datasets.
The whole procedure was written in Python using libraries such as Pandas, Dask-GeoPandas and Plotly. The code is available under the European Union Public License (EUPL) v1.2 at https://github.com/eurogeoss/building-datasets in the form of Jupyter Notebooks. Work is ongoing to extend the analysis to the whole EU in order to validate the results of this study and formulate recommendations at the continental level.

Academic track
Omicum
11:00
30min
Status of GRASS GIS project
Veronica Andreo

This talk will give a comprehensive overview of the latest developments and progress of the GRASS GIS project for users and developers. The talk will cover topics relevant for integrating GRASS GIS engine into existing workflows. We will dispel some common misconceptions about the project, such as "it's just a command line", "it's just a desktop GIS", “it's a QGIS plugin” and "it's been around for a long time, so it must be well funded".

Many potential users perceive GRASS GIS as difficult to use. During the talk, we'll cover different improvements to the graphical interface that are aimed at addressing this problem. The switch to a mature single-window layout, an easier startup, streamlined data management and the upcoming command history pane are all improvements attempting to increase user-friendliness and make it easier for newcomers to adopt GRASS GIS.

The talk will also go through a series of improvements relevant for industry and academic users to facilitate the integration of GRASS data processing and analytic tools in their workflows using Python or R, either on the command line or in the cloud. Examples of these improvements are the parallelisation of many modules with OpenMP enabling accelerated processing of large data sets and the stricter compiler configurations ensuring code quality in C, C++ and Python.

Finally, the latest community activities and funding opportunities will be presented.

State of software
Van46 ring
11:00
30min
Vector Mosaicking with GeoServer
Andrea Aime

The vector mosaic datastore is a new feature in GeoServer that allows indexing many smaller vector stores (e.g., shapefiles, FlatGeobuf, GeoParquet) and serving them as a single, seamless data source. This has the advantage of cost savings when dealing with very large amounts of data in the cloud, as blob storage bills at a fraction of an equivalent database. It is also faster for specific use cases, e.g., when extracting a single file from a large collection and rendering it fully (e.g. tractor tracks in a precision farming application).

Attend this presentation to learn more about vector mosaic setup, tuning, migration from large relational databases, and real-world experiences.
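Conceptually, the store works like a small spatial index over granule files: queries hit the index first and only touch granules whose bounds intersect the request. The toy Python sketch below illustrates that idea; it is not GeoServer's actual API, and the file names and index layout are invented:

```python
# Toy vector-mosaic index: each entry maps a granule file to its bounding
# box (xmin, ymin, xmax, ymax). A query consults the index and reads only
# the granules whose bounds intersect the requested area.
index = [
    {"granule": "fields_north.fgb", "bbox": (0, 50, 10, 60)},
    {"granule": "fields_south.fgb", "bbox": (0, 40, 10, 50)},
]

def bbox_intersects(a, b):
    # Two boxes intersect unless one lies entirely beside/above the other
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def granules_for(query_bbox):
    """Return the granule files a query needs to open."""
    return [e["granule"] for e in index
            if bbox_intersects(e["bbox"], query_bbox)]
```

This is why the approach shines when a request maps to a single file (the tractor-tracks case): only one small granule is opened instead of scanning one huge table.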

Use cases & applications
LAStools (327)
11:00
30min
pygeoapi mid-year update
Tom Kralidis, Angelos Tzotsos, Just van den Broecke

pygeoapi is an OGC API Reference Implementation. Implemented in Python, pygeoapi supports numerous OGC APIs via a core agnostic API, different web frameworks (Flask, Starlette, Django) and a fully integrated OpenAPI capability. Lightweight, easy to deploy and cloud-ready, pygeoapi's architecture facilitates publishing datasets and processes from multiple sources. The project also provides an extensible plugin framework, enabling developers to implement custom data adapters, filters and processes to meet their specific requirements and workflows. pygeoapi also supports the STAC specification in support of static data publishing.

pygeoapi has a significant install base around the world, with numerous deployments in academia, government and industry, lowering the barrier to publishing geospatial data for all users.

This presentation will provide an update on the current status, latest developments in the project, including new core features and plugins. In addition, the presentation will highlight key projects using pygeoapi for geospatial data discovery, access and visualization.
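As a flavour of how a dataset is published, here is an abridged, hypothetical resource entry from a pygeoapi YAML configuration. Paths and identifiers are placeholders, and the server and metadata sections are omitted:

```yaml
resources:
  lakes:
    type: collection
    title: Lakes
    description: Example lakes layer (placeholder)
    extents:
      spatial:
        bbox: [-180, -90, 180, 90]
        crs: http://www.opengis.net/def/crs/OGC/1.3/CRS84
    providers:
      - type: feature
        name: GeoJSON
        data: /data/lakes.geojson
        id_field: id
```

Swapping the provider block is how the same collection definition can front a file, a database or a remote API, which is the "core agnostic API" idea mentioned above.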

State of software
GEOCAT (301)
11:30
30min
A fun way to do spatial cataloguing and publishing using pygeometa and mdme
Tom Kralidis, Paul van Genuchten

Metadata, YAML files and pipelines? When I try to convince my colleagues that the approach mentioned in this presentation is fun, they look at me alienated.

This presentation will highlight the usage of pygeometa, mdme and DevOps workflow in two projects from different domains of interest.

Land-Soil-Crop data

ISRIC is endorsing the pygeometa MCF format, a YAML-based representation originally developed as a subset of ISO 19115 metadata, advertised by the pygeometa community as 'Metadata Creation for the Rest of Us'. YAML reads much better than XML and is optimal for content versioning in Git. But YAML comes with its peculiarities, such as strict indenting and reserved characters.
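For illustration, an abridged, hypothetical MCF record might look like this. The field values are placeholders and the real schema has many more sections:

```yaml
mcf:
  version: 1.0
metadata:
  identifier: soil-profiles-2024
  language: en
  charset: utf8
identification:
  title: Soil profile observations
  abstract: Point observations of soil profiles (placeholder)
  dates:
    creation: 2024-01-15
contact:
  pointOfContact:
    organization: Example Org
```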

'Average users should not look at code; instead, use shiny (web) interfaces' is an oft-used quote, but we rarely hear its reverse: 'As a DevOps engineer I hate shiny interfaces. I want to look at code, see the history of that code, who changed what and when, and how I can fix it.'
This is where the fun part of pygeometa MCF comes in. CI/CD pipelines, which run on content changes, validate the YAML format and report errors to the submitters.

Should we then fully neglect the basic user? Of course not! So we crafted web based forms that generate mcf (osgeo.github.io/mdme) and have import options for Excel sheets (every column is a metadata field). Consider that many data scientists (fortunately) are used to placing a README.md in any project folder. We just ask them to structure the content using YAML. We added an inheritance mechanism, so common properties (contact details, usage constraints) are inserted only once and inherited by lower levels in the folder hierarchy. And embedded metadata is extracted from data files (bounds, projection, format) or online sources.

All this metadata is crawled to a central search index (pycsw/pygeoapi/geonetwork). To increase the participatory experience we added 'Edit me on GIT' links to each of the records, which brings users back to the original mcf file to suggest changes.

Weather/climate/water metadata

The WMO Information System (WIS2) is the next-generation data exchange infrastructure for real-time and archive weather/climate/water data. Discovery metadata is a key component for cataloguing and discovery. In an event-driven architecture, metadata files are managed on GitHub; on change, they trigger a CI/CD workflow that generates compliant WMO discovery metadata, validates it and publishes it to an MQTT broker.
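Such a pipeline can be sketched as a CI workflow along these lines. This is a hypothetical sketch, not the actual WIS2 setup: the paths, trigger and step commands are assumptions for illustration:

```yaml
name: publish-discovery-metadata
on:
  push:
    paths:
      - "metadata/**.yml"
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate changed MCF records
        run: pygeometa metadata validate metadata/record.yml
      - name: Generate WMO discovery metadata
        run: pygeometa metadata generate metadata/record.yml --schema wmo-wcmp2
```

A final step would then publish the generated record as a notification message to the MQTT broker.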

Use cases & applications
GEOCAT (301)
11:30
30min
An open early-warning system prototype to help manage and study algal blooms on Lake Lugano
Daniele Strigaro

The effects of climate change, together with human activities, are stressing many natural resources. These effects are altering distribution patterns, such as precipitation, and known dynamics in all natural spheres (hydrosphere, biosphere, lithosphere, and atmosphere). The monitoring of environmental parameters is becoming of primary importance to better understand the changes that we need to address. Satellite images, laboratory analysis of samples, and high-end real-time monitoring systems offer solutions to this problem. However, such solutions often require proprietary tools to fully exploit the data and interact with them. The open science paradigm fosters accessibility to data, scientific results, and tools at all levels of society. Hence, in this project, we aimed to apply such an approach to aid in managing a new phenomenon affecting Lake Lugano, primarily caused by the increase in water temperatures and the high load of nutrients from human activities. In fact, over the past years and particularly in 2023, distributed Harmful Algal Blooms (HABs) appeared on the lake, raising awareness of this phenomenon, which can be dangerous for human and animal health. Since HABs are distributed across the lake surface, an open-source, cost-effective solution based on open hardware, software and standards can potentially increase the spatial resolution of monitoring by collecting denser measurements. The excessive algal growth may be composed of cyanobacteria, which can produce a wide range of toxic metabolites, including microcystins (MCs). These cyanotoxins, whose negative effects can occur both acutely at high concentrations and at low doses (Chen et al., 2009; Li et al., 2011), are produced by species common in Lake Lugano. Among these, the most problematic is Microcystis, as it can give rise to blooms during the summer period that accumulate along the shores due to wind and currents. In these areas, the risk of exposure for people and animals is higher, especially in bathing areas.
Considering the potential risks to human and animal health, in this project an open early-warning monitoring system has been designed and built upon previous experiences in lake water monitoring (Strigaro et al., 2022), leveraging the benefits derived from the application of open science principles.
Most monitoring plans use microscopic counts of cyanobacteria as an indicator of toxicity risk. However, these analyses are time-consuming; therefore, in addition to or as an alternative to classical methods, sensors capable of measuring algal pigments are increasingly being used. In particular, phycocyanin (PC), characteristic of cyanobacteria, can be used as an indicator of cyanobacterial biomass, thus estimating the potential exceedance of critical levels of microcystins. Based on previous studies, this project aimed to develop a high-frequency sensor-based early warning system for real-time detection of phycocyanin in surface waters for bathing use. In particular, the study aimed to i) develop a pilot system for real-time phycocyanin surveillance, using a high-frequency fluorimeter positioned below the surface near a bathing beach; ii) develop data management software that automatically notifies when predicted phycocyanin risk thresholds are exceeded; iii) test the system during cyanobacterial blooms, comparing the measured phycocyanin values with microcystin concentrations.
The hardware solution consists of a Raspberry Pi connected to a Trilux fluorimeter by Chelsea Technologies, which allows the measurement of three algal pigments (chlorophyll-a, phycocyanin, and phycoerythrin), along with a module for transmitting data using NB-IoT. On the node, leveraging the concept of edge computing, the istSOS software has been installed. istSOS is an open-source Python implementation of the Sensor Observation Service of the Open Geospatial Consortium, fostering data sharing and interoperability. Raw data are retrieved from the sensor every minute and stored in the local instance of istSOS. Simultaneously, a simple on-the-fly quality control flags each value with a quality index. The data are then aggregated every 10 minutes and transmitted every 15 minutes to the data warehouse. On the server side, another instance of istSOS provides data for reports, post-processing validation, and the early warning system. Additionally, the open-source software Grafana has been explored to set up alerts based on three different thresholds. Each threshold has been defined with reference to a hypothetical bathing water management plan, as follows:
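The on-node flow described above (per-minute readings, an on-the-fly quality flag, 10-minute aggregation) can be sketched as follows. This is a minimal illustration only: the function names, the plausibility bounds, and the windowing logic are assumptions for the sketch, not the actual istSOS implementation.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical plausibility bounds for a raw reading; a real deployment
# would configure its own quality-control rules.
QC_RANGE = (0.0, 200.0)

def quality_flag(value):
    """Return a simple quality index: 1 = plausible, 0 = out of range."""
    lo, hi = QC_RANGE
    return 1 if lo <= value <= hi else 0

def aggregate_10min(samples):
    """samples: (timestamp, value, flag) tuples collected once per minute.
    Average only the values flagged as good, per 10-minute window."""
    windows = {}
    for ts, value, flag in samples:
        if flag != 1:
            continue  # discard readings that failed quality control
        window = ts.replace(second=0, microsecond=0)
        window -= timedelta(minutes=window.minute % 10)
        windows.setdefault(window, []).append(value)
    return {w: round(mean(vals), 2) for w, vals in sorted(windows.items())}
```

The aggregated records would then be queued for the 15-minute transmission step to the server-side istSOS instance.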

1. Monitoring - PC threshold of 3.4 Chl-a eq µg/L, corresponding to a value of 5 μg/L of MCs (with PC greater than Chl-a). This threshold defines abundant phytoplankton growth with dominance of cyanobacteria. Upon exceeding this threshold, frequent monitoring of the situation and identification of the dominant genus is recommended to predict its potential toxicity.

2. Alert - PC threshold of 6.7 Chl-a eq µg/L, corresponding to a value of 10 μg/L of MCs. This threshold defines abundant cyanobacterial growth and the potential onset of a bloom. Upon exceeding this threshold, site inspection, identification of the dominant genus, and cyanotoxin analysis are recommended.

3. Prohibition - PC threshold of 13.4 Chl-a eq µg/L, corresponding to a value of 20 μg/L of MCs. This threshold defines an ongoing cyanobacterial bloom. Upon exceeding this threshold, the toxic risk is at its maximum, as we are approaching the maximum limits imposed for the bathing prohibition. Therefore, temporary bathing prohibition is recommended until confirmation of bloom toxicity with verification of any exceeding of the World Health Organization limit of 25 μg/L of MCs.
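The three thresholds above can be evaluated against incoming phycocyanin readings with a few lines of code. The threshold values are taken from the plan described in the abstract; the function and label names are hypothetical:

```python
# Phycocyanin thresholds in chlorophyll-a equivalent µg/L, highest first,
# with the corresponding microcystin levels from the abstract as comments.
THRESHOLDS = [
    (13.4, "Prohibition"),  # ~20 µg/L of MCs
    (6.7, "Alert"),         # ~10 µg/L of MCs
    (3.4, "Monitoring"),    # ~5 µg/L of MCs
]

def alert_level(pc_value):
    """Map a phycocyanin reading to the highest exceeded alert level."""
    for limit, label in THRESHOLDS:
        if pc_value >= limit:
            return label
    return "No alert"
```

For instance, a reading of 7.1 Chl-a eq µg/L exceeds the second threshold and maps to the "Alert" level.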

The adoption of open hardware, software, and standards allows the implementation of a toolchain that can be easily replicated. The promising results and the openness of the solution will permit further expansion of the network to help decision makers and researchers better manage and study this phenomenon using sensor data. The solution can also effectively increase citizen awareness through kits that local stakeholders can use to monitor the status of the lake water, providing additional data.

Academic track
Omicum
11:30
30min
Gleo Feature Frenzy
ivansanchez

Gleo is a nascent JavaScript WebGL mapping library. It aims to find a niche alongside Leaflet, OpenLayers, MapLibre and Deck.gl.

This library was first presented at FOSS4G 2022. This talk presents the features developed during the last year, with live examples covering clustering, colour spaces, and vector field handling.

State of software
LAStools (327)
11:30
30min
Maintaining topological consistency of simple features with QGIS tools & PostGIS SQL rules
Antero Komi

This talk presents the use of QGIS tools for topological editing of multiple simple features together, and shows an SQL-based approach to checking that the data in a PostGIS database conform with given topological rules between multiple tables.

When using simple features in a PostGIS database, topological relations are not handled by the data model: data is duplicated on shared segments and may therefore contain differences on segments which should be shared and equal between all features on that edge. In QGIS, each modification must also consider the topological vertices and possibly change other features as well.

QGIS has built-in tools to handle some topological editing cases. This talk shows use cases for those and presents additional plugins available for making, for example, topological reshapes of shared segments, as if the segment were an edge in a topological data model.

This talk also discusses SQL-based methods for checking topological consistency. Simple checks shown include built-in PostGIS functions such as intersects or contains. More advanced cases show the use of relate checks to allow certain types of intersections, or distance-based exists checks to require either connected or clearly separate features for topographic data modelling. These kinds of SQL checks also allow maintaining complex topological relation checks based on the attributes of the features; for example, a shoreline-lake relation can be checked fully inside state boundaries, while gaps are allowed on those parts of lakes outside state boundaries.
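As a hedged illustration of such rules, checks of this kind might be written in PostGIS SQL as follows. The table and column names are hypothetical, and the exact predicates depend on the rule being enforced:

```sql
-- Hypothetical tables: lakes(id, geom), shorelines(id, geom).

-- Rule 1: lake polygons must not overlap each other.
SELECT a.id, b.id
FROM lakes a
JOIN lakes b ON a.id < b.id
WHERE ST_Overlaps(a.geom, b.geom);

-- Rule 2: shorelines may touch lakes but must not cross into them.
-- The "interiors do not intersect" part can also be expressed as a
-- DE-9IM pattern: ST_Relate(s.geom, l.geom, 'F********').
SELECT s.id
FROM shorelines s
JOIN lakes l ON ST_Intersects(s.geom, l.geom)
WHERE NOT ST_Touches(s.geom, l.geom);

-- Rule 3: distance-based existence check - report lakes with no
-- shoreline within 1 metre.
SELECT l.id
FROM lakes l
WHERE NOT EXISTS (
  SELECT 1 FROM shorelines s
  WHERE ST_DWithin(l.geom, s.geom, 1.0)
);
```

Queries like these return the offending features, so they can be run periodically or wrapped in constraints as part of a quality-assurance workflow.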

Use cases & applications
QFieldCloud (246)
11:30
30min
QGIS Feature Frenzy
Kurt Menke

QGIS releases three new versions per year and each spring a new long-term release (LTR) is designated. Each version comes with a long list of new features. This rapid development pace can be difficult to keep up with, and many new features go unnoticed. This presentation will give a visual overview of some of the most important new features released over the last calendar year.

In March of 2024 a new Long-term release was published (3.34), and shortly before FOSS4G, the latest stable version of QGIS (3.38) will be released. I will start by comparing the new LTR (3.34) to the previous (3.28). Here I will summarize by category the new features found in the latest LTR (GUI, processing, symbology, data providers etc.). I will then turn my attention to the most important new features found in the latest releases (3.36 & 3.38).

Each highlighted feature will not simply be described, but will be demonstrated with real data. The version number for each feature will also be provided. If you want to learn about the current capabilities of QGIS, this talk is for you!

Potential topics include: GUI enhancements * New expressions * Point cloud support * Data providers * Processing * 3D * Editing

State of software
Van46 ring
12:00
12:00
5min
A standardised approach for serving environmental monitoring data compliant with OGC APIs
Juan Pablo Duque Ordoñez

Environmental monitoring is fundamental for addressing climate change. Environmental data, in particular air quality and meteorological parameters, are widely used for risk assessment, urban planning, and other studies regarding urban and rural environments. Finding open, good-quality environmental data is a complex task, even though environmental and meteorological monitoring are considered among INSPIRE's high-value datasets. For this reason, having robust, open, and standardised services that can offer spatial data is of critical importance.

A good example of open, high-quality environmental and meteorological data is provided by one of the Regional Agencies for Environmental Protection, ARPA Lombardia. This agency maintains the air quality and meteorological monitoring station networks of the region and serves a high volume of sensor observations. The Lombardy region is located in northern Italy and is considered its financial and industrial muscle. Due to its topography, during the colder months of the year the pollution levels of the region increase, in particular the concentrations of particulate matter (PM10 and PM2.5), as portrayed in [1]. For this reason, having a well-established monitoring network is critical. The ARPA Lombardia monitoring network generates huge volumes of data, which are served through its catalogue and a set of services. It is possible to download air quality and meteorological observations, as well as information about the monitoring stations. These data have been extensively used in research, in particular in the study of air quality in the region [2][3].

ARPA Lombardia environmental monitoring data are served through the API (Application Programming Interface) of the Lombardy region, Open Data Lombardia. Although this service is highly functional, thoroughly documented and works correctly, we identified some limitations that could pose problems for researchers, especially in the field of geospatial information. The service has geospatial capabilities, such as the possibility to download data in GeoJSON format; however, it is not compliant with open standards such as WFS, WMS, or the OGC APIs, posing an interoperability problem with other geoportals and catalogues that do follow these standards. Additionally, the column names of the meteorological and air quality observation and station datasets are not homogenised, making them not fully interoperable. Finally, the ARPA Lombardia services and data fields are only available in Italian, which also poses interoperability concerns.

Highlighting the societal, environmental, and economic importance of this kind of information, in this work we present and document the implementation of a web API compliant with OGC API specifications for exposing the air quality and meteorological information from ARPA Lombardia. The data provided by ARPA Lombardia is shared under the licence CC0 1.0 Universal, meaning it is public domain.

The developed API serves environmental monitoring data (both air quality and meteorological) in compliance with a set of OGC APIs. This API is capable of exposing data in different standardised formats, filtering by multiple fields and locations, and performing server-side processing of the observations. OGC APIs are modern standards for geospatial information. Although they are still in the adoption phase, many reference implementations are being developed, and governmental institutions are starting to adopt such standards [4][5]. They differ from widespread, older OGC standards such as WMS or WFS in that they are based on JSON and OpenAPI, while the older standards are based on XML. By implementing new OGC API applications we contribute to the spread of these standards in academic environments and to their overall development.

In particular, we developed a web service compliant with OGC API - Features for exposing the stations’ information and locations as vector data, OGC API - Environmental Data Retrieval (EDR) for serving observations from environmental and meteorological stations, and OGC API - Processes to allow researchers to perform server-side processing to the underlying data, such as data cleansing, interpolations (e.g., conversion to a coverage format, obtaining data at an arbitrary point, etc.), and data aggregation (e.g., by day/month/year, by station). The API also complies with OpenAPI standards, HTTP content negotiation, and homogenised column names in English, to improve the usability of ARPA data by foreign researchers.
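As a sketch of how a client might address such a service, the following builds an OGC API - EDR "position" request URL, i.e. observations at a point for named parameters over a datetime interval. The base URL, collection id, and parameter names are hypothetical, not the actual deployment:

```python
from urllib.parse import urlencode

# Hypothetical base URL; the real deployment will use its own identifiers.
BASE_URL = "https://example.org/ogcapi"

def edr_position_url(collection, lon, lat, parameters, start, end):
    """Build an OGC API - EDR 'position' query URL."""
    query = urlencode({
        "coords": f"POINT({lon} {lat})",         # WKT point, per the EDR spec
        "parameter-name": ",".join(parameters),  # comma-separated list
        "datetime": f"{start}/{end}",            # RFC 3339 interval
        "f": "json",
    })
    return f"{BASE_URL}/collections/{collection}/position?{query}"

# e.g. PM10 and PM2.5 observations near Milan for January 2023:
url = edr_position_url("air_quality", 9.19, 45.46, ["pm10", "pm2p5"],
                       "2023-01-01T00:00:00Z", "2023-01-31T23:59:59Z")
```

The same pattern extends to the other EDR query types (area, trajectory, cube) by swapping the path segment and coordinate payload.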

This work is not intended to replace the ARPA Lombardia API, but to provide an alternative for accessing the data and to extend even further the possibilities open to researchers with additional processing capabilities, as well as to enrich the ecosystem of available OGC API implementations and push these open standards forward in the academic literature. The full paper will provide a description of the system architecture and the particular technologies used to develop the application, a comparison with ARPA Lombardia's current API, and a case study portraying the API's capabilities for research. This work is of interest to the FOSS4G community and European regional agencies as an implementation of a promising open standard for environmental monitoring and sensor networks, namely OGC API - EDR, and as an example of the infrastructure and capabilities that services for environmental monitoring should have.

References:

[1] Maranzano, P. (2022). Air Quality in Lombardy, Italy: An Overview of the Environmental Monitoring System of ARPA Lombardia. Earth, 3(1), 172–203. https://doi.org/10.3390/EARTH3010013

[2] Gianquintieri, L., Oxoli, D., Caiani, E. G., & Brovelli, M. A. (2024). Implementation of a GEOAI model to assess the impact of agricultural land on the spatial distribution of PM2.5 concentration. Chemosphere, 352, 141438. https://doi.org/10.1016/J.CHEMOSPHERE.2024.141438

[3] Cedeno Jimenez, J. R., Pugliese Viloria, A. de J., & Brovelli, M. A. (2023). Estimating Daily NO2 Ground Level Concentrations Using Sentinel-5P and Ground Sensor Meteorological Measurements. ISPRS International Journal of Geo-Information, 12(3). https://doi.org/10.3390/IJGI12030107

[4] MSC GeoMet - GeoMet-OGC-API - Home. (n.d.). Retrieved February 23, 2024, from https://api.weather.gc.ca/

[5] API for downloading geographic objects (API-Features) of the National Geographic Institute. (n.d.). Retrieved February 23, 2024, from https://api-features.ign.es/

Academic track
Omicum
12:00
30min
Building Enterprise GIS with FOSS4G
Pekka Sarkola

Enterprise GIS is often understood to be only a marketing term for big companies and public institutions to purchase more software and services. Enterprise GIS can be built with FOSS4G (Free and Open Source Software for Geospatial). In this talk, I will cover basics of enterprise architecture and real world examples of how to use FOSS4G to build sustainable and affordable solutions for enterprises.

Enterprise GIS is an organisation-wide collection of interoperable GIS software to manage and process geospatial information. Enterprise GIS follows the basic principles of enterprise architecture. Enterprise GIS architecture is based on three layers: User Interface, Application Server and Data Storage. In this talk, I will give best practices to define and design an Enterprise GIS architecture.

This presentation is targeted at ICT and GIS experts who want to better understand how to build an Enterprise GIS with FOSS4G. After the presentation, you will understand how to start the design and implementation of an Enterprise GIS, and you will have basic knowledge of how to use FOSS4G to build one.

Transition to FOSS4G
GEOCAT (301)
12:00
30min
Building a New Rendering Backend for MapLibre Native: Industry Collaboration in FOSS
Bart Louwers

Fostering organic contributions from volunteers is sometimes seen as the only path towards a sustainable FOSS project. This talk will challenge that common wisdom. We will look at some of the engineering and the organization behind a complex year-long project: building a new rendering backend for MapLibre Native. This was delivered by a team of professional graphics engineers in a collaboration with AWS and Meta. What can other FOSS projects learn from this success story?

Community & Foundation
Van46 ring
12:00
30min
Development of a QGIS based topographic data management system
Olli Rantanen

The National Land Survey of Finland (NLS) is rebuilding its topographic data management system using open-source components. The plan is to replace the current production system after the first phase of development in 2025.

The goals of the renewal are:
- Utilization of new technologies and standards
- Advancement in the transition from producing map data to producing spatial data
- Enhancement of the quality and timeliness of data
- Enhancement of the production through automation and better tools

In this talk, I will present the status of the development, elaborate on the main objectives of the first phase and introduce the OS components published so far. In the first two years of development the focus was on concurrent data management by 100 operators and on the integration of the (proprietary) stereo mapping tools. In addition, we have designed and implemented OS quality assurance tools to ensure the logical consistency of the features concerning attributes, geometries, and topology. These tools also include a topological rule set for topographic data management in PostgreSQL.

Recently, we’ve added tools for managing the elevations of features. Going forward, we plan to develop a custom feature search that combines data from multiple databases and layers, and can be adjusted to search across multiple attributes.
Some of our mapping requirements necessitate field mapping, so we’re also working on integrating our system with QField. Our field mapping requirements are quite specific: our operators need to be able to use large quantities of data while being offline in the field.

To aid development, we’ve published plugins for operators, streamlining digitization workflows. Furthermore, we’ve contributed development tools for QGIS plugin developers. While the OS publication of the service and client components for concurrent data management isn’t yet on our roadmap, it remains our final goal.

Use cases & applications
QFieldCloud (246)
12:00
30min
Open Source vs. Open Core for the Protomaps Project
Brandon Liu

Since 2023, the Protomaps project has transitioned from an "Open Core" project to a fully open source one, centered around the PMTiles open data format. I'll go over the success stories of open sourcing, including grant funding and growth in features and contributions from across the OpenStreetMap ecosystem.

I will showcase just a few of the dozens of production applications using Protomaps, and specifically highlight the tradeoffs in deployment methods and cloud vendors, such as static S3-like hosting vs. using a serverless runtime like Lambda or Cloudflare Workers. A new focus for 2024 has been public sector use, and I'll outline some specific challenges related to localization.

State of software
LAStools (327)
12:05
12:05
5min
Modernizing Geospatial Services: An investigation into modern OGC API implementation and comparative analysis with traditional standards in a Web application
Sudipta Chowdhury

The Open Geospatial Consortium (OGC) APIs are a new set of standards, released in response to the existing WxS standards, that are considered a modern technology for data sharing over the internet. This study explores the transition from traditional geospatial service standards to the modern OGC API standards in web applications by implementing them in the field of urban development management. The main goal of this study is to explore the potential for enhancing web applications through a comparative analysis of the integration of modern and traditional geospatial technologies, based on their performance and practical implications.
The research scope encompasses the design and development of a modern web application architecture, involving database design and preparation and automatic integration of data from various formats; and the implementation of geospatial services using both traditional standards and modern OGC API standards, including the creation of a frontend website using OpenLayers. However, the core focus is the comparative analysis of the traditional and modern geospatial service standards, evaluating data compatibility, deployment processes, and performance metrics under different levels of concurrent requests.
The study is structured into two primary segments: an extensive theoretical evaluation of the standards, followed by a hands-on testing phase involving the setup of both traditional and modern services separately while keeping the other components (database and frontend) the same in the architecture. In the database tier, PostGIS was employed; Geoserver and Pygeoapi were used in the server tier for publishing data in both traditional (WxS) and modern (OGC API) standards to the user tier. OpenLayers was used for the frontend to visualize the data for users.
Database design and preparation were accomplished using GeoDjango and PostgreSQL, and automatic data integration was conducted using Python. ALKIS (the Authoritative Real Estate Cadastre Information System of Germany), which includes both spatial and non-spatial information encoded in the NAS format (the standards-based exchange interface defined by the Surveying Authorities of Germany) using Extensible Markup Language (XML), served as the primary data source in this study, with essential details such as street names, house numbers, and land parcel ids. The comparison of the two platforms (Geoserver and Pygeoapi) considered key findings, lessons learned, data format compatibility, and an evaluation of the installation process through literature review. Performance metrics were measured through hands-on testing in terms of rendering time and overall performance of the website at different zoom levels and for different scales of vector features. Testing also included different data source formats such as PostGIS, GeoPackage (gpkg), and Shapefile (shp), with a focus on how performance varied with the change of data source in the front end. Apache JMeter and Google Chrome developer tools such as the Network and Lighthouse panels were used to collect rendering data from the front end. Usability evaluations are currently underway to gain user perspectives on aspects like data retrieval speed, map rendering speed, and ease of use (e.g., panning, zooming, popups) in comparison to the previous system.
In a theoretical comparison Geoserver, a well-established and widely adopted open-source platform with an organized Graphical User Interface (GUI), boasts robust security features with support for various authentication methods and precise access control. With a rich history and a large user community, Geoserver provides extensive documentation and support resources. It supports a diverse array of data stores, including popular databases and file-based formats. On the other hand, Pygeoapi, a newer but increasingly popular project, emphasizes simplicity and ease of use. Offering modern technologies like the OpenAPI standard for a RESTful API, Pygeoapi supports various data stores, including PostgreSQL/PostGIS and Elasticsearch. Installation is straightforward, leveraging Python and its dependencies. While Geoserver stands out for its comprehensive feature set, including support for OGC standards and numerous plugins, Pygeoapi focuses on being lightweight and customizable according to OGC API standards.
Based on the extensive hands-on testing, the analysis reveals persistent trends in rendering times across different scenarios. Pygeoapi consistently demonstrates higher rendering times compared to both Geoserver (WFS) and Geoserver (WMS). The fluctuation in rendering times remains relatively uniform as the zoom level increases from 14 to 18. However, as the number of features escalates from 4891 to 23319, both Pygeoapi (1.55s to 7.56s) and Geoserver WFS (454ms to 2.19s) exhibit a proportional increase in rendering time. Remarkably, Geoserver (WMS) showcases notable stability in rendering times across various zoom levels and feature counts, attributed to its tile-based approach. The observed linear correlation between feature count and rendering time suggests a scalability factor affecting both Pygeoapi and Geoserver. Consequently, users may need to consider factors beyond rendering times, such as ease of use, scalability, and available features, when making a choice between Pygeoapi and Geoserver for their specific spatial data needs. Moreover, concerning different data formats, it becomes apparent that PostGIS consistently outperforms SHP, JSON, WFS, and GPKG in Pygeoapi. In Geoserver, SHP and GPKG exhibit superior performance compared to other formats. These findings underscore the importance of considering the nuances of data formats when optimizing the performance of spatial data services. To overcome the issue of prolonged rendering times in Pygeoapi, especially when managing substantial amounts of GeoJSON data, a viable solution lies in incorporating vector tiles. The adoption of vector tiles led to a substantial reduction in rendering times (from 5.6s to 898ms) by transmitting pre-styled and pre-rendered map data. This approach enhances efficiency in visualizing data on the client side, demonstrating a significant improvement in performance.
In conclusion, this research endeavours to provide actionable insights into the effective integration of geospatial technologies, with the goal of narrowing the divide between well-established standards and emerging APIs within the dynamic realm of web applications.

Academic track
Omicum
12:10
12:10
5min
The template for a Semantic SensorThings API with the GloSIS use case
Luís M. de Sousa

Motivation

Spatial Data Infrastructures (SDI) developed for the exchange of environmental
data have heretofore been greatly shaped by the standards issued by the Open
Geospatial Consortium (OGC). Based on the Simple Object Access Protocol (SOAP),
services like WMS, WFS, WCS and CSW became digital staples for researchers and
administrative bodies alike.

In 2017 the Spatial Data on the Web Working Group (SDWWG) questioned the overall
approach of the OGC, based on the ageing SOAP technology
[@SDWWG2017]. The main issues identified by the SDWWG can be summarised as:

  • Spatial resources are not identified with URIs.
  • Modern API frameworks, e.g. OpenAPI, are not being used.
  • Spatial data are still shared in silos, without links to other resources.
  • Content indexing by search engines is not facilitated.
  • Catalogue services only provide access to metadata, not the data.
  • Data difficult to understand by non-domain-experts.

To address these issues the SDWWG proposed a five point strategy inspired by
the Five Star Scheme [@BernersLee2006]:

  • Linkable: use stable and discoverable global identifiers.
  • Parseable: use standardised data meta-models such as CSV, XML, RDF, or JSON.
  • Understandable: use well-known, well-documented, vocabularies/schemas.
  • Linked: link to other resources whenever possible.
  • Usable: label data resources with a licence.

The work of the SDWWG triggered a transformational shift at the OGC towards
specifications based on OpenAPI. But while convenience of use has been the
focus, semantics has been largely unheeded. A Linked Data agenda has not
been pursued.

However, OpenAPI opens the door to an informal coupling of OGC services with
the Semantic Web, considering the possibility of adopting JSON-LD as the
syntax for OGC API responses. The introduction of a semantic layer to digital
environmental data shared through state-of-the-art OGC APIs is becoming a
reality, with great benefits to researchers using or sharing data.
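As a minimal illustration of the idea (identifiers, vocabulary mappings and property names are hypothetical), a JSON-LD context attached to an OGC API feature might look like:

```json
{
  "@context": {
    "sosa": "http://www.w3.org/ns/sosa/",
    "name": "http://www.w3.org/2000/01/rdf-schema#label"
  },
  "@id": "https://example.org/collections/stations/items/st-001",
  "@type": "sosa:Platform",
  "type": "Feature",
  "geometry": { "type": "Point", "coordinates": [26.72, 58.38] },
  "properties": { "name": "Monitoring station st-001" }
}
```

The response remains ordinary GeoJSON for conventional clients, while Linked Data tooling can interpret the `@context`, `@id` and `@type` keywords and treat each feature as an RDF resource.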

This communication lays down a simple SDI set up to serve semantic environmental
data through a SensorThings API created with the grlc software. A use case is
presented with soil data services compliant with the GloSIS web ontology.

SensorThings API

SensorThings API is an OGC standard specifying a unified framework to
interconnect Internet of Things resources over the Web [@liang2016ogc].
SensorThings API aims to address both semantic and syntactic
interoperability. It follows REST principles [@fielding2002principled],
promotes data encoding with JSON, and adopts the OASIS OData protocol
[@chappell2011introducing] and its URL conventions.

The SensorThings API is underpinned by a domain model aligned with the ISO/OGC
standard Observations & Measurements (O&M) [@Cox2011], targeted at the
interchange of observation data of natural phenomena. O&M puts forth the
concept of an Observation as an action performed on a Feature of Interest
with the goal of measuring a certain Property through a specific Procedure.
SensorThings API mirrors these concepts with Observation, Thing,
ObservedProperty and Sensor. This character makes SensorThings API a vehicle
for the interoperability of heterogeneous sources of environmental data.
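The OData-style URL conventions mentioned above can be sketched in a few lines. The endpoint is hypothetical, and the query options are left unencoded for readability; a real client should percent-encode them:

```python
# Hypothetical SensorThings endpoint; $filter and $expand follow the
# OData URL conventions used by the SensorThings API.
STA_BASE = "https://example.org/sta/v1.1"

def sta_query(entity, odata):
    """Compose a SensorThings request path from OData query options."""
    opts = "&".join(f"${key}={value}" for key, value in odata.items())
    return f"{STA_BASE}/{entity}?{opts}" if opts else f"{STA_BASE}/{entity}"

# e.g. Things measuring a given ObservedProperty, with their Datastreams:
url = sta_query("Things", {
    "filter": "Datastreams/ObservedProperty/name eq 'air_temperature'",
    "expand": "Datastreams",
})
```

Any entity set from the domain model (Things, Sensors, Observations, ...) can be addressed the same way, which is what makes the interface uniform across heterogeneous data sources.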

grlc

grlc (pronounced "garlic") is a lightweight server that translates SPARQL
queries into Linked Data web APIs [@merono2016grlc] compliant with the OpenAPI
specification. Its purpose is to enable universal access to Linked
Data sources through modern web-based mechanisms, dispensing the use of the
SPARQL query language. While losing the flexibility and federative capacities
of SPARQL, web APIs present developers with an approachable interface that can
be used for the automatic generation of source code.

A grlc API is constructed from a SPARQL query to which a meta-data section is
prepended. This section is declared with a simplified YAML syntax, within a
SPARQL comment block, so the query remains valid SPARQL. The meta-data provide
basic information for the API set up and most importantly, the SPARQL end-point
on which to apply the query. The listing below shows an example.

#+ endpoint: http://dbpedia.org/sparql

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?band_label { 
    ?band rdf:type dbo:Band ;
          dbo:genre dbr:Hard_Rock ;
          rdfs:label ?band_label .
} ORDER BY ?band_label

A special SPARQL variable formulation is used to map into API parameters. By
adding an underscore (_) between the question mark and the variable name,
glrc is instructed to create a new API parameter. A prefix separated again
with an underscore informs glrc of the parameter type. The ?band_label
variable can be expanded to ?_band_label_iri to create a
new API parameter of the type IRI.
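Applying the expansion described above to the example query, the parameterised variant would look like the sketch below; grlc substitutes the value supplied by the API caller for the decorated variable:

```sparql
#+ endpoint: http://dbpedia.org/sparql

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?_band_label_iri {
    ?band rdf:type dbo:Band ;
          dbo:genre dbr:Hard_Rock ;
          rdfs:label ?_band_label_iri .
} ORDER BY ?_band_label_iri
```

The decorated query remains valid SPARQL, so the same file serves both as API definition and as a directly runnable query.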

Use case: GloSIS

The Global Soil Partnership (GSP) is a network of stakeholders in the soil
domain established by members of the United Nations Food and Agriculture
Organisation (FAO). Its broad goals are to raise awareness to the importance of
soils and to promote good practices in land management towards a sustainable
agriculture.

Acknowledging difficulties in exchanging harmonised soil data as an important
obstacle to its goals, the GSP launched in 2019 an international consultancy to
assess the state-of-the-art and propose a path towards a Global Soil Information
System (GloSIS) based on a unified exchange. A domain model resulted, based
on the ISO 28258 standard for soil quality [@SchleidtReznik2020], augmented with
code-lists compiled from the FAO Guidelines for Soil Description [@Jahn2006].
This domain model was then transformed to a Web Ontology, relying on the Sensor,
Observation, Sample, and Actuator ontology (SOSA) [@Janowicz2019], and other
Semantic Web standards such as GeoSPARQL, QUTD and SKOS. The GloSIS web ontology
has been successfully demonstrated as a vehicle to exchange soil information as
Linked Data [@GloSIS].

A prototype API for the GloSIS ontology, formulated in compliance with the
SensorThings API specification, will be presented in this communication. It
demonstrates how the same set of SPARQL queries can be used to query through a
ReST API any end-point available over the internet, sharing linked soil data in
accordance with the GloSIS ontology. Thus providing a clear step towards the
federated and harmonised system envisioned by the GSP.

Academic track
Omicum
12:15
12:15
5min
Can the Detection of Roman Roads be Automated with the Application of Open Source GeoAI?
Ian Turton

Introduction

In the past decade there has been increased reporting of the use of LiDAR data in the discovery of archaeological features in the landscape, mainly driven by the increased availability of this type of data as a by-product of flood prevention or forest surveys. Examples from Britain include the discovery of a Roman marching camp and road fragment in North Wales using LiDAR data provided by the Environment Agency, and the combination of LiDAR data with other sources to show the route of a Roman road in Southern England. Recent work has detected some previously undiscovered Roman roads in the SW of England using a combination of least-cost paths between known Roman occupation locations, verifying these paths by examining the recently released Environment Agency LiDAR data.
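The least-cost-path step mentioned above can be illustrated with a minimal Dijkstra sketch over a toy cost grid; a real analysis would derive per-cell costs from slope and terrain rasters rather than the hard-coded values used here:

```python
import heapq

def least_cost_path(grid, start, goal):
    """Dijkstra over a 2D cost grid: entering cell (r, c) costs grid[r][c].
    Returns the total cost of the cheapest 4-connected path, or None."""
    rows, cols = len(grid), len(grid[0])
    best = {start: grid[start[0]][start[1]]}
    queue = [(best[start], start)]
    while queue:
        cost, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return cost
        if cost > best.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + grid[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(queue, (ncost, (nr, nc)))
    return None
```

Run between two known occupation sites, the cheapest route avoids high-cost cells (steep slopes, rivers), and that route is then the candidate corridor to inspect in the LiDAR data.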

However, in all of these cases the interpretation and discovery were carried out by a trained archaeologist looking at the LiDAR data set (in a variety of representations), often combining it with other information, including the known locations of other Roman artefacts. There is now 1m resolution LiDAR data available for the whole of Great Britain, and in many places 50cm resolution data is available freely to researchers (and the general public). There is clearly more data than any one researcher (or even a research group) can investigate closely. This is a data-rich problem that is ripe for the introduction of artificial intelligence (AI) or machine learning (ML) technology to winnow the grain from the chaff. This paper presents an attempt to move beyond applying AI towards the development of a system to detect Roman roads in large-area LiDAR data by integrating other relevant data sources.

Area of Study

This paper will concentrate on the area between Hadrian's Wall (Carlisle to Newcastle upon Tyne) and the Antonine Wall (Glasgow to Bo'ness). These two walls marked the northern border of the Roman empire in 122 CE and 142 CE respectively, and there are signs that the Roman engineers planned to proceed further north with a road extending towards Perth, which is thought to have been the planned basis of a road network in the Highlands of Scotland. There were two main roads, both constructed before either of the walls was built: a western road from Carlisle across the Borders to Edinburgh, and an eastern one (Dere Street) from Corbridge to Dalkeith on the outskirts of Edinburgh. There were certainly other roads that joined these two main routes to areas of occupation; however, the actual route of many of these roads remains unknown.

Methodology

The next step is to look for fairly straight linear features, such as modern roads, footpaths, hedgerows, and parish boundaries, that mark where the course of a Roman road has been frozen into the landscape. In some places the agger, or crown of the road, can still be seen, sometimes paralleled by ditches. Finally, it is sometimes possible to "see" the position of the deep ditches that once paralleled the course of the road in more verdant or abundant crops; this can be spotted in aerial photography or by an observer on the ground. Once a promising area has been identified by the methods above, the archaeologist will typically visit it in person and study the terrain, ideally at sunrise or sunset when small changes in topography cast long shadows. The final confirmation of a Roman road is probing the soil to detect the hard metalled surface below the topsoil, or digging a trench to find it.

In fact, it is now possible to make use of all of the above "rules" without leaving the comfort of one's own office. The Second Edition County Series six-inch-to-one-mile maps covering the whole of Great Britain, published by the Ordnance Survey between 1888 and 1914, can be downloaded from the National Library of Scotland. The Scottish Government allows the download and use of its LiDAR surveys at resolutions of 1m and 50cm for much of the study area. This allows us to apply artificial neural network deep learning methods using the TorchGeo package, which has recently become an OSGeo project. This technology has been widely used in the medical field to detect cell abnormalities, but its use in the geographic and archaeological fields is limited, with existing examples all looking for relatively small objects such as ring forts and Mayan buildings. One of the key problems of this sort of machine learning work is the requirement for a well-labelled training data set. In this case it is planned to use scheduled monuments, whose boundaries can be downloaded as GIS-ready data from the UK Government Open Data site. Once the Roman roads are extracted from this data, the polygons can be used to extract all the available data from the previously discussed data sets and to train the deep learning model, to see if useful features can be detected in the remainder of the study area. Various combinations and pre-processed versions of LiDAR and aerial imagery will be compared to discover which has the best detection rates.
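The least-cost-path approach mentioned in the introduction can be illustrated with a small, self-contained Dijkstra sketch over a synthetic cost grid (pure Python; the grid values, costs and endpoints are invented for illustration and are not part of the authors' pipeline):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost grid (4-connected); returns the list of cells."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    queue = [(dist[start], start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the route
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Synthetic "terrain": flat ground (cost 1) with a steep ridge (cost 50)
terrain = [
    [1, 1, 1, 1, 1],
    [1, 50, 50, 50, 1],
    [1, 50, 1, 50, 1],
    [1, 1, 1, 50, 1],
    [1, 50, 50, 50, 1],
]
route = least_cost_path(terrain, (0, 0), (4, 4))
print(route)
```

In practice the cost surface would be derived from slope and other terrain attributes rather than invented values, and candidate routes would then be checked against the LiDAR data as described above.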

Academic track
Omicum
12:20
12:20
5min
Beautiful Thematic Maps in Leaflet with Automatic Data Classification
Dániel Balla

Although the web provides many features and a high degree of customizability for creating web maps, web-based thematic maps still require expertise to visualize geospatial data in a way that highlights spatial differences exactly and cartographically comprehensively. While most thematic maps show data with seven or fewer classes, as determined by (Linfang and Liqiu, 2014), the maker of a thematic map must choose a class count and classify quantitative data to properly convey their message through the map. Data classification methods all have advantages and disadvantages for specific spatial data types, therefore choosing the optimal method is of great importance to minimize information loss (Osaragi, 2002). Choosing an optimal class count greatly helps the map user to quickly comprehend thematic data and discover relevant spatial differences. With a plethora of visual variables, summarized by (Roth, 2017), there are many ways to distinguish classes of features in geovisualization. For styling features, mapping libraries natively provide tools for only a few visual variables. A thematic map requires a specific symbology tailored to the given data, which distinguishes classes by altering one or more of these visual variables for their symbols. While its symbology needs to be legible and visually separated from the background map, it also needs to be created in a way that does not overload the map visually.

The popular open source web mapping framework Leaflet lacks a straightforward approach to creating thematic maps that follow all the basic principles they should adhere to (data classification, automatic symbology and legend generation). In the paper, the features and shortcomings of Leaflet in the context of thematic mapping are examined in detail. First, Leaflet lacks any kind of native data classification process that would be needed to create discrete classes of data for thematic maps. Therefore, using GIS software beforehand to classify and style the dataset properly (to get class boundaries and exact colours) is inevitable. Second, for symbology, although Leaflet makes use of modern web standards like HTML5 and CSS3 to style vector map features (Agafonkin, 2023), it still lacks styling solutions that are common in traditional thematic cartography (e.g., hatch fill patterns), as discussed in (Gede, 2022). As a thematic map requires some kind of explanation of the visualized data, providing a descriptive, well-formed legend with exact symbols for all data classes is also non-trivial. Although various tutorials and workarounds are available, they only address some of these principles. The examples provided by the official website of Leaflet are hard-coded and static, meaning they must be recreated for each specific thematic map, which makes them unsuitable for a dynamic data visualization. Moreover, these workarounds are complex to accomplish, especially for those not familiar enough with programming to code visually pleasing thematic maps on websites.
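The missing classification step boils down to computing class break values from the attribute data. A minimal sketch of two common methods, equal interval and quantile (shown in Python purely for illustration; an actual Leaflet solution would be JavaScript, and the break values here are toy data):

```python
def equal_interval_breaks(values, n_classes):
    """Upper class boundaries that split the value range into equal spans."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / n_classes
    return [lo + step * i for i in range(1, n_classes)]

def quantile_breaks(values, n_classes):
    """Upper class boundaries that put roughly equal feature counts per class."""
    s = sorted(values)
    return [s[int(len(s) * i / n_classes)] for i in range(1, n_classes)]

def classify(value, breaks):
    """Return the 0-based class index for a value given upper break points."""
    for i, b in enumerate(breaks):
        if value < b:
            return i
    return len(breaks)

# Toy attribute values (e.g. population density per district)
population_density = [3, 8, 12, 15, 40, 55, 90, 140, 300, 1200]
breaks = quantile_breaks(population_density, 4)
print(breaks)
print([classify(v, breaks) for v in population_density])
```

Natural breaks (Jenks) and standard-deviation classification follow the same pattern: a list of break values that each feature's attribute is tested against before a symbol is assigned.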

As a solution, this paper introduces a highly customizable, open source plugin for Leaflet, developed by the author, which extends Leaflet’s GeoJSON class and combines all processes required for creating a thematic map in a single step. This easy-to-use extension wraps the individual processes of quantitative data classification, symbology and creation of an appealing legend. The extension puts an emphasis on providing numerous options for a highly customizable visualization. It supports well-known data classification methods; custom and Brewer colour ramps as defined by (Brewer et al., 2003); symbol colour-, size- and hatch fill pattern-based distinctions; HTML legend row templating; legend class ordering; data manipulation options; and many other features. For maps with graduated symbol sizes, it generates sizes between user-adjustable minimum and maximum values. For point features, the symbol shape can also be changed to predefined SVG shape symbols. Data manipulation options include normalization by a secondary attribute field in the dataset, rounding generated class boundary values, and visually modifying them by applying division or multiplication (to easily change the unit of the displayed value). In case the input GeoJSON dataset has features without data for the set attribute field (null/nodata), these features can optionally form a separate class with a neutrally styled symbol; alternatively, the map maker can choose to ignore them, so they do not show up on the map as a distinguished class. As it is an extension of a native L.geoJSON layer, multiple instances of L.dataClassification layers can still be used within a single Leaflet map object.
This allows for more complex thematic maps with multiple layers, or different kinds of data with different symbology types at the same time (e.g., a choropleth background map combined with graduated-size point symbols as a second layer in the foreground). Since a legend is automatically generated and displayed for each instance, it is linked to the respective data layer and reflects all methods called on that layer (e.g., if the map maker uses remove() to programmatically remove the layer, the related legend also reflects the change). Even though the legend is created with a clear and concise style by default, legend styling can easily be customized with the provided options and CSS definitions.

As one of the goals, the plugin facilitates the easy creation of clean thematic maps using exclusively open source software and libraries, with the hope of increasing the availability, accessibility and popularity of such thematic mapping on the web. The extension is still under development, and is available on GitHub (with examples), at https://github.com/balladaniel/leaflet-dataclassification.

Academic track
Omicum
12:30
12:30
90min
Lunch
Van46 ring
12:30
90min
Lunch
GEOCAT (301)
12:30
90min
Lunch
LAStools (327)
12:30
90min
Lunch
QFieldCloud (246)
12:30
90min
Lunch
Omicum
14:00
14:00
30min
QGIS for Hydrological Applications
Hans van der Kwast

Hydrological analysis is a common task in environmental and geospatial applications. However, many users of QGIS encounter challenges when they want to perform stream and catchment delineation or morphometric analysis of streams and catchments using various processing provider plugins. These plugins, such as PCRaster, SAGA, GRASS and WhiteboxTools, offer different algorithms and methods for hydrological analysis, but they also require different installation procedures and have different limitations and assumptions. In this presentation, we will review the main features and drawbacks of these plugins, and provide practical tips and examples on how to use them effectively in QGIS. We will also compare the results of different algorithms and discuss the implications for hydrological analysis workflows. By the end of this presentation, you will have a better understanding of the available tools and techniques for stream and catchment delineation in QGIS, and how to choose the most suitable ones for your projects.
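All of the providers mentioned start stream and catchment delineation from a flow-direction grid computed on a DEM. A minimal sketch of the common D8 scheme, where each cell drains to its steepest-descent neighbour (pure Python, tiny synthetic DEM; real tools add pit filling and much more robust handling):

```python
def d8_flow_direction(dem):
    """D8 flow direction: each cell points at its steepest-descent neighbour.

    Returns a dict mapping (row, col) -> downstream (row, col), or None for
    pits/outlets (cells with no lower neighbour).
    """
    rows, cols = len(dem), len(dem[0])
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    flow = {}
    for r in range(rows):
        for c in range(cols):
            best_slope, target = 0.0, None
            for dr, dc in neighbours:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dist = (dr * dr + dc * dc) ** 0.5  # diagonal = sqrt(2)
                    slope = (dem[r][c] - dem[nr][nc]) / dist
                    if slope > best_slope:
                        best_slope, target = slope, (nr, nc)
            flow[(r, c)] = target
    return flow

# Tiny synthetic DEM sloping towards the bottom-right corner
dem = [
    [9, 8, 7],
    [8, 6, 5],
    [7, 5, 1],
]
flow = d8_flow_direction(dem)
print(flow[(0, 0)], flow[(1, 1)], flow[(2, 2)])
```

Flow accumulation, stream extraction and catchment delineation are all built on top of such a grid, which is why pit filling and algorithm choice (D8, D-infinity, MFD) matter so much in the plugins compared in this talk.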

Use cases & applications
GEOCAT (301)
14:00
30min
State of GeoServer
Andrea Aime

GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping, as well as for processing data, either in batch or on the fly.
GeoServer powers a number of open source projects like GeoNode and geOrchestra and is widely used throughout the world by organizations to manage, disseminate and analyze data at scale.

This presentation provides an update on our community as well as reviews of the new and noteworthy features for the latest releases. In particular, we will showcase new features landed in 2.24 and 2.25, as well as a preview of what we have in store for 2.26 (to be released in September 2024).

Attend this talk for a cheerful update on what is happening with this popular OSGeo project, whether you are an expert user, a developer, or simply curious about what GeoServer can do for you.

State of software
Van46 ring
14:00
30min
Turning an Old Ship Towards FOSS4G – Why, How and Where To.
Püü Polma

While Regio is a company that has consistently employed and appreciated free and open-source solutions, the primary engines powering the business have largely originated from commercial sources. A few years ago, Regio made the strategic decision to navigate away from commercial software as far as possible. This transition has presented significant challenges, as the ship is aged and entangled with numerous dependencies. Despite encountering setbacks, Regio remains resolute in its commitment. In addition to the advantages already gained, the promise of open waters ahead fuels our determination to continue this transformative journey.

Transition to FOSS4G
QFieldCloud (246)
14:00
30min
Vector Tiles Cartography: Elevate your maps with JSON tricks
Nicolas Bozon, Petra Duriancikova

Unlock the full potential of Vector Tile Cartography with the power of JSON. While working with JSON might be confusing at first, the goal of this talk is to help you understand the basic syntax and how you can leverage it to make beautiful maps.

We will learn the basics of a style.json file, the crucial document that defines the map as per the MapLibre GL JS style spec. Using JSON syntax snippets, we will navigate through specific MapTiler cartography designs that you can reuse every time you make a map.
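For orientation, a minimal style.json skeleton following the MapLibre GL style spec might look like the fragment below (the source URL and layer names are hypothetical placeholders, not actual MapTiler styles):

```json
{
  "version": 8,
  "sources": {
    "openmaptiles": {
      "type": "vector",
      "url": "https://example.com/tiles/tiles.json"
    }
  },
  "layers": [
    {
      "id": "water",
      "type": "fill",
      "source": "openmaptiles",
      "source-layer": "water",
      "paint": { "fill-color": "#a0c8f0" }
    },
    {
      "id": "roads",
      "type": "line",
      "source": "openmaptiles",
      "source-layer": "transportation",
      "paint": {
        "line-color": "#ffffff",
        "line-width": ["interpolate", ["linear"], ["zoom"], 5, 0.5, 14, 4]
      }
    }
  ]
}
```

Each entry in `layers` references a `source-layer` inside the vector tiles and styles it with `paint` properties; expressions such as the `interpolate` above are where most of the cartographic fine-tuning happens.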

Use cases & applications
LAStools (327)
14:00
30min
XDGGS: A community-developed Xarray package to support planetary DGGS data cube computations
Alexander Kmoch

1. Introduction

Traditional maps use projections to represent geospatial data in a 2-dimensional plane. This is both very convenient and computationally efficient. However, it also introduces distortions in area and angles, especially for global data sets (de Sousa et al., 2019). Several global grid system approaches like Equi7Grid or UTM aim to reduce the distortions by dividing the Earth's surface into many zones and using an optimized projection for each zone. However, this introduces analysis discontinuities at zone boundaries and makes it difficult to combine data sets of varying overlapping extents (Bauer-Marschallinger et al., 2014).

Discrete Global Grid Systems (DGGS) provide a new approach by introducing a hierarchy of global grids that tessellate the Earth’s surface evenly into equal-area grid cells at different spatial resolutions, and by providing a unique indexing system (Sahr et al., 2004). DGGS are now defined in the joint ISO and OGC DGGS Abstract Specification Topic 21 (ISO 19170-1:2021). DGGS serve as spatial reference systems facilitating data cube construction, enabling integration and aggregation of multi-resolution data sources. Various tessellation schemes such as hexagons and triangles cater to different needs: equal area, optimal neighborhoods, congruent parent-child relationships, ease of use, or vector field representation in modeling flows.

Purss et al. (2019) explained the idea of combining DGGS and data cubes and underlined the compatibility of the two concepts. Thus, DGGS are a promising way to harmonize, store, and analyse spatial data on a planetary scale. DGGS are commonly used with tabular data, where the cell id is a column. Many datasets have other dimensions, such as time, vertical level, or ensemble member. For these, the vision was to use Xarray (Hoyer and Hamman 2017), one of the core packages in the Pangeo ecosystem, as a container for DGGS data.

At the joint OSGeo and Pangeo code sprint at the ESA BiDS’23 conference (6-9 November 2023, Vienna), members from both communities came together and envisioned implementing support for DGGS in the popular Xarray Python package, which is at the core of many geospatial big data processing workflows. The result of the code sprint is a prototype Xarray extension, named xdggs (https://github.com/xarray-contrib/xdggs), which we describe in this article.

2. Design and methodology

There are several open-source libraries that make it possible to work with DGGS: Uber H3, HEALPix, rHEALPix, DGGRID, Google S2, and OpenEAGGR; many if not most have Python bindings (Kmoch et al. 2022). However, they often come with their own, far from easy-to-use APIs, different assumptions, and different functionalities. This makes it difficult for users to explore the wider possibilities that DGGS can offer.
The aim of xdggs is to provide a unified, high-level, and user-friendly API that simplifies working with various DGGS types and their respective backend libraries, seamlessly integrating with Xarray and the Pangeo open-source geospatial computing ecosystem. Executable notebooks demonstrating the use of the xdggs package are also being developed to showcase its capabilities. The xdggs community contributors set out with a set of guidelines and common DGGS features that xdggs should provide or facilitate, to make DGGS semantics and operations usable via the familiar Xarray API for working with labelled arrays.
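To make the cell-indexing idea concrete, here is a toy sketch in pure Python: points are assigned hierarchical cell ids and then aggregated per cell. The grid here is a simple lat/lon binning and deliberately not equal-area; real DGGS libraries such as H3 or HEALPix provide proper equal-area cells, and xdggs exposes them through Xarray rather than plain dictionaries. All point values below are invented.

```python
from collections import defaultdict

def cell_id(lat, lon, resolution):
    """Toy hierarchical grid index: higher resolution = smaller cells.

    Illustrative only; real DGGS cells are equal-area and come with
    well-defined parent-child relationships.
    """
    size = 180.0 / (2 ** resolution)          # cell edge in degrees
    row = int((lat + 90) // size)
    col = int((lon + 180) // size)
    return f"r{resolution}-{row}-{col}"

# Index scattered observations by cell id, then aggregate per cell
observations = [(58.38, 26.72, 12.1), (58.37, 26.73, 11.9), (59.44, 24.75, 9.5)]
per_cell = defaultdict(list)
for lat, lon, value in observations:
    per_cell[cell_id(lat, lon, resolution=7)].append(value)

means = {cid: sum(v) / len(v) for cid, v in per_cell.items()}
print(means)
```

Once every observation carries a cell id, joining or aggregating data sources reduces to operations on those ids, which is the property that makes DGGS attractive as a data cube index.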

3. Results

This development represents a significant step forward. With xdggs, DGGS become more accessible and actionable for data users. As with traditional cartographic projections, a user does not need to be an expert on the peculiarities of various grids and libraries to work with DGGS, and can continue working in the well-known Xarray workflow. One of the aims of xdggs is to make DGGS data access and conversion user-friendly, while dealing with the coordinates, tessellations, and projections under the hood.

DGGS-indexed data can be stored in an appropriate format like Zarr or (Geo)Parquet, with accompanying metadata to indicate which DGGS (and, potentially, which specific configuration) is needed to address the grid cell indices correctly. An interactive tutorial on Pangeo-Forge is also being developed as an open-access resource to demonstrate to users how to effectively utilize these storage formats, thereby facilitating knowledge transfer in data storage best practices within the geospatial open-source community.

Nevertheless, continuous efforts are necessary to broaden the accessibility of DGGS for scientific and operational applications, especially in handling gridded data such as global climate and ocean models, satellite imagery, raster data, and maps. This would ideally require, for example, an agreement with entities such as the OGC on a registry of DGGS reference systems (similar to the EPSG/CRS/PROJ database).

4. Discussion and outlook

One of the big advantages of using DGGS via Xarray is data integration between multi-source, multi-sensor EO data and large global-scale ocean and climate models using the Pangeo environment, making data access and development practical and FAIR (Findable, Accessible, Interoperable, Reusable) in the community. Two additional directions to improve uptake and knowledge transfer could include:

1) The implementation of DGGS such as HEALPix, DGGRID-based equal-area DGGS (ISEA), rHEALPix, and the (currently) more industry-oriented DGGS (Uber H3, Google S2) on Xarray should be improved further, along with a more user-friendly API for re-gridding existing data into DGGS grids. Training materials and Pangeo sessions should be developed to demonstrate the use of DGGS in Xarray, aimed at enhancing the skill set of practitioners and researchers in geospatial data handling and spatial data analysis, and at professional and academic institutions.

2) DGGS-indexed reference datasets could be validated and used in case studies and instructional material for academic courses and workshops, focusing on the practical applications of data fusion, quick addressing of equal-area cell grids, AI, and socio-economic and environmental studies. In particular, the emerging ability to select cell ranges from different data sources and join and integrate them based only on cell ids could make partial data access and sharing more dynamic and easy.

Academic track
Omicum
14:30
14:30
30min
Mastering Security with GeoServer, GeoFence, and OpenID
Andrea Aime, Emanuele Tajariol

The presentation will provide a comprehensive introduction to GeoServer's own authentication and authorization subsystems. The authentication part will cover the various supported authentication protocols (e.g. basic/digest authentication, CAS, OAuth2) and identity providers (such as local config files, database tables and LDAP servers). It will also cover the recent improvements implemented with the OpenID integrations and the refreshed Keycloak integration.

It will explain how to combine various authentication mechanisms in a single comprehensive authentication tool, as well as provide examples of custom authentication plugins for GeoServer, integrating it into a home-grown security architecture. We’ll then move on to authorization, describing GeoServer's pluggable authorization mechanism and comparing it with an external proxy-based solution. We will explain the default service and data security system, reviewing its benefits and limitations.

Finally, we’ll explore the advanced authorization provider, GeoFence. The different levels of integration with GeoServer will be presented, from the simple and seamless direct integration to the more sophisticated external setup, along with GeoFence’s powerful authorization rules using:

  • The current user and its roles.
  • The OGC services, workspace, layer, and layer group.
  • CQL read and write filters.
  • Attribute selection.
  • Cropping raster and vector data to areas of interest.
Use cases & applications
Van46 ring
14:30
30min
Modernizing the National River and Lakes Cadastre by Transition to FOSS4G
Andrius Balciunas

This presentation is intended to introduce a project completed at the end of 2023, during which the Lithuanian National River and Lake Cadastre (https://uetk.biip.lt/) has been modernized by transferring GIS (and not only) solutions from commercial software to open source and by extending automated GIS data processing solutions. During the presentation, we will share not only the technological solutions we have adopted, but also our experience in changing the attitude of GIS specialists with experience of working with commercial GIS software towards open source.

The main technological components of the project included the development of a data management system using PostGIS and QGIS, the development of a map browser using OpenLayers and Vue.js (available as an open source project - https://github.com/AplinkosMinisterija/biip-maps-web), and the development of a service publishing solution based on QGIS Server. The project used Docker technology and GitHub Actions-based continuous deployment (CD), which should also be relevant to the audience.

Many of us know that building a system from the ground up is often much easier than upgrading an existing system that has been in place for a long time but has not been updated. This was the case for the National River and Lakes Cadastre as well. This process is particularly challenging when it comes to the modernisation of national information systems and cadastres, which are often subject to quite strict legislative control. There are also a number of challenges at the technical level: 1) old and outdated software that cannot be upgraded without overhauling the system, 2) integrations with other systems, 3) old infrastructure, 4) code that is closed and unmanageable by the organization, and 5) users who are working with the data, who are challenged by the new solutions. This is exactly the same set of problems that awaited the modernisation of the Lithuanian National Cadastre of Rivers and Lakes, managed by the Environmental Protection Agency.

The Lithuanian National Cadastre of Rivers and Lakes is a system for collecting, organizing and making available to other information systems and to the public data on rivers, lakes, ponds, as well as hydraulic engineering structures such as hydroelectric power plants, overflow culverts, fish ladders, fish passes, research stations, etc. The cadastre consists of three major components: the administration of GIS data, the publishing and viewing of online map services, and the provision of e-services (extracts, statistics). This cadastre is of particular importance as it is not only essential for the monitoring of the hydrographic network, but also for the protection of these features and the restriction of agricultural activities.

The digital cadastre of rivers and lakes was launched almost 10 years ago, and at that time it was one of the largest GIS projects in Lithuania. The cadastre was implemented using only commercial software: ArcGIS (ArcMap, ArcGIS Server, ArcGIS JS API), Oracle, Alfresco (document management system), etc. Over time, the Environment Agency was unable to upgrade the software of these systems due to licensing costs, and lost support as a result. The system soon showed its weaknesses: its availability was unacceptable (the web map took a minute to load at peak times), the incorrectly implemented database structure led to a large number of errors, and because the data was administered through a custom plug-in in the ArcMap environment, any changes were difficult to implement and support due to the old code and outdated versions of the ArcMap software.

We came into the modernisation project with the basic idea of migrating the core functionality to open source solutions, i.e. migrating exactly what is actually used, without carrying over the bloat of legacy software. In this presentation, we will share these insights from the project:
- Use of PostGIS for automated data filling, e.g. automated analysis of river tributaries, lengths, distances to parent river headwaters, etc. All of this is realized with the help of PostGIS functions;
- Database structure and use to ensure data integrity;
- Creating a map browser using OpenLayers and Vue.js as a reusable component that now also serves other information systems;
- Use of QGIS server and Docker;
- GitHub and continuous deployment to dev, staging and production environments.
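The automated attribute filling listed first can be illustrated in miniature. In PostGIS, planar lengths and positions along a line are computed server-side by functions such as ST_Length; the toy sketch below (pure Python, invented coordinates, not the cadastre's actual SQL) shows the same derivations:

```python
def length(line):
    """Planar length of a polyline given as [(x, y), ...]."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(line, line[1:]))

def distance_along(line, vertex_index):
    """Distance from the line's start (headwaters) to one of its vertices."""
    return length(line[:vertex_index + 1])

# Toy parent river with a tributary joining at the river's second vertex
parent = [(0, 0), (3, 4), (6, 8), (6, 12)]
print(length(parent))             # total river length
print(distance_along(parent, 1))  # confluence distance from the headwaters
```

Doing this inside the database means such attributes can be recomputed automatically whenever the hydrographic geometry changes, which is exactly the kind of automated data filling described above.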

Transition to FOSS4G
GEOCAT (301)
14:30
30min
QField 3 - Fieldwork redefined
Marco Bernasocchi, Matthias Kuhn

The mobile application QField is based on QGIS and allows fieldwork to be carried out efficiently based on QGIS projects, offline or online. Developments in recent months have added additional functions to the application that are useful for fieldwork. Examples are used to present the most important new features. Discover the most recent features like 3D-layers and point clouds handling, NFC and QR reader, printing of reports and atlases, elevation profiling of terrain and layers, multi-column support in feature form, azimuth values in the measuring tool, locked screen mode, stakeout functionalities, and many more.

State of software
QFieldCloud (246)
14:30
30min
SpectralIndices.jl: Streamlining spectral indices access and computation for Earth system research
Francesco Martinuzzi

Remote sensing has evolved into a fundamental tool in environmental science, helping scientists monitor environmental changes, assess vegetation health, and manage natural resources. As Earth observation (EO) data products have become increasingly available, a large number of spectral indices have been developed to highlight specific surface features and phenomena observed across diverse application domains, including vegetation, water, urban areas, and snow cover. Examples of such indices include the normalized difference vegetation index (NDVI) (Rouse et al., 1974), used to assess vegetation states, and the normalized difference water index (NDWI) (McFeeters, 1996), used to delineate and monitor water bodies. The constantly increasing number of spectral indices, driven by factors such as the enhancement of existing indices, parameter optimization, and the introduction of new satellite missions with novel spectral bands, has necessitated the development of comprehensive catalogs. One such effort is the Awesome Spectral Indices (ASI) suite (Montero et al., 2023), which provides a curated machine-readable catalog of spectral indices for multiple application domains. Additionally, the ASI suite includes not only a Python library for querying and computing these indices but also an interface for the Google Earth Engine JavaScript application programming interface, thereby accommodating a wide range of users and applications.
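Both indices named above are simple band arithmetic, e.g. NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with toy reflectance values (written in Python for illustration; SpectralIndices.jl itself implements the whole ASI catalog in Julia):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index (Rouse et al., 1974), in [-1, 1]."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters, 1996), in [-1, 1]."""
    return (green - nir) / (green + nir)

# Toy surface reflectances
print(ndvi(nir=0.45, red=0.08))    # dense vegetation: strongly positive
print(ndwi(green=0.25, nir=0.05))  # open water: strongly positive
```

A catalog like ASI stores exactly these formulas and band requirements in machine-readable form, so a library can evaluate any listed index once the user supplies the matching bands.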

Despite these valuable resources, there is an emerging need for a dedicated library tailored to Julia, a programming language renowned for its high-performance computing capabilities (Bezanson et al., 2017). Julia has not only established itself as an effective tool for numerical and computational tasks, but also offers the possibility to use Python within its environment through interoperability features. This interoperation adds a layer of flexibility, allowing users to access Python's extensive libraries and frameworks directly from Julia. However, while multiple packages are available in Julia to manipulate high-dimensional EO data, most of them provide different interfaces. Furthermore, leveraging PyCall to interface with Zarr files and other high-dimensional data formats is not practical: the inefficiency of cross-language data exchange and the overhead of cross-language calls significantly hinder performance, underlining the need for native Julia solutions optimized for such data tasks.

Recognizing the need for a streamlined approach to use spectral indices, we introduce SpectralIndices.jl, a Julia package developed to simplify the computation of spectral indices in remote sensing applications. SpectralIndices.jl provides a user-friendly, efficient solution for both beginners and researchers in the field of remote sensing. SpectralIndices.jl offers several features supporting remote sensing tasks:
- Easy Access to Spectral Indices: The package provides instant access to a comprehensive range of spectral indices from the ASI catalog, removing the need for manual searches or custom implementations. Users can effortlessly select and compute indices suitable for their specific research needs.
- High-Performance Computing: Built on Julia's strengths in numerical computation, SpectralIndices.jl provides rapid processing even for large datasets (Bouchet-Valat et al., 2023). Consequently, this makes it a time-efficient tool for handling extensive remote sensing data.
- Versatile Data Compatibility: SpectralIndices.jl supports a growing list of input data types. Furthermore, adding data types to the library does not slow down compilation, thanks to Julia's built-in package extensions, which allow conditional loading of dependencies.
- User-Friendly Interface: Designed with simplicity in mind, the package enables users to compute spectral indices with just a few lines of code. This ease of use lowers the barrier to entry for those new to programming or remote sensing.
- Customization and Community Contribution: Users can extend the package's capabilities by adding new indices or modifying existing ones. This openness aligns with the FAIR principles, ensuring that data is findable, accessible, interoperable and reusable.

By providing a straightforward and efficient means to compute spectral indices, the package helps users to streamline and accelerate software pipelines in Earth system research. Furthermore, it provides a consistent and unified interface to compute indices, improving the reliability and accuracy of research outcomes. Whether tracking deforestation, studying crop health, or assessing water quality, SpectralIndices.jl equips users with the tools needed for accurate, timely analysis.

The introduction of SpectralIndices.jl reflects a broader trend in scientific computing towards adopting high-performance languages like Julia, highlighting the importance of efficient data analysis tools in addressing complex environmental challenges. This development contributes to the democratization of data analysis, making advanced tools more accessible to a diverse range of users.

The SpectralIndices.jl package is open-source and hosted on GitHub (https://github.com/awesome-spectral-indices/SpectralIndices.jl), available for public access and contribution. It is licensed under the MIT license, which permits free use, modification, and distribution of the software. This approach encourages community contributions and fosters an environment of shared learning and improvement, ensuring that SpectralIndices.jl remains a cutting-edge tool for environmental analysis and research. Additionally, the code is commented and documented, facilitating both contribution and adoption. The code in the examples is run during the compilation of the online documentation, which keeps them reproducible. Finally, the software is tested using continuous integration through GitHub Actions, ensuring its correct execution in different use cases and environments.

Academic track
Omicum
14:30
30min
pg_featureserv - Publication of vector data with OGC API - Features
Jakob Miksch

This presentation introduces the pg_featureserv program. It is lightweight, written in Go, and is used to publish vector data from PostGIS databases using the OGC API - Features standard. First, the main features of pg_featureserv are presented, including installation and setup.

Special attention is given to the tool's numerous filter functions, which allow precise and efficient queries of spatial data. The usage of these functions is explained with examples.
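As a rough illustration of the kind of filtered query this enables, the sketch below builds an items URL with a CQL filter (host, collection, and attribute names are hypothetical; consult the pg_featureserv documentation for the exact parameters it supports):

```python
from urllib.parse import urlencode

# Hypothetical pg_featureserv endpoint and collection; the `filter`
# parameter carries a CQL expression, `limit` caps the feature count.
base = "http://localhost:9000/collections/public.roads/items.json"
params = {"filter": "type = 'motorway' AND lanes >= 2", "limit": 100}
url = f"{base}?{urlencode(params)}"
print(url)
```

Because the filter is evaluated inside PostGIS, only the matching features travel over the wire.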

In addition, the OGC API - Features standard is presented as an alternative to the traditional WFS for publishing spatial data. The differences between the two approaches are explained and the advantages of the new standard are highlighted.

State of software
LAStools (327)
15:00
15:00
30min
Creating a New River Network for Ireland
Seth Girvin

In 2024, Ireland's Environmental Protection Agency (the EPA), in collaboration with Compass Informatics, began creating a new national river network using highly accurate vector data provided by the national mapping agency Tailte Éireann. This will replace the current river network, which is based on 1:50,000 scale data.

The existing river network contains around 85,000 km of water channels, whereas the new dataset is almost 125,000 km in length. However, the new dataset contains gaps where water flow is not visible, such as through culverts under roads, and lacks essential attributes required for environmental monitoring such as flow direction and stream order. Currently, a project is underway to join gaps in this dataset, create flow lines through waterbodies such as lakes and transitional waters, add missing features, and connect the network through groundwater aquifers.

An online GIS editing portal, which will be the focus of this talk, was developed to support the project using open-source software. The front-end web GIS was built with the OSGeo projects OpenLayers and GeoExt, and other open-source geospatial JavaScript libraries including cpsi-mapview.
The backend uses MapServer and Python web services, building on numerous open-source geospatial libraries including NetworkX, Shapely, skgeom, and two libraries created by Compass Informatics: Wayfarer, a Python library for analysing geospatial networks, and Cascade, a Python library for applying stream orders to vector networks. Cascade will be released as an open-source library as part of this project.
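As a conceptual sketch of the gap problem (not the Wayfarer API), treating each channel segment as a graph edge makes gaps visible as extra connected components:

```python
# Treat each channel segment as an edge between its two endpoints and count
# connected components; gaps such as culverts show up as extra components
# that the editing work must reconnect. Coordinates are made up.
from collections import defaultdict

def components(segments):
    """segments: list of (start_point, end_point) tuples."""
    adj = defaultdict(set)
    for a, b in segments:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for node in adj:
        if node in seen:
            continue
        count += 1
        stack = [node]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n] - seen)
    return count

# Two chained segments plus one isolated reach -> 2 components.
net = [((0, 0), (1, 0)), ((1, 0), (2, 0)), ((5, 5), (6, 5))]
print(components(net))  # → 2
```

A fully connected network (one component) is a precondition for deriving flow direction and stream order downstream.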

Upon completion, the new river network will enhance the EPA's modelling and assessment capabilities across water and environmental domains. This includes sediment and flow modelling, catchment assessments, water quality monitoring, and delineation of river waterbodies. The new network will benefit many other organisations for applications such as fisheries and flood management and become a key component of Ireland's national data infrastructure.

Use cases & applications
GEOCAT (301)
15:00
30min
Facilitating advanced Sentinel-2 analysis through a simplified computation of Nadir BRDF Adjusted Reflectance
David Montero Loaiza

The Sentinel-2 mission, pivotal to the European Space Agency's Copernicus program, features two satellites with the MultiSpectral Instrument (MSI) for high-to-medium resolution (10-60 m) imaging in visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) bands. Its 180° satellite phasing allows for a 5-day revisit time at the equator, essential for Earth Observation (EO) tasks. Sentinel-2 Surface Reflectance (SR) is crucial in detailed Earth surface analysis. However, for enhanced accuracy in SR data, it is imperative to perform adjustments that simulate a nadir viewing perspective (Roy et al., 2016). This correction mitigates the directional effects caused by the anisotropy of SR and the variability in sunlight and satellite viewing angles. Such adjustments are essential for the consistent comparison of images captured at different times and under varying conditions. This is particularly critical for processing and analysing Earth System Data Cubes (ESDCs, Mahecha et al., 2020), which are increasingly used due to their organised spatiotemporal structure and the ease of their generation from cloud-stored data (Montero et al., 2023).

The MODIS BRDF/Albedo product presents spectral Bidirectional Reflectance Distribution Function (BRDF) model parameters, enabling the calculation of directional reflectance across any specified sensor viewing and solar angles. Building on this foundation, Roy et al. (2008, 2016) introduced a novel approach leveraging MODIS BRDF parameters, named the c-factor, for the adjustment of Landsat SR data. This adjustment produces Nadir BRDF Adjusted Reflectance (NBAR) by multiplying the observed Landsat SR with the ratio of reflectances predicted by the MODIS BRDF model for both the observed Landsat SR and a standard nadir view under fixed solar zenith conditions. Subsequently, Roy et al. (2017) expanded this method to include adjustments for multiple Sentinel-2 spectral bands (VIS to SWIR).
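As a minimal sketch of the c-factor arithmetic described above (the kernel values below are placeholders, not real RossThick-LiSparse outputs derived from sun and view angles):

```python
# Sketch of the c-factor idea: NBAR = SR * c, where c is the ratio of
# BRDF-modelled reflectance at nadir to that at the observed geometry.
# Kernel values are placeholders; real ones come from the sun/view angles.

def brdf(f_iso, f_vol, f_geo, k_vol, k_geo):
    """Kernel-driven BRDF model: f_iso + f_vol*K_vol + f_geo*K_geo."""
    return f_iso + f_vol * k_vol + f_geo * k_geo

def c_factor(f_iso, f_vol, f_geo, k_nadir, k_obs):
    """Ratio of modelled nadir reflectance to modelled observed reflectance."""
    return brdf(f_iso, f_vol, f_geo, *k_nadir) / brdf(f_iso, f_vol, f_geo, *k_obs)

c = c_factor(0.30, 0.10, 0.05, k_nadir=(-0.02, -1.2), k_obs=(0.15, -0.9))
nbar = 0.25 * c  # observed surface reflectance times the c-factor
print(c, nbar)
```

The appeal of the method is exactly this simplicity: once the kernels are evaluated, the adjustment is a per-pixel, per-band multiplication.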

While the c-factor method facilitates straightforward computation for individual Sentinel-2 images, there is a notable absence of a unified Python framework to apply this conversion uniformly across multiple images, especially for ESDCs derived from cloud-stored data.

To bridge this gap, we introduce “sen2nbar,” a Python package specifically developed to convert Sentinel-2 SR data to NBAR. This versatile tool can convert both individual images and ESDCs generated from cloud-stored data, thus streamlining the conversion process for Sentinel-2 data users.

The "sen2nbar" package, meticulously designed for simplicity, facilitates the direct conversion of Sentinel-2 Level 2A (L2A) SR data to NBAR through a single function. To streamline this process, the package is segmented into multiple modules, each dedicated to specific tasks within the NBAR computation pipeline. These modules include functions for extracting sun and sensor viewing angles from metadata, calculating geometric and volumetric kernels, computing the BRDF model, and determining the c-factor.

“sen2nbar” supports NBAR calculations for three distinct data structures:

  1. Complete scenes via SAFE files: Users can input a local SAFE file from a Sentinel-2 L2A scene. The package processes this file, generating a new folder where each spectral band is adjusted to NBAR at its original resolution. The adjusted images are saved as Cloud Optimised GeoTIFF (COG) files, with an option for users to choose standard GeoTIFF formats instead.

  2. Xarray Data Arrays via “stackstac”: For ESDCs obtained as xarray data array objects from a SpatioTemporal Asset Catalog (STAC) using stackstac and pystac-client, “sen2nbar” requires the xarray object, the STAC endpoint, and the Sentinel-2 L2A collection name. This information allows the package to access STAC for metadata retrieval necessary for adjusting the data cube. The spatial coverage and resolution in this scenario might differ from complete scenes, and "sen2nbar" adjusts only the specific area and timeframe retrieved for the given resolution.

  3. Xarray Data Arrays via “cubo”: When users have ESDCs formed as xarray data arrays through cubo, which builds upon stackstac and incorporates the STAC endpoint and the collection name as attributes, “sen2nbar” directly adjusts these to NBAR, utilising the methodology described in the stackstac case.

For the latter two scenarios, “sen2nbar” works without writing files to disk, instead returning an xarray data array object containing the NBAR values. The package is designed to handle available bands without errors for missing bands, acknowledging that users may not require all bands and might have generated ESDCs with selected bands. Additionally, if the input arrays are ‘lazy’ arrays, created using dask arrays (a default in stackstac or cubo), “sen2nbar” executes calculations in parallel, ensuring efficient computation of NBAR values.

Importantly, “sen2nbar” automatically harmonises SR data for images with a processing baseline of 04.00 or higher before performing NBAR, ensuring consistency and accuracy in the processed data.

"sen2nbar" efficiently computes NBAR values from Sentinel-2 L2A SR data. The software supports complete SAFE files processing as well as the adjustment of ESDCs sourced from STAC and COG files, utilising tools such as “stackstac” and “cubo”. This versatility is encapsulated in a streamlined design, allowing for the adjustment of various data formats through a single, user-friendly tool, adapted to diverse user requirements.

"sen2nbar" is anticipated to become a key resource for geospatial Python users, especially in Earth System research. This tool is set to improve analyses conducted by scientists and students by significantly reducing the time and effort traditionally spent on technical adjustments. Its impact is expected to be particularly profound for multitemporal analyses, facilitating more efficient and streamlined investigations. This includes Artificial Intelligence (AI) research, particularly for studies involving multidimensional EO data. By utilising "sen2nbar", AI-based research can achieve more reliable outcomes, enhancing the overall quality and credibility of the findings.

The “sen2nbar” package is open-source and readily available on GitHub (https://github.com/ESDS-Leipzig/sen2nbar) under an MIT License. This encourages contributions from the global community, fostering collaborative development and continuous improvement. While prior experience in Remote Sensing can be advantageous for users, it is not a prerequisite for using it. The package is equipped with comprehensive documentation and tutorials, all designed to be beginner-friendly and facilitate easy adoption of the package.

Academic track
Omicum
15:00
30min
Mergin Maps: an open source platform based on QGIS for data collection and collaboration
Saber Razmjooei

Mergin Maps simplifies field data collection, offering an open-source platform built on the power and familiarity of QGIS. Capture, share, and publish your geospatial data seamlessly with intuitive mobile apps and robust web tools.

Mergin Maps (MM) has the following components:
- Desktop: QGIS to set up and design your field survey
- QGIS MM plugin: to upload/download your data to/from your cloud service (Mergin Maps server)
- Mergin Maps mobile: an app based on QGIS with synchronisation tool allowing you to open your QGIS project and edit/capture data in the field
- Mergin Maps server: a service allowing you to store and synchronise the data between QGIS and mobile app.

There are other tools and APIs available to handle the data transfer programmatically. For a full list, see:
https://github.com/MerginMaps

State of software
QFieldCloud (246)
15:00
30min
OpenMapTiles - vector tiles from OpenStreetMap & Natural Earth Data
Tomáš Pohanka

OpenMapTiles is an open-source set of tools for processing OpenStreetMap data into zoomable and web-compatible vector tiles for use as highly detailed base maps. These vector tiles are ready to use in MapLibre, Mapbox GL, Leaflet, OpenLayers, and QGIS, as well as in mobile applications.

Dockerized OpenMapTiles tools and the OpenMapTiles schema are continuously upgraded by the community (simplification, performance, robustness). The presentation will demonstrate the latest changes in OpenMapTiles. The last release greatly enhanced cartography and map styling possibilities, in particular by enriching the road network, adding more POIs, and improving update performance. The latest version of Natural Earth brings updated data to the upper zoom levels, and the OpenMapTiles style shows all features in well-known colors for vector tiles. OpenMapTiles is also used for generating vector tiles from government open data provided by Swisstopo.

State of software
LAStools (327)
15:00
5min
geoserverx - a new CLI and library to interact with GeoServer
Francesco Bartoli, krishna lodha

geoserverx is a modern Python package that provides an efficient and scalable way to interact with Geoserver REST APIs. It leverages the asynchronous capabilities of Python to offer a high-performance and reliable solution for managing Geoserver data and services.
With geoserverx, users can easily access and modify data in Geoserver, such as uploading and deleting shapefiles, publishing layers, and creating workspaces, styles, etc. The package supports asynchronous requests alongside synchronous methods for the Geoserver REST API, which enables users to perform multiple tasks simultaneously, improving performance and reducing wait times.
Apart from being used in Python projects, geoserverx also provides CLI support for all of its operations, which makes it useful for people who want to avoid Python altogether.
In this talk we will introduce, for the first time, how geoserverx works and its underlying code design. We will also shed some light on upcoming modules to be integrated into geoserverx.
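To illustrate the concurrency pattern such a client builds on (simulated I/O, not geoserverx's actual API), here is a minimal asyncio sketch:

```python
# Five simulated REST calls awaited concurrently: total wall time is roughly
# one round trip, not five. asyncio.sleep stands in for the HTTP request.
import asyncio
import time

async def fake_rest_call(name, delay=0.1):
    await asyncio.sleep(delay)  # stands in for an HTTP round trip
    return f"{name}: done"

async def main():
    tasks = [fake_rest_call(f"workspace-{i}") for i in range(5)]
    return await asyncio.gather(*tasks)

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # ~0.1s total, not 5 x 0.1s
```

This is the performance argument for an async GeoServer client: bulk operations such as publishing many layers need not wait on each other.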

Use cases & applications
Van46 ring
15:05
15:05
5min
GeoServer Monitor PostgreSQL Extension – Persist the monitoring metrics of your GeoServer in a PostgreSQL database
Sangeetha Shankar

Optimal performance of GeoServers in production environments is essential to provide high quality of service to the users. A GeoServer deployed in a production environment may host several layers that serve data from multiple data sources (datastores). GeoServer offers a monitor extension that tracks the requests received by the GeoServer and collects information such as requested resources, response time, response status and so on. The monitor extension supports two methods of storing these metrics. The first option is memory storage, where the metrics on the last 100 requests are stored in memory. However, this storage is volatile and information is lost when the GeoServer is restarted. Additionally, this option is insufficient for GeoServers receiving several hundred requests every day. The second option is audit logging, which stores the metrics in a file on the server. However, a secondary application has to process them to analyze or visualize the data. Apart from these, the Hibernate Monitor community module was available to store the metrics in a database. However, this community module is not available for newer versions of the GeoServer and no longer seems to be maintained.

The GeoServer Monitor PostgreSQL module presented in this talk aims to overcome the aforementioned limitations by offering a solution to persist the metrics in a PostgreSQL database. This module is an extension to the official monitoring extension of the GeoServer. It fetches the metrics generated by the monitoring extension after a request is post-processed and persists them in a PostgreSQL database. The persistent storage of metrics enables both administrators and users of the GeoServer to analyze the performance of their GeoServer layers. The GeoServer Monitor PostgreSQL module offers a simple, low-level approach to writing records to the database through the use of native Java libraries and the PostgreSQL JDBC Driver. Fewer dependencies on external modules make this extension easy to maintain and update.

The module was developed in 2022 and has been installed in a GeoServer instance managed by the Institute of Transportation Systems, German Aerospace Center (DLR). This GeoServer instance receives more than 7000 requests every day. The request metrics are being persisted since October 2022 and bug fixes and improvements have been carried out during this test run. The open-source publication of the module is currently in progress and is expected to be completed by March 2024. This work is being carried out as a part of the DLR-funded cross-domain project called “Digitaler Atlas 2.0”.

State of software
Van46 ring
15:10
15:10
5min
Spatial Data Sharing and Implications: An Example from the Map Department of General Directorate of Land Registry and Cadastre, Türkiye
Salih Yalcin

In Türkiye, Presidential Decree No. 49 determines the procedures, principles, and standards for coordination among public institutions for Türkiye's National Geographic Information System (Türkiye Ulusal Coğrafi Bilgi Sistemi - TUCBS) and its infrastructure; for the establishment of goals and strategies; and for the generation and maintenance of geographic data within the thematic areas of geographic information, ensuring its currency, management, use, access, security, sharing, and distribution.
This proposal covers the project for coordinating the Standard Topographic Maps produced by the Map Department of the General Directorate of Land Registry and Cadastre, highlighting significant developments in map management processes and the successes of the project, along with the detailed use of open-source software. It examines efforts to digitize the 480,000 km² dataset that the Department has built through photogrammetric base map production at 1:5,000 scale since 1955. The characteristics of the raster data, focusing on deformation, distortion, and quality issues in scans at different resolutions, are investigated to assess their suitability for automation. The testing conducted within the project includes the coordination processes using QGIS on the Ankara 1:250,000 sheet, emphasizing the contribution of open-source software to the project. The flexibility and community-driven development of open-source software have facilitated more effective project management and customization of the software. Test results indicate the successful coordination of 1,967 raster sheets and demonstrate the feasibility of more extensive testing through remote working methods.
The proposal also dives into institutional requirements related to demands for 1:5,000 sheets, such as registry needs, storage requirements, usage through the Metadata GeoPortal (Harita Bilgi Bankası – HBB), and web presentation. The management of the GeoTIFF files used in the presentation with the open-source GeoServer is particularly emphasized, illustrating how storage needs change during presentation. The use of open-source software is highlighted for its cost-effectiveness, sustainability, and increased access to a broad user base, proposing a model for the widespread adoption of this approach in similar projects.
In conclusion, this work emphasizes improvements in the map management processes of the General Directorate of Land Registry and Cadastre and the successes achieved in the coordination of Standard Topographic Maps, advocating for the adoption of this open-source approach in comparable projects.

Use cases & applications
Van46 ring
15:15
15:15
5min
Styling Natural Earth with GeoServer and GeoCSS
Andrea Aime

Natural Earth is a public domain map dataset available at 1:10 million, 1:50 million, and 1:110 million scales. Featuring tightly integrated vector and raster data, Natural Earth lets one build a variety of visually pleasing, well-crafted maps with cartography or GIS software.

GeoServer GeoCSS is a CSS-inspired language that lets you build maps without consuming your fingertips in the process, while providing all the same abilities as SLD.

In this presentation we’ll show how we have built a world political map and a world geographic map based on Natural Earth, using CSS, and shared the results on GitHub. We’ll share with you how simple, compact styles can be used to prepare a multiscale map, including:

  • Leveraging CSS cascading.
  • Building styles that respond to scales in ways that go beyond simple scale dependencies.
  • Various types of labeling tricks (conflict resolution and label priority, controlling label density, label placement, typography, labels in various scripts, label shields and more).
  • Quickly controlling colors with LessCSS inspired functions.
  • Building symbology using GeoServer's large set of well-known marks.
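For a taste of the syntax, a minimal GeoCSS sketch for a hypothetical rivers layer, combining cascading with a scale rule (the attribute names are invented):

```css
/* base rule: every river gets a thin blue stroke */
* {
  stroke: #4a80f5;
  stroke-width: 0.5;
}

/* cascading: below 1:10M, widen major rivers and label them */
[@sd < 10M] [class = 'major'] {
  stroke-width: 2;
  label: [name];
  font-fill: #306090;
}
```

The equivalent SLD would need nested Rule, Filter, and TextSymbolizer elements; cascading keeps the multiscale logic in a few lines.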

Join this presentation for a relaxing introduction to simple and informative maps.

Use cases & applications
Van46 ring
15:20
15:20
5min
Translation Management in FOSS - The Case of GeoServer
Alexandre Gacon

One of the keys for FOSS to reach a large audience across countries is offering the possibility to use different languages in the system, especially for software with a user interface.

Through the case of GeoServer, we will see what can be translated in a piece of software, which features are expected to provide good translation management, and what translation means in terms of the development process.

Community & Foundation
Van46 ring
15:25
15:25
5min
An eMOTIONAL SDI - What makes an SDI user friendly?
Antonio Cerciello, Joana Simoes

SDIs have come a long way since the times of OGC Web Services (e.g. WMS, WFS, etc.). Today, they are supported by a new breed of modern OGC standards (e.g. OGC API), which embrace mainstream web technologies such as REST, JSON and OpenAPI. These standards are already implemented by a variety of servers and clients, including FOSS. How did this technological modernization impact the experience of end users?
In this talk, we’ll share the experience from a research project, which included a variety of stakeholders that had the requirement of having to produce and share geospatial data among them. An SDI was assembled, using a mix of modern and more established standards, implemented through a stack of FOSS4G software.
We would like to discuss some lessons learned from this project, including the need to identify strategies that can foster the adoption of the SDI by the stakeholders. As Brenda Laurel, an independent scholar, stated: “Design isn’t finished until somebody is using it.”

Open standards and interoperability for geospatial
Van46 ring
15:30
15:30
30min
Coffee
Van46 ring
15:30
30min
Coffee
GEOCAT (301)
15:30
30min
Coffee
LAStools (327)
15:30
30min
Coffee
QFieldCloud (246)
15:30
30min
Coffee
Omicum
16:00
16:00
30min
Collectively mapping the FOSS geospatial ecosystem to better understand it
Ilie Codrina

In this talk, the authors take you along the development road of an initiative, community-led and supported by ESA, to map the complex and dynamic ecosystem of open source for geospatial solutions. It started in 2016 as a volunteer effort to understand the connections and dependencies between geospatial FOSS by summarily documenting them in a spreadsheet. It continued with the development of a resources platform for geospatial data exploitation that combined modern, efficient data collection and representation (no more spreadsheets!) with a significantly more thorough project documentation process, as well as clear steps towards community building. With more than 300 FOSS projects documented, the team is taking the next big leap: figuring out how to not only map the ecosystem but also extract meaningful quality metrics that could lead to a better, more robust understanding of the open source for geospatial ecosystem.

Community & Foundation
Van46 ring
16:00
30min
Planning for rainy days: optimizing school calendars with precipitation data and QGIS
Meri Malmari

According to the study “Rainy days and learning outcomes: Evidence from Sub-Saharan Africa” (Bekkouche, Houngbedji, Koussihouede, 2022), learning outcomes in Sub-Saharan Africa are negatively affected by rainy days, mainly through the mechanism of teacher abstention. School calendars are largely shared within and across countries without taking local climatic conditions into account. This effectively means that the total number of school days in an academic year may differ across districts or other administrative levels.

The objective of this collaboration between IIEP and Gispo was to design a process that would enable any policy-maker in the world to look for patterns in periods of heavier precipitation in their country and to propose updated school calendars accordingly. We used precipitation data gathered by the Global Precipitation Measurement (GPM) international satellite mission and distributed by Google Earth Engine. The QGIS Processing framework was used to write algorithms for processing the raster data and to look for periods which were uninterrupted by heavy rainfall.
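The core of such an algorithm can be sketched independently of QGIS. Below is a minimal version (threshold and rainfall values are made up) that finds the longest run of days below a heavy-rain threshold:

```python
# Given daily rainfall totals (mm), find the longest run of consecutive days
# below a heavy-rain threshold: a candidate window for scheduling school days.

def longest_dry_window(rain_mm, threshold=20.0):
    """Return (start_index, length) of the longest run below threshold."""
    best = (0, 0)
    start = length = 0
    for i, r in enumerate(rain_mm):
        if r < threshold:
            if length == 0:
                start = i
            length += 1
            if length > best[1]:
                best = (start, length)
        else:
            length = 0
    return best

daily = [5, 0, 30, 2, 1, 0, 4, 25, 3, 6]
print(longest_dry_window(daily))  # → (3, 4)
```

In the real workflow the same scan runs per pixel over GPM raster time series inside the QGIS Processing framework.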

In this talk we will present results of the algorithms and go over the background and implementation of the process in more detail. We will also present a use case for using the algorithm in practice.

FOSS4G in education and research
GEOCAT (301)
16:00
30min
QGIS Server in an enterprise environment
Jakob Miksch

This presentation describes the functionalities and integration of QGIS Server, with a focus on the features we use in our company. A particular strength of QGIS Server lies in its ability to display geodata on the web exactly as it appears in QGIS Desktop. This is particularly relevant if an extensive selection of different QGIS-based map styles is available. QGIS Server is configured exclusively via a QGIS project, which makes it very easy to set up. However, it should be noted that no automated configuration is currently possible, for example via an API.

The QGIS Server supports various OGC standards, including WMS (Web Map Service), WFS (Web Feature Service), WCS (Web Coverage Service) and WMTS (Web Map Tile Service). In addition, it offers a prototype implementation of OAF (OGC API - Features) and integrates many features of QGIS, including the export of maps prepared in the desktop.

This presentation will also cover the installation of QGIS Server using Docker. Furthermore, the integration of QGIS Server into middleware solutions will be discussed to enable its use in complex GIS environments.

Another focus will be on the configuration of published layers and the selection of published attributes. We will also discuss how to store projects in a database and how to connect to the database using PG_SERVICE files.
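For illustration, a minimal pg_service entry (hypothetical values) that a QGIS project's datasources can reference by service name instead of embedding connection details:

```ini
# ~/.pg_service.conf (or the file pointed to by PGSERVICEFILE);
# referenced from QGIS as service=qgis_projects
[qgis_projects]
host=db.example.com
port=5432
dbname=gis
user=qgis_server
```

Keeping credentials in the service file means the same QGIS project can move between desktop and server environments unchanged.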

Use cases & applications
LAStools (327)
16:00
30min
SDIs to open data platforms, the geOrchestra way
Florent Gravin

geOrchestra is a long-established open-source Spatial Data Infrastructure (SDI), grounded in the pillars of OSGeo:
- GeoNetwork
- GeoServer
- MapStore
- OpenLayers

This solution has proven to be exceptionally robust, having been deployed at various levels including national, regional, institutional, academic, and research centers. As the landscape of metadata management transitions, embracing open data catalogs, data-centric usages, and modern applications, SDIs must evolve and adapt to this new paradigm.

In our presentation, we will explore how the geOrchestra community, with support from the GeoNetwork community, has modernized its technology stack and offerings. This includes:
- A comprehensive system for data ingestion and preparation.
- A collaborative editor for open and geo-metadata.
- A unified portal for accessing both open data and geo-data.
- A versatile API that addresses a range of data use cases, including searching, paging, processing, analyzing, and aggregating datasets.
- Enhanced capabilities for data visualization.

These advancements collectively contribute to the development of a sophisticated open-source data platform, incorporating a streamlined data ingestion system and more.

Open standards and interoperability for geospatial
QFieldCloud (246)
16:00
30min
Soil Erosion Prediction using Earth Observation Data and Ensemble Models
Ayomide Oraegbu, Emmanuel Jolaiya

Soil erosion, the displacement of topsoil by water and wind, poses a significant threat to global land health, impacting food security, water quality, climate change, and ecosystem stability. Earth Observation (EO) and remote sensing technologies play a crucial role in monitoring and assessing soil erosion, offering valuable spatial and temporal data for informed decision-making. This paper applied three Machine Learning (ML) models, namely the XGBoost, LightGBM, and CatBoost classifiers, to perform soil erosion classification in the European Union (EU) region. The data used in this study were sourced from Kaggle, a large repository of community-published machine learning models and data, and include several EO datasets, namely Landsat 7 seasonal Analysis Ready Data (ARD), BioClim v1.2 historical (1981-2010) average climate data using the CHELSA classification system, annual MODIS EVI data, climatic variables (water vapour, monthly snow probability, annual MODIS LST in daytime or night time, annual CHELSA rainfall V2.1), human footprint (Hengl et al., 2023), land cover, landform and landscape parameters (Hengl, 2018), and lithology (Hengl, 2018). The dataset has a total of 3754 sample points and 139 features. A detailed description of the dataset features can be found here.

During the Exploratory Data Analysis (EDA) process, the visual relationship between the Landsat bands and the target variable (erosion category) revealed that the Near Infrared (NIR), Short-Wave Infrared I (SWIR1), Short-Wave Infrared II (SWIR2), and Thermal bands were more effective in differentiating between the various erosion categories than the other bands. This insight guided the feature engineering process. As suggested by Puente et al. (2019), vegetation indices could prove effective in predicting soil erosion. Consequently, we computed various vegetation indices such as the Normalised Difference Water Index (NDWI), Normalised Difference Infrared Index (NDII), and Shortwave Infrared Water Stress Index (SIWSI), and applied the Tasseled Cap Transformation, which yields Brightness, Wetness and Greenness, to augment the features. To capture textural variations at each pixel location, elevation- and slope-based measures were computed. The Topographic Position Index (TPI) was computed for each position using a 100,000-metre radius, by calculating the mean elevation of points within the radius and subtracting it from each point's elevation. Other features computed were the Topographic Wetness Index (TWI), Aspect, LS-Factor, and the Stream Power Index (SPI), which reflects the erosive power of streams. Leveraging the thermal band, Land Surface Temperature (LST) was derived. As noted by Ghosal (2021), combining LST with temporal data can identify regions vulnerable to soil erosion.
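A minimal sketch of that TPI step (made-up points, brute force rather than gridded raster processing):

```python
# For each point, subtract the mean elevation of its neighbours within a
# radius from the point's own elevation. Positive TPI = locally elevated,
# negative = locally depressed. Coordinates and elevations are made up.
from math import hypot

def tpi(points, radius):
    """points: list of (x, y, elevation); returns one TPI value per point."""
    out = []
    for x, y, z in points:
        neigh = [pz for px, py, pz in points if hypot(px - x, py - y) <= radius]
        out.append(z - sum(neigh) / len(neigh))
    return out

pts = [(0, 0, 100.0), (1, 0, 120.0), (2, 0, 80.0)]
print(tpi(pts, radius=5))  # all within radius: mean = 100 → [0.0, 20.0, -20.0]
```

Production pipelines compute the same statistic with focal raster operations rather than this O(n²) loop.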

The development of these models incorporated Scikit-Learn's Recursive Feature Elimination (RFE) in the preliminary feature selection process, using the XGBoost model as the estimator. RFE returns “n” features by training the model on all features, ranking them by importance, and removing the least important ones until “n” features remain; here “n” was set to 200. Afterward, an XGBoost model was trained with the 200 features, and Scikit-Learn's RandomizedSearchCV was employed to optimise its hyperparameters, leading to an improved F1 score for the XGBoost classifier. Using the XGBoost classifier's feature importance ranking, the top 155 features were selected for use in the final ensemble model. To provide a more reliable estimate of the training model's performance, Scikit-Learn's StratifiedKFold was used with n_splits set to 5 and the erosion category as the stratification variable, achieving a balanced class representation in each fold during training. For modelling the erosion categories, an ensemble voting classifier combined predictions from the three optimised gradient boosting models (XGBoost, LightGBM, CatBoost) using a "soft" voting scheme. This approach aimed to improve accuracy and reduce overfitting compared to the individual models. The confusion matrix was used to evaluate the ensemble's performance, considering precision, recall, and F1-score metrics. These metrics assess the model's ability to correctly identify positive and negative cases, with a higher F1 score indicating better overall performance.
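The soft-voting step can be sketched as follows (the probability vectors are made-up stand-ins for the three boosted models' outputs, not real predictions):

```python
# "Soft" voting: average the class-probability vectors from several models
# and pick the argmax of the averaged vector.

def soft_vote(prob_sets):
    """prob_sets: one probability vector per model, all over the same classes."""
    n = len(prob_sets)
    avg = [sum(p[i] for p in prob_sets) / n for i in range(len(prob_sets[0]))]
    return max(range(len(avg)), key=avg.__getitem__), avg

# Classes: 0=No Gully/badland, 1=Gully, 2=Badland, 3=Landslides
model_probs = [
    [0.60, 0.20, 0.15, 0.05],   # stand-in for XGBoost
    [0.50, 0.30, 0.10, 0.10],   # stand-in for LightGBM
    [0.55, 0.25, 0.15, 0.05],   # stand-in for CatBoost
]
label, avg = soft_vote(model_probs)
print(label, [round(a, 3) for a in avg])
```

Averaging probabilities (rather than counting hard votes) lets a confident model outweigh two weakly disagreeing ones, which is why soft voting tends to smooth out individual-model overfitting.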

The weighted F1 score reached 0.86, and the weighted precision and recall were both 0.86, indicating that the proposed method using various EO data to predict soil erosion categories (No Gully/badland, Gully, Badland, Landslides) performed well. Specifically, No Gully/badland (0.89, 0.91) and Landslides (1.00, 1.00) had higher precision and recall values, meaning that the model can correctly identify areas in these erosion categories with few false positives and false negatives. Badland (0.49) had the lowest recall, indicating that the model failed to identify a substantial portion of this category.

According to the feature importance analysis, Year, Latitude, Topographic Wetness Index (TWI), Longitude, Maximum Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), Minimum Annual Water Vapour, Mean of Slope, Weighted Difference Vegetation Index (WDVI), Normalised Difference Snow Index (NDSI) and Standard Deviation of Slope emerged as the top ten factors influencing soil erosion, indicating that topographic factors and vegetation indices were important for predicting soil erosion. Year was the most important feature, showing that temporal trends have a strong influence on soil erosion prediction.

In conclusion, this project successfully explored the potential of ensemble learning and EO data for classifying soil erosion, highlighting its promising role in addressing this crucial environmental issue. The proposed framework indicates that Topographic indices like the TWI and vegetation indices like the WDVI hold valuable information for predicting soil erosion. Furthermore, band combinations using near-infrared (NIR), SWIR1, SWIR2, and thermal bands can significantly improve the classification of soil erosion categories. Crucially, EO data like digital elevation models (DEMs) and Analysis Ready Landsat data serve as the foundation for accurate soil erosion prediction. The proposed approach to incorporate multi-temporal EO data offers exciting prospects for even more accurate soil erosion classification.

Academic track
Omicum
16:30
16:30
45min
Changing the mindset of "Open Source is just for those who can't afford to pay licenses"
Miriam Gonzalez

It makes me very happy that this year there is a topic focused on "Building a business with FOSS4G". Last year I attended other geospatial industry events where I spoke with people from different organizations. When I mentioned that I am an open source advocate, a very recurrent comment was that it is great that open source exists because it can help students who can't afford licenses; another was that open source is for companies with no money. From my point of view, we, the community, need to find a way to communicate the values and benefits of open source better, so we can change this mindset that still exists.

More than a talk, I would like to organize a panel with 3-4 open source entrepreneurs in which we have an open discussion about how we can change that mindset and drive more business opportunities together.

Community & Foundation
Van46 ring
16:30
30min
Complexity of Land use planning - simplicity from FOSS4G
Riku Oja, Sanna Jokela

Land use planning is a complex process involving legal frameworks, decision making, participation with the community and skills for making maps that are artistic but still comprehensible to both experts and laypeople. In the end, however, land use maps are “just” geographical data. They usually consist of an area (a town or a region) and smaller features (blocks, land use areas, lines, points of interest) with lots of different regulations. Attached are different types of documents (decisions, reports) and events describing when and how the process goes from phase to phase.

Harmonisation of data for national and global use has been the aim in the geospatial field for a very long time. Now in Finland there is an ambitious scheme to gather all the land use plans from municipalities and regions together in a harmonised manner in an open API service. This is a huge opportunity for FOSS4G since there are no tools available in any software to do this.

In this talk we shall briefly go through the current situation of land use planning in Finland. We focus on how to use PostGIS and QGIS in land use planning and bring simplicity to the complex database models. We present two use cases: a regional land use plan, which can be handled with QGIS attribute forms, and detailed zoning and land use plans, which require QGIS plugin development. We also describe our architecture for the PostGIS database and its related services, such as how data is imported to and exported from the national land use planning API.

Transition to FOSS4G
LAStools (327)
16:30
30min
Learning paths with FOSS4G
Elisa Hanhirova, Reetta Lindberg

There are multiple ways to learn and teach the use of FOSS4G software. New users usually struggle with how to combine different platforms smoothly and how to get the most out of them. One way to structure the learning is to introduce learning paths that articulate the learning goals and the most useful route through the different learning modules and courses.

Most e-learning platforms support some kind of learning path as a way to structure learning, but learning paths also apply to contact training and online training. They help the learner build their knowledge in a structured way, and can be either visual representations of how to navigate the courses or built into your platform. The purpose is to guide students from their current level of competence towards a better one. Learning paths need to be structured so that the learner can track their progress and choose a different path if their learning needs change. They give the learner more flexibility and a sense of empowerment in the learning process.

The course organiser needs to plan systematically when constructing course content to ensure that courses do not remain disjointed entities but are logically interconnected. When designing each course, it is essential to define the learning objectives and the level of expertise the course aims to achieve. Throughout the planning process, it is crucial to consider for whom the course is intended and what level of skills participants are expected to have. The planned learning objectives should be at the heart of the course and should support the student's interest in the subject. To mark the competences gained or the completion of the course, the learner should receive some form of feedback or a certificate of completion. These are an important part of keeping up the learner's motivation and engaging them in the learning process.

Once the course organiser has analytically examined the content of their course offerings, they can begin to plan various learning paths. It may be necessary to build new courses or modify existing content so that pathways become seamless and learning can build upon previously acquired skills. Learning paths should allow learners to join at any stage, avoiding the need to start from the beginning every time, so that they can continue their journey based on their skills and learning objectives.

A learning path can cover just the basics of a FOSS4G package, focusing on the essential skills the learner needs to use the software efficiently. In the best case, learning paths move from basic to advanced skills, and learners discover they need other FOSS4G software to upgrade their workflows and skills. A learning path could steer the learner from basic QGIS skills to using geodatabases with PostGIS and PostgreSQL, or to plugin development and mobile apps that enhance their own workflow.

FOSS4G in education and research
GEOCAT (301)
16:30
30min
SDI maintenance DevOps style
Paul van Genuchten

At ISRIC - World Soil Information we increasingly maintain our data services through CI/CD pipelines configured via Git, from both the service and the content perspective. The starting point is the metadata records of our datasets, stored in Git. With every change to a record, the relevant catalogues (pycsw) and any relevant web services (mapserver) are updated.

These pipelines are reproducible, and there are never inconsistencies between catalogue content and the services. On top of that, our users can report issues (or even improvement suggestions) directly through Git.

The stack is built on proven OSGeo components. A tool, pyGeoDataCrawler, brings the power of GDAL and pygeometa to CI/CD scripting. It crawls the files in a folder and extracts relevant metadata, then prepares a mapserver configuration for that folder while updating the metadata with the relevant service URLs.

Typical use cases for this stack are a search interface to any file-based data repository, or a participatory data catalogue for a project. At the conference we hope to hear from you whether any of these components could be relevant to your cases, or if there are similar initiatives we can contribute to or benefit from.

What's next? At ISRIC we receive and ingest a lot of soil data from partners. Harmonizing this data is a huge effort. Via automated pipelines and interaction with the submitters via Git comments, we hope to improve this aspect of the data management cycle as well.

Use cases & applications
QFieldCloud (246)
16:30
30min
Spatiotemporal variation of Land Surface Temperature and their relationship with multispectral indices in Dhaka city
Sourav Karmakar

Land Surface Temperature (LST) serves as a crucial indicator for assessing urban living conditions, particularly in the context of rapid urbanization and population growth. As cities like Dhaka continue to expand both vertically and horizontally, the transformation of green spaces, open areas, and water bodies into built-up structures significantly alters the urban thermal environment, leading to elevated LST. This study delves into the spatiotemporal variation of LST in Dhaka city and explores its relationship with three key remote sensing indices: Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Normalized Difference Built-up Index (NDBI). The primary objective of this research is to derive LST patterns and trends over time while examining their association with changes in vegetation, water, and built-up areas. Landsat 8 was selected as the primary data source for LST retrieval in this study due to its suitability for capturing detailed and accurate thermal information. One of the key advantages of Landsat 8 is its Thermal Infrared Sensor (TIRS), which provides high-quality thermal imagery in two spectral bands (Band 10 and Band 11) between wavelengths of 10 and 12 μm, specifically designed for LST estimation. For this study, Landsat 8 data from the summer seasons (March to August) of 2015, 2018, and 2021 were utilized for analysis. The mono-window algorithm was employed to retrieve LST values from the Landsat 8 imagery. Only images with cloud cover of less than 5% were considered for inclusion in the study to ensure data quality and accuracy. Following this selection process, a median composite was generated from the available image collection for each year, producing a single representative image per year for subsequent analysis.
The study then computed three key multispectral indices: the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Normalized Difference Built-up Index (NDBI), using their standard formulas. Next, the entire study area was divided into a grid of 500 × 500 m cells and one centroid was generated per cell, resulting in a total of 1205 centroids. At these fixed points, raster values for LST, NDVI, NDWI, and NDBI were sampled. The final step was a correlation analysis based on the values obtained at these points, providing insights into the relationships between LST and the multispectral indices across the study area.
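The standard formulas referenced above are simple normalized band differences: NDVI = (NIR − Red)/(NIR + Red), NDWI (McFeeters) = (Green − NIR)/(Green + NIR), and NDBI = (SWIR − NIR)/(SWIR + NIR). A minimal sketch with toy reflectance values (hypothetical numbers; in the study these are Landsat 8 bands sampled at the grid centroids):

```python
import numpy as np

# Two toy pixels: a vegetated one and a built-up one (illustrative values).
green = np.array([0.10, 0.08])
red   = np.array([0.08, 0.12])
nir   = np.array([0.40, 0.15])
swir1 = np.array([0.20, 0.30])

ndvi = (nir - red) / (nir + red)      # vegetation abundance
ndwi = (green - nir) / (green + nir)  # water abundance (McFeeters)
ndbi = (swir1 - nir) / (swir1 + nir)  # built-up density

# The vegetated pixel scores high on NDVI; the built-up pixel on NDBI.
print(ndvi.round(2), ndwi.round(2), ndbi.round(2))
```

In the study these arithmetic operations run on full rasters in Google Earth Engine rather than on numpy arrays, but the per-pixel formulas are the same.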
The results of the analysis reveal significant changes in LST distribution across Dhaka city over the study period. From 2015 to 2021, there was a notable 58.73% net increase in areas exhibiting surface temperatures exceeding 32 °C, indicative of urban heat island effects exacerbated by urbanization. Conversely, there was a 5.09% reduction in areas with temperatures below 24 °C from 2018 to 2021, suggesting a decline in cooler microenvironments within the city. These findings underscore the escalating thermal stress faced by urban residents and the urgent need for effective urban planning strategies to mitigate temperature extremes. In parallel with changes in LST, our analysis also examined variations in NDVI, NDWI, and NDBI values over the study period. NDVI and NDWI, which are indicative of vegetation and water abundance, exhibited decreasing trends, reflecting the ongoing urbanization and reduction of green and blue spaces within Dhaka city. Conversely, NDBI, a measure of built-up density, showed a significant increase over the study period, highlighting the proliferation of built-up structures in the urban landscape. To elucidate the relationship between LST and the remote sensing indices, correlation analyses were conducted. Negative correlations were observed between LST and both NDVI and NDWI, with R2 values ranging from 0.29 to 0.30 and 0.08 to 0.14, respectively. These findings suggest a moderate to poor negative correlation, indicating that as vegetation and water cover decrease, surface temperatures tend to rise. However, in Dhaka city, water bodies constitute only a small fraction of the land cover compared to built-up areas. As a result, the influence of water on local temperature variations might be minimal, leading to weak correlations between LST and NDWI. 
Conversely, a robust positive correlation was observed between LST and NDBI, with R2 values ranging from 0.57 to 0.62, indicating a strong association between increased built-up density and elevated surface temperatures. The findings of this study shed light on the complex interplay between land surface temperature dynamics and urban indices in Dhaka city. They underscore the importance of sustainable urban planning initiatives aimed at preserving green spaces, enhancing water bodies, and mitigating the adverse effects of urban heat islands. By integrating these findings into urban development policies and practices, stakeholders can work towards creating more resilient and livable urban environments for the residents of Dhaka.
This research leveraged the capabilities of open-source geospatial tools, including Google Earth Engine (GEE) for analysis and QGIS for map visualization. It holds particular relevance for the FOSS4G Europe academic audience due to its emphasis on using free and open-source geospatial cloud platforms and software for addressing real-world challenges related to urbanization, land use change, and environmental monitoring. By showcasing the integration of open-source tools such as GEE and QGIS in the study, attendees can gain insights into the practical applications of these platforms for analyzing and understanding urban thermal dynamics and their implications for sustainable urban planning.

Academic track
Omicum
17:15
17:15
45min
Removing emissions from freight transportation (with help of GIS and FOSS)
Jaak Laineste

Transportation emits about 20% of greenhouse gases (GHG) worldwide and in Europe, and commercial freight accounts for about half of that contribution to what is now humankind's largest problem. My talk will cover some known and a few secret tricks on how the transportation sector plans to achieve zero carbon emissions, with the help of smart people, geodata, science, and various technologies like GIS and GenAI. At Trimble, we manage about 10% of all European transcontinental freight. We enable and help manage and reduce emissions through cooperation with the largest shippers (and emitters) in the world, one shipment at a time, eventually affecting every transport mode, vehicle, vessel, train, and even airplane. However, in detail it is not that simple or cheap: there are technical, commercial, and even political challenges. For this sector, sustainability and tackling climate change are already the second largest issue, so I hope the talk is insightful for conference visitors as well.

Keynote
Van46 ring
18:00
18:00
60min
BoF
Van46 ring
18:00
60min
BoF
GEOCAT (301)
18:00
60min
BoF
LAStools (327)
18:00
60min
BoF
QFieldCloud (246)
09:15
09:15
45min
Leading with Open Source: Driving Innovation from Ground to Space
Stefanie Lumnitz

Over the past fifty years, space technology has dramatically expanded our knowledge of Earth’s systems. Today, the challenge is to utilize the wealth of technology and big data from space to address pressing global challenges like climate change effectively.

The mature open-source ecosystem, long recognized as a catalyst for innovation and collaboration, supports sustainable initiatives that transcend coding to encompass governance and community engagement. Practices such as transparent collaboration, community-driven development, and iterative innovation accelerate development and foster a sustainable, inclusive model for global cooperation. The frameworks used by open-source projects provide essential lessons for addressing complex global challenges like climate change, underscoring the imperative for open-source leaders to help adopt and maintain these practices in mainstream scientific, political and industrial innovation. This leadership is key to initiating significant grassroots transitions, enabling communities and individuals to engage meaningfully in broad-scale strategies.

Together we will explore the role of the geospatial open-source innovation ecosystem as a collaborative endeavor linking policy, industry and society. This discussion will center on transitioning from Earth Observation Science and technology to impactful action. We will examine current hurdles and prospects of open-source innovation, connecting ecosystem-wide views to individual contributions. Highlighting open-source working practices, we aim to underscore the importance of fostering open-source leadership to encourage and spread grassroots innovations that empower communities.

Keynote
Van46 ring
10:00
10:00
30min
Coffee
Van46 ring
10:00
30min
Coffee
GEOCAT (301)
10:00
30min
Coffee
LAStools (327)
10:00
30min
Coffee
QFieldCloud (246)
10:00
30min
Coffee
Omicum
10:30
10:30
30min
Creating Interoperable Tiled Maps
Joana Simoes

Tiled maps are the backbone of most web applications that show geospatial information. Before OGC API - Tiles was approved last year, there was no truly interoperable way of creating these maps using a resource-oriented architecture and JSON encodings.
OGC API - Tiles formalises what people have been doing for years with 'xyz' tilesets, but it also enables clients to create a better user experience by providing metadata such as title, description, or available zoom levels.
In this talk we'll provide an overview of the standard and discuss its advantages compared to other standards and specifications like WMTS or TileJSON. We'll illustrate the benefits of interoperability with an example that uses FOSS4G software implementing OGC API - Tiles.
Finally, we'll point out some resources available to anyone who wishes to develop and validate an OGC API - Tiles implementation.
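A sketch of what that metadata buys a client: given tileset metadata (the JSON below is made up for illustration, not copied from a live server or from the standard's exact schema), the client can discover the tile URL template and expand it, instead of hard-coding an 'xyz' pattern.

```python
import json

# Hypothetical tileset metadata in the spirit of an OGC API - Tiles
# response; field names and values here are illustrative.
metadata = json.loads("""
{
  "title": "Example basemap",
  "tileMatrixSetLimits": [{"tileMatrix": "0"}, {"tileMatrix": "12"}],
  "links": [{
    "rel": "item",
    "href": "https://example.com/tiles/WebMercatorQuad/{tileMatrix}/{tileRow}/{tileCol}"
  }]
}
""")

# Discover the template from the links, then expand it for one tile.
template = next(l["href"] for l in metadata["links"] if l["rel"] == "item")
url = (template.replace("{tileMatrix}", "3")
               .replace("{tileRow}", "2")
               .replace("{tileCol}", "5"))
print(url)  # https://example.com/tiles/WebMercatorQuad/3/2/5
```

Because the template, title, and zoom limits come from the service itself, the same client code works against any conforming server.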

Open standards and interoperability for geospatial
LAStools (327)
10:30
30min
Insights on Earth Observation cloud platforms from a user experience viewpoint
Margherita Di Leo

The European Strategy for Data aims at creating a single market for data sharing and exchange to increase the European Union’s (EU) global competitiveness and data sovereignty. Additionally, emphasis is put on the need to prioritize people's needs in technology development and to promote EU values and rights.
The EU has largely invested in making data accessible. Examples of this are the Copernicus Programme, the Group on Earth Observation (GEO) intergovernmental partnership, and the Horizon 2020 and Horizon Europe funding programmes. In the scope of such programmes, several Earth Observation (EO) cloud platforms have been developed, providing access to data, tools and services for a wide range of users, including support to policymakers in developing evidence-based and data-driven policies.
Typically, these platforms are the expression of very specific research communities of different sizes and scopes, in some cases niche ones, with varied and often under-represented user needs, as opposed to more mainstream platforms with wider user uptake.
As a consequence, the current landscape of EO cloud platforms and infrastructures in the EU is rather fragmented, thus their potential is only partially exploited by users. We started our research by classifying existing infrastructures, identifying available good practices and highlighting the technological enablers, in order to point out and leverage the building blocks needed to improve the usability of such platforms (Di Leo et al., 2023).
In this follow-up study, we seek to provide a user-centric perspective, aiming at identifying limitations in the current offer of EO cloud platforms by conducting a research study on user experience. We aim to propose good practices to improve both the platform design and functionalities by taking into account the user viewpoint. Our research questions are:
• Does the current offer cover the entire development lifecycle?
• What are the pain points / bottlenecks to address on the current platforms from a user’s viewpoint?
To create a meaningful sample of EO cloud platforms, we surveyed use cases from EU flagship initiatives like e-shape, OpenEarthMonitor and GEOSS Platform Plus to better understand their use of the platforms. In addition, we developed a use case of our own to gain hands-on experience with cloud platforms.
Responders to the survey were developers of different use cases in a wide range of sectors, including agriculture, energy, health, ecosystem, disaster management, water, climate and climate change, forestry and oceans. Intended end user categories ranged from business owners to analysts, developers, data scientists and policy makers, as well as citizens. A common need emerging from the exercise is the possibility to integrate datasets of different nature and from different sources: EO, in situ, Internet of Things (IoT) data, etc. Final products of the considered use cases ranged from static maps to streams of data and web apps. In the development lifecycle, techniques such as machine learning, deep learning, parallel computing, virtualization / containerization and data cubes, are of common use among developers.
The main concerns on EO cloud platforms that emerge from the survey were:
(1) The difficulty of discovering the services offered and the lack of a way to browse the available services;
(2) The reduced accessibility to data and services, as well as the timeliness and coverage of data provision and quality;
(3) The poor transparency of the price;
(4) The limited possibility to integrate heterogeneous datasets and tools from different providers;
(5) The limited quality of learning material and documentation, as well as the frequency of their updates;
(6) The lack of effectiveness of support services such as helpdesks and forums;
(7) The limited possibility of exchanging code, good practices, and support with other users, and the liveness of the communities around the platforms;
(8) The lack of possibility to customize tools and services;
(9) The lack of strategies for the sustainability of platforms after the funding period;
(10) The lack of effective facilities for storage and for advanced functionalities such as machine learning, deep learning, parallel computing, etc.
Based on these responses, we identified a set of dimensions of high relevance for users, intended for self-evaluation by platforms so that they can improve their offer. These dimensions can be summarized as 1) discoverability, 2) accessibility, 3) price transparency, 4) interoperability, 5) documentation, 6) customer care, 7) community building (data, models and knowledge sharing), 8) customization, 9) sustainability of the business plan and 10) characteristics and performance of the platform.
Among others, the adherence to the FAIR principles (Wilkinson et al, 2016) and to the TRUST principles (Lin et al., 2020), the use of open source components and the compliance to open standards (e.g. from the Open Geospatial Consortium – OGC), all represent essential dimensions to enhance both the platforms’ usability and the user’s satisfaction.
Finally, we discuss the emerging trend of creating federations among platforms. Federations can be of different types: federation of identity (e.g. single sign-on), federation of trust, federation of resources (e.g. storage and computational facilities), etc. Federations may overcome many of the problems we identified, such as interoperability, discoverability, and accessibility, by providing a set of services available from a single place. This trend is expected to grow progressively, especially towards the concept of data spaces, in which the EU is investing heavily.
To conclude, the study outlines the need to address challenges and limitations to improve both the usability and user satisfaction when using available EO cloud platforms. The identification of user needs and concerns, along with the emphasis on principles such as FAIR and TRUST, open source components and OGC standards, will be crucial in shaping the future of data platforms and infrastructures in the EU and beyond. Furthermore, the potential of federations among platforms presents an immediate opportunity to move towards the vision of data spaces that the EU is putting forward, thus enhancing both collaboration and data sharing, ultimately contributing to the development of a more cohesive and effective data market in Europe.

Academic track
Omicum
10:30
30min
Semantic annotation and classification of EU tendering data on open geospatial software, standards and data using GPT and Machine Learning techniques
Marco Minghini

Tenders Electronic Daily (TED) is the platform where all public tenders published in European Union (EU) Member States and European institutions are accessible. With approximately 520,000 public procurement notices published per year that are worth more than €420 billion, TED is a cornerstone of EU public procurement. The TED database is available as open data, providing an extremely interesting source for in-depth analysis on public procurements in the EU.
We developed an application that – based on an extraction of the TED database for two years (2021 and 2022) – allows users to: i) automatically label TED documents using GPT; ii) visualise the labels generated by GPT for all documents and manually correct them; iii) use the corrected labels to train a Support Vector Machine (SVM) Machine Learning classifier; and iv) assess the classification accuracy. The application supports an iterative process of re-labelling (using GPT) and re-training the SVM classifier until the expected classification performance is reached and the classifier can be applied to the whole TED dataset. In addition to the progressive improvement of the Machine Learning classifier through the controlled cycle of iterations, the benefits of this approach include user involvement in the correction/enrichment of labels and flexibility in adapting to the specific needs of the datasets and domain – the latter meaning that applicability is not limited to the TED database. Inclusion of the TED database for 2023 is currently ongoing; similarly, a dedicated UI is currently under development to provide a user-friendly access to the application.
The use case investigates the degree to which EU public procurements are relevant to open source geospatial software, open geospatial standards and open geospatial data. To this purpose, for each of the three categories a specific set of keywords was initially listed; this was then complemented by a series of similar keywords retrieved through a semantic text analysis tool named SeTA (https://seta.jrc.ec.europa.eu, developed in-house at the JRC) and further validated by an expert. The final list of keywords represented the input to filter a list of documents from TED to be annotated in the first step using GPT. The presentation will show the classification results and shed some light on the relevance of open source geospatial software, open geospatial standards and open geospatial data in EU tenders.
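The keyword-based filtering step described above can be sketched as a simple text match. The keywords and documents below are toy examples; the real lists were retrieved with SeTA and validated by an expert.

```python
# Toy keyword filter: select TED documents whose abstract mentions
# any keyword (illustrative keywords, not the study's curated list).
keywords = {"open source", "gis", "ogc"}

documents = [
    {"id": 1, "abstract": "Procurement of an open source GIS platform."},
    {"id": 2, "abstract": "Road maintenance works in district 5."},
    {"id": 3, "abstract": "Services compliant with OGC standards."},
]

def matches(doc):
    """True if the abstract contains at least one keyword (case-insensitive)."""
    text = doc["abstract"].lower()
    return any(kw in text for kw in keywords)

selected = [d["id"] for d in documents if matches(d)]
print(selected)  # [1, 3] -- these documents go on to GPT labelling
```

In the described workflow, the documents selected this way are then labelled with yes/no GPT prompts, manually checked on a sample basis, and used to train the SVM classifier.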
GPT models used by the application run on a platform created under a special contract signed by the European Commission with Microsoft Azure. The platform, named GPT@JRC, provides internal APIs that can be accessed upon obtaining an authorization token. Through Python, users can query the APIs using the OpenAI library, which offers convenient access to the OpenAI REST API from any Python 3.7+ application. GPT@JRC offers several Large Language Models, including 'gpt-35-turbo-0613', 'gpt-35-turbo-16k' and 'gpt-35-turbo-0301'.
More concretely, GPT models are used for annotating the TED database by asking whether a certain TED document, typically through its abstract, covers a specific topic. The expected response is a simple ‘yes’ or ‘no’. By interacting with the APIs, we retrieve the responses and append them as labels to our documents. This allows us to perform unsupervised document classification. Subsequently, we can verify whether the documents have been correctly classified on a sample basis. Following this manual validation phase, as mentioned before, we use the result as input to an SVM classifier (using the Python scikit-learn library) to determine if there is a general rule to distinguish the topic of any TED document from the text of its abstract.

FOSS4G ‘Made in Europe’
Van46 ring
10:30
30min
State of PDAL
Michael Smith

PDAL is the Point Data Abstraction Library, a C/C++ open source library and set of applications for translating and processing point cloud data. It is not limited to LiDAR data, although the focus and impetus for many of the tools in the library have their origins in LiDAR. PDAL lets you compose operations on point clouds into pipelines of stages. These pipelines can be written in a declarative JSON syntax or constructed using the available API. This talk will focus on the current state of the PDAL point cloud processing library and related projects such as COPC and Entwine. It will cover the most common filters, readers and writers, along with a general introduction to the library, processing models, language bindings and command-line batch processing. The first part will cover new features for current users, with some discussion of installation methods including Docker, binaries from package repositories, and Conda packaging. For more info see https://pdal.io
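As a taste of the declarative JSON syntax mentioned above, here is a minimal pipeline (filenames hypothetical) that reads a LAZ file, keeps only ground-classified points, and writes a compressed result:

```json
{
  "pipeline": [
    "input.laz",
    {
      "type": "filters.range",
      "limits": "Classification[2:2]"
    },
    {
      "type": "writers.las",
      "filename": "ground.laz",
      "compression": "laszip"
    }
  ]
}
```

Saved as e.g. `ground.json`, such a pipeline runs with `pdal pipeline ground.json`; the same stages can also be assembled programmatically through the library's API.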

State of software
QFieldCloud (246)
10:30
30min
What's up in Space?
Miriam Gonzalez

Back in my childhood, I fell in love with space by watching Carl Sagan's Cosmos television series and a local kids' TV show with a planetary rocket named "Popotito 22" which traveled through time and space. Several decades later I still remember Carl Sagan's words, spoken in Mexican Spanish, explaining the wonders of our Pale Blue Dot; this love took me to the Earth Observation field. Last year, in the talk I gave at FOSS4G Kosovo, "Unlocking the potential of Earth Observation combining Optical and SAR data", I realized how useful a talk about the current state of Earth Observation could be. Most FOSS4G attendees were very knowledgeable about the Copernicus and Landsat programs, but there is so much more happening in the New Space industry, where several commercial companies are also committed to open data programs that help organizations build more solutions and keep supporting startups, research and education in this brilliant field.
In this talk, I want to share what is happening today in the New Space industry: which companies are launching satellites and developing new sensors, how "space buses" are helping reduce satellite costs and make space data even more accessible, how platforms facilitate access to all this available data, and how these sensors support a wide variety of EO applications. In my current role I build partnerships with satellite and geospatial companies; in four years, together with my team, we have signed more than 80 partnerships.

Use cases & applications
GEOCAT (301)
11:00
11:00
30min
Digital Twins: Metropolitan Cooperation Platform and Underground Network
Yves Bolognini, Ben Kuster

Digital twins and 3D are becoming increasingly important for planning, data diffusion and decision-making. Several projects are currently underway at Camptocamp, in collaboration with Virtual City Systems and Cesium. We will present two very different use cases: developments around Rennes Métropole and the underground network for the SUEZ project.

Rennes Métropole

In a context of digital transition and the increasing availability of urban data, Rennes Métropole wishes to better inform its decisions and public policies on the basis of data and cooperation. Ultimately, the goal is to promote cooperation and the contribution of the different actors and to "enlighten" public decisions and policies, in particular the democratic, ecological and energy transition projects. Transparency, public service efficiency and cost control are also sought.

The platform is developed partly on VC Map, an open-source JavaScript framework and API for building dynamic and interactive maps on the web. It can display 2D data, oblique imagery and massive 3D data including terrain data, vector data, mesh models, and point clouds, making it easy for users to explore and interact with the data in an integrated, high-performance map application. VC Map is built upon open, proven, and reliable GIS and web technologies such as OpenLayers and Cesium for the visualization of 2D and 3D geo-data.

A particular effort was made on the design in order to offer users, mainly citizens, a pleasant user experience that allows exploration of the metropole's development projects in 2D and 3D. We will present the cooperation platform through three use cases of interest for Rennes Métropole: simulation of the photovoltaic production potential, linear transport systems, and exposure to electromagnetic waves.

SUEZ

As part of its work in the field of water management, SUEZ has a number of requirements for 3D data visualization, particularly for underground data. The project focuses on two main areas: the visualizer and data preparation.

The visualizer is designed to be integrated into an application developed by SUEZ. It is based on Cesium, to which specific functionalities have been added. One of the major challenges was to integrate two types of navigation into the same application:
* Constrained navigation. Possibility of positioning oneself in an underground pipe and moving through it without passing through the walls, with video game-style controls
* Free navigation. More traditional 3D controls with floor transparency

The other aspect of the project is data preparation. A processing chain was set up to construct the pipe tubes, whose data was initially in 2D. Other objects in IFC format, such as pumping stations, were merged and added to the model, while allowing them to be queried via Cesium. Finally, a textured 3D mesh was used to realistically reproduce the interior of the pipes. The challenge was to ensure consistency between these heterogeneous data sources provided by SUEZ.

Use cases & applications
QFieldCloud (246)
11:00
30min
MOOC Cubes and Clouds - Cloud Native Open Data Sciences for Earth Observation
Peter James Zellner

Motivation: The Massive Open Online Course (MOOC) “Cubes and Clouds” teaches the concepts of data cubes, cloud platforms, and open science in the context of Earth Observation (EO). The course is designed to bridge the gap between relevant technological advancements and best practices and existing educational material. Successful participants will have acquired the necessary skills to work and engage themselves in a community adhering to the latest developments in the geospatial and EO world.

Target group: The target group are earth science students, researchers, and data scientists who want to dive into the newest standards in EO cloud computing and open science. The course is designed as a MOOC that explains the concepts of cloud native EO and open science by applying them to a typical EO workflow from data discovery, data processing up to sharing the results in an open and FAIR way.

Content: This MOOC is an open learning experience relying on a mixture of animated lecture content and hands-on coding exercises created together with community-renowned experts. The course is structured into three main chapters: Concepts, Discovery, and Process and Share. The degree of interaction (e.g. hands-on coding exercises) gradually increases throughout the course. The theoretical basics are taught in the first chapter, Concepts, comprising cloud platforms, data cubes and open science practices. In the second chapter the focus is on the discovery of data and processes and the role of metadata in EO. In the final chapter the participants carry out complete processing workflows on cloud infrastructure and apply open science practices to the produced results. Every lesson is concluded with a quiz, ensuring that the content has been understood.

The course contains 13 written lectures that convey the basic knowledge and theoretical concepts; 13 videos, created with a professional communication team in collaboration with a leading expert on each topic, that shine a light on a real-world example (e.g. the role of GDAL in geospatial and EO); 16 pieces of animated interactive content that engage the participants to actively interact with the material (e.g. a Sentinel-2 data volume calculator); and 11 hands-on coding exercises in the form of curated Jupyter notebooks that access European EO cloud platforms (e.g. CDSE) and carry out analysis there using standardized APIs like openEO (e.g. a full EO workflow for snow cover mapping).

Infrastructure: The EOCollege platform hosts the lectures and the animated content (e.g. videos, animations, interactive elements) of the course. The hands-on exercises are directly accessible from EOCollege via a dedicated JupyterHub environment, which accesses European EO cloud platforms, such as the Copernicus Data Space Ecosystem, using its open science tools like the Open Science Data Catalogue, openEO and STAC, guaranteeing that the learned concepts are applied to real-world applications. In the final exercise the participants map the snow cover of an area of interest of their choice and make their results openly available according to the FAIR principles in a web viewer (STAC browser). This community mapping project actively lives the idea of open science, collaboration and community building.
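To give a concrete feel for the STAC-based discovery step described above, here is a hedged sketch of a STAC API `/search` request body, similar to what a participant might issue against a platform's STAC endpoint in the snow-cover exercise; the collection id, bounding box and cloud-cover threshold are illustrative assumptions, not actual course material:

```python
import json

# Hypothetical STAC API /search request body (illustrative values):
search_body = {
    "collections": ["sentinel-2-l2a"],
    # lon/lat bounding box of an area of interest in the Alps
    "bbox": [10.0, 46.0, 11.0, 47.0],
    # RFC 3339 interval covering one winter season
    "datetime": "2024-01-01T00:00:00Z/2024-03-31T23:59:59Z",
    # STAC query extension: keep scenes with less than 20% cloud cover
    "query": {"eo:cloud_cover": {"lt": 20}},
    "limit": 100,
}

# This JSON document would be POSTed to the platform's /search endpoint.
payload = json.dumps(search_body)
```

The returned items would then feed the openEO processing workflow, and the final results could themselves be published as a STAC collection, as the course's community mapping project does.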

Learning achievements: After finishing the course, the participants will understand the concepts of cloud native EO, be capable of independently using cloud platforms to approach EO related research questions and be confident in how to share research by adhering to the concepts of open science. After the successful completion of the course the participants receive a certificate and diploma supplement and their personal map is persistently available in the web viewer as a proof of work.

Benefits for the open geospatial community: The MOOC is valuable for the geospatial and EO community and open science as there is currently no learning resource available where the concepts of cloud native computing and open science in EO are taught jointly to bridge the gap towards the recent cloud native advancements. The course is open to everybody, thus serving as teaching material for a wide range of purposes including universities and industry, maximizing the outreach to potential participants. In this sense also the raw material of the course is created following open science practices (e.g. GitHub repository, Zenodo, STAC Browser for results) and can be reused and built upon.

The "Cubes and Clouds" MOOC equips participants with essential skills in cloud native EO and open science, enhancing their ability to contribute meaningfully to the open geospatial community. By promoting transparency, reproducibility, and collaboration in research, graduates of the course strengthen the foundations of open science within the community. Access to cloud computing resources and European EO platforms empowers participants to undertake innovative research projects and share their findings openly, enriching the collective knowledge base. Ultimately, the MOOC fosters a culture of openness and collaboration, driving positive change and advancing the field of geospatial science for the benefit of all.

Structure of the Talk: Our talk will interactively guide the audience through the MOOC and showcase the learning experience. To evaluate its usefulness, the perceptions of the first participants will be analyzed, and finally we will jointly discuss activities to integrate with other teaching and tech communities (e.g. Pangeo).

Links:
- EOCollege: MOOC Cubes and Clouds
- GitHub
- Zenodo
- Community mapping project - Cubes and Clouds Snow Cover Stac Collection

Academic track
Omicum
11:00
30min
Serving earth observation data with GeoServer: COG, STAC, OpenSearch and more...
Andrea Aime

Never before have we had such a rich collection of satellite imagery available to both companies and the general public. Between missions such as Landsat 8 and Sentinels and the explosion of cubesats, as well as the free availability of worldwide data from the European Copernicus program and from Drones, a veritable flood of data is made available for everyday usage.
Managing, locating and displaying such a large volume of satellite images can be challenging. Join this presentation to learn how GeoServer can help with that job, with real world examples, including:
* Indexing and locating images using The OpenSearch for EO and STAC protocols
* Managing large volumes of satellite images, in an efficient and cost effective way, using Cloud Optimized GeoTIFFs.
* Visualize mosaics of images, creating composites with the right set of views (filtering), in the desired stacking order (color on top, most recent on top, least cloudy on top, your choice)
* Perform both small and large extractions of imagery using the WCS and WPS protocols
* Generate and view time based animations of the above mosaics, in a period of interest
* Perform band algebra operations using Jiffle

Attend this talk to get a good update on the latest GeoServer capabilities in the Earth Observation field.

Use cases & applications
GEOCAT (301)
11:00
30min
Terra Draw: A web map drawing library for 2024
James Milner

Terra Draw is an open source JavaScript library for building frictionless drawing and editing tools for web maps. The project was founded in June 2023 and has been building momentum since then, with over 57 releases.

The library provides a selection of built-in modes for drawing geometries like Point, Line and Polygon, and supports several well-known mapping libraries out of the box via the adapter pattern, including open source favourites like MapLibre, Leaflet and OpenLayers.

In this talk, we will demonstrate the purpose and benefit of using Terra Draw in your web mapping projects, with the library's useful out-of-the-box functionality. We will cover how to implement common patterns that geo developers often face in their day-to-day work. The talk will further delve into how the library supports extension, allowing developers to write their own modes and adapters, and how to configure Terra Draw's deep styling options to keep your mapping tools looking fresh.

Finally, the talk will aim to provide a summary of how Terra Draw has improved in the last year, for people who have already been following the project and want to get insight over what has changed since FOSS4G 2023 in Kosovo.

State of software
LAStools (327)
11:00
30min
Why would you need open data from National Mapping Agencies?
Hanno Kuus

There are excellent global open datasets available, like OpenStreetMap, Natural Earth and others, but it is often beneficial to use smaller, local datasets for reasons like coherency, completeness, and regional specialities.

National Mapping Agencies (NMAs) are organisations in government structures that produce authoritative geospatial data and maps for a country or region. In Europe, there is a continuing trend to make data produced in the public sector available as open data, and many spatial datasets from numerous countries (mapping agencies) are also available as open data.

Most NMAs do not limit their work and available data to classical map products and also operate in related fields: land cadastre, geodesy, addresses and place names, to name a few. It is good to know that such geospatial datasets exist as well, especially if they are made available as open data and services, as these could be very useful to many projects.
Additionally, the sources of modern country-level mapping (aerial and terrestrial imagery, lidar point clouds, etc.) are useful not only for national mapping programs but also for a wide range of other applications and use cases.

The Estonian Land Board is one of the European NMAs sharing its produced data openly. In this talk we take a look at data samples from Estonia and elsewhere: what is available, how to find it and in which file formats/services the data is distributed. Additional pointers are given to Pan-European initiatives (EuroGeographics, GeoE3, etc.) and regulations (INSPIRE, Open Data Directive, etc.) that aim to make data from each country more openly and uniformly interoperable and accessible, thus providing potential value-added services to end users who need similar data from several European countries.

Open Data
Van46 ring
11:30
11:30
30min
Bridging geomatics theory to real-world applications in alpine surveys through an innovative teaching summer school program
Federica Gaspari

Applying skills gained from university courses marks a pivotal step in crafting engaging teaching methods. Including practical activities in higher education programs plays a crucial role in knowledge transfer, especially in geomatics (Tucci et al., 2020). Moreover, engaging groups of students in the entire process of in-situ survey design, data collection, management, processing and results preparation further fosters their sense of responsibility as well as their awareness of the technologies adopted, helping them actively understand their limitations and potential (Balletti et al., 2023). In recent years, STEM and geomatics have seen a growing number of learning experiences based on open knowledge (Gaspari et al., 2021, https://machine-learning-in-glaciology-workshop.github.io/, Potůčková et al., 2023). In this context, this work presents an innovative teaching experience framed in the mountainous environment of the Italian Alps, describing the structure of the course and the potential of open geo education in geomatics.

Since 2016, the Geodesy and Geomatics Section of the Department of Civil and Environmental Engineering of Politecnico di Milano has organised a Summer School for Engineering, Geoinformatics and Architecture Bachelor's and Master's students, consistently aiming to bridge the divide between theory and practice. The Summer School is framed within a long-term monitoring activity of the Belvedere Glacier (https://labmgf.dica.polimi.it/projects/belvedere/), a temperate debris-covered alpine glacier located in the Anzasca Valley (Italy), where annual in-situ GNSS and UAV photogrammetry surveys have been performed since 2015 to derive accurate and complete 3D models of the entire glacier, allowing the derivation of its velocity and volume variations over the last decade.

In a week-long program, students are encouraged to collaborate, with the supervision of young tutors passionate about the topic, to develop effective strategies for designing and executing topographic surveys in challenging alpine regions. This program involves them in hands-on learning experiences, also directly engaging students in a wider ongoing research project, getting familiar with the concept of open data and with the adoption of dedicated open-source software.

The summer school program is divided into 6 modules whose goal is to introduce students to key theoretical concepts of fieldwork design, UAV photogrammetry, GNSS positioning, GIS and spatial data analysis, image stereo-processing and 3D data visualization. Along with theory, practical sessions are organised with guided case-study-driven exercises that allow students to get familiar with FOSS4G tools such as QGIS, CloudCompare and PotreeJS. The teaching materials used to guide students through the exercises are made openly accessible online via a dedicated website built on top of an open-source GitHub repository with MkDocs (https://tars4815.github.io/belvedere-summer-school/), setting the groundwork for collaborative online teaching and for expanding the material to other learning experiences in future editions.

Adding value to the experience, students also contribute to a research project regarding the monitoring of the glacier (Ioli et al., 2021; Ioli et al., 2024), providing valuable insights on the recent evolution of the natural site. The georeferenced products derived from the in-situ surveys are indeed published in an existing public repository on Zenodo (Ioli et al., 2023), sharing results with a wider scientific community.

Furthermore, in order to optimise the management of the information and data collected during the different editions of the summer school, a relational database has been designed and is currently under implementation with PostgreSQL and PostGIS. Such solutions allow for querying the location of markers deployed on the glacier surface and measured every year by GNSS, making it possible to accurately describe the glacier movements. Additionally, a database allows for effectively storing the results of the annual in-situ surveys carried out during the summer schools, as well as documenting the instruments and the procedure employed to acquire and process the data.

In summary, this study highlights the commitment to open education within the realm of geomatics, with the ongoing transformation of the Belvedere Summer School program into an experience mainly driven by open-source software. Beyond the educational focus on fieldwork design and data analysis, the project extends to a comprehensive approach to transparency, making resources openly accessible through a dedicated website. In this way, the Summer School aspires to contribute significantly to the principles of open education in geomatics, thereby establishing an accessible bridge between education, research, and the open-source community.

Bibliography:

Balletti, C. et al. (2023): The SUNRISE summer school: an innovative learning-by-doing experience for the documentation of archaeological heritage, https://doi.org/10.5194/isprs-archives-XLVIII-M-2-2023-147-2023

Gaspari, F., et al. (2021): Innovation in teaching: the PoliMappers collaborative and humanitarian mapping course at Politecnico di Milano, https://doi.org/10.5194/isprs-archives-XLVI-4-W2-2021-63-2021

Ioli, F. et al. (2021). Mid-term monitoring of glacier’s variations with UAVs: The example of the belvedere glacier. Remote Sensing, 14(1), 28.

Ioli, F., et al. (2023). Belvedere Glacier long-term monitoring Open Data (1.0) Zenodo. https://doi.org/10.5281/zenodo.7842348

Ioli, F., et al. (2024). Deep Learning Low-cost Photogrammetry for 4D Short-term Glacier Dynamics Monitoring. https://doi.org/10.1007/s41064-023-00272-w

Potůčková, et al. (2023): E-TRAINEE: open e-learning course on time series analysis in remote sensing, XLVIII-1/W2-2023, 989–996

Tucci, G., et al. (2020). Improving quality and inclusive education on photogrammetry: new teaching approaches and multimedia supporting materials, https://doi.org/10.5194/isprs-archives-XLIII-B5-2020-257-2020

Academic track
Omicum
11:30
30min
Exploring the use of 3D tiles in QGIS – case Helsinki
Meri Malmari, Emil Ehnström

QGIS version 3.34 introduced support for Cesium 3D Tiles. At the same time, a growing amount of 3D data is being published as 3D tiles. This is also true for the city of Helsinki, Finland, which has published diverse datasets as open data, including textured and untextured buildings, terrain data, and photogrammetry-derived mesh models.

Mobility Lab Helsinki is a test bed for smart mobility, which is a common effort between Forum Virium Helsinki and Business Helsinki. Mobility Lab Helsinki is running several agile pilot projects and one of them in the beginning of 2024 dealt with developing the mobility digital twin of Helsinki. We at Gispo conducted a pilot project that aimed to develop QGIS-based workflows for Helsinki to enhance the use of 3D data and 3D data production processes.

In this pilot we integrated 3D tile datasets into QGIS and thoroughly tested the 3D tile features. One aspect of the project was to identify use cases within the organization of the city of Helsinki and to explore ways for them to benefit more from the available 3D data. By providing easier access to 3D data and facilitating integration with other spatial datasets, the pilot sought to enable more comprehensive analysis and decision-making in urban planning.

We learned that the new support for 3D tiles in QGIS makes such data more accessible. With growing enthusiasm around digital twins, we see 3D tiles support in QGIS as a welcome feature, though still in its infancy. We found that at the moment organizations can use 3D tiles in QGIS primarily for visualization purposes, but also for rudimentary measuring. However, taking into account the wide spectrum of other features in QGIS, it is only a matter of time until we see some spectacular applications where 3D tiles meet more traditional spatial data.

Use cases & applications
QFieldCloud (246)
11:30
30min
Mapbender 4.0 - create applications for your needs with the new version
Astrid Emde

Mapbender has improved a lot. The new version brings a refactored design and many new or improved features. You can integrate your WMS services and configure them individually, and you can manage access rights for applications.

State of software
LAStools (327)
11:30
30min
Scalable geospatial processing using dask and mapchete
Joachim Ungar

Dask is a flexible parallel computing library that seamlessly integrates with popular Python data science tools. With its task graph and parallel computation capabilities, Dask excels at managing large-scale computations on both a local machine and a computing cluster.

Mapchete, an open-source Python library, specialises in parallelizing geospatial raster and vector processing tasks. Its strengths lie in its ability to efficiently tile and process geospatial data, making it a valuable asset for handling vast datasets such as satellite imagery, elevation models, and land cover classifications.

This talk delves into the integration of these two technologies, showcasing how their combined capabilities can be used to conduct large-scale processing of geospatial data. It will also show how we at EOX are currently deploying our infrastructure and which challenges we face when using it to process the cloudless satellite mosaics under the EOxCloudless product umbrella.
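To illustrate the kind of tile-based processing setup mapchete enables, a rough sketch of a process configuration is shown below; the file names, paths and parameter values here are illustrative assumptions, not EOX's actual production setup:

```yaml
# Illustrative mapchete configuration: a user-defined process applied
# tile by tile over a web-map pyramid, parallelizable across workers.
process: my_process.py        # user-defined process module (placeholder)
zoom_levels:
  min: 0
  max: 12
pyramid:
  grid: geodetic              # tile pyramid the output is aligned to
input:
  dem: path/to/dem.tif        # example raster input (placeholder path)
output:
  format: GTiff
  path: output
  bands: 1
  dtype: float32
```

With a configuration along these lines, mapchete can split the area of interest into tiles and distribute the per-tile work, which is where the Dask integration discussed in the talk comes in.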

Use cases & applications
GEOCAT (301)
11:30
30min
Transition from one INSPIRE metadata standard to another and move from Geonetwork 3.x to 4.x – lessons learnt
lena.hallin-pihlatie@nls.fi

The National Land Survey of Finland has used GeoNetwork (https://www.geonetwork-opensource.org/) for more than a decade to provide spatial data and service providers in Finland with a platform for maintaining their metadata. More than a hundred user groups have described spatial datasets and services in the metadata catalogue, called Paikkatietohakemisto (https://www.paikkatietohakemisto.fi/geonetwork/srv/fin/catalog.search#/home). The published metadata are widely reused in other websites and services, such as https://www.opendata.fi/en.

If you want to learn from our experiences, come and listen to this presentation during which we'll tell you:

  • How we managed to transition from one INSPIRE metadata standard to another in co-operation with data providers, and

  • How, by summer 2024, we managed to migrate from GeoNetwork version 3.12 to 4, summarising the challenges and opportunities we faced.

Use cases & applications
Van46 ring
12:00
12:00
30min
All MapLibre projects, present and future, in one status update
Yuri Astrakhan

A presentation of everything the MapLibre community has been working on: from tile serving, font and sprite handling, to visualizations for both web and native, to new types of tools and format standards.

State of software
LAStools (327)
12:00
30min
Analyzing Charging and Petrol Station Distribution with FOSS4G: Implications for Energy Transition Monitoring in European Regions
Lorenzo Stucchi

The global transition towards sustainable energy sources, particularly in the transportation sector, has sparked significant interest in understanding the distribution patterns of electric vehicle (EV) infrastructure compared to traditional petrol stations. Leveraging the wealth of openly available geospatial data through platforms like OpenStreetMap (OSM) and routing engines such as OpenRouteService (ORS), this presentation explores the disparities in the distribution of electric columns and petrol stations across different European regions. Moreover, it delves into the potential of utilizing open data to monitor the energy transition's evolution and its implications for societal perception and awareness.

With the growing richness of OpenStreetMap data about transportation infrastructure, researchers and practitioners have unprecedented access to detailed information about electric vehicle charging stations and traditional petrol stations. This study harnesses this data to conduct a comparative analysis of their spatial distribution across various European regions. By leveraging the capabilities of OpenRouteService, we perform analyses to evaluate the accessibility and coverage of both types of refuelling infrastructure, shedding light on potential gaps and disparities in their distribution.
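The accessibility analyses described above can be driven by OpenRouteService's isochrones endpoint. As a hedged sketch, the request body below would ask for the area reachable by car within 5 and 10 minutes from a single charging station; the coordinates and ranges are illustrative, not the study's actual parameters:

```python
import json

# Illustrative request body for OpenRouteService's isochrones endpoint
# (POST /v2/isochrones/driving-car). Values are placeholders.
isochrone_request = {
    # one or more [lon, lat] starting points, e.g. a charging station
    "locations": [[9.19, 45.46]],
    # travel-time thresholds in seconds (5 and 10 minutes of driving)
    "range": [300, 600],
    "range_type": "time",
}

# Serialized JSON, ready to be POSTed to the ORS API with an API key.
body = json.dumps(isochrone_request)
```

Comparing the isochrone polygons returned for charging stations against those for petrol stations, both extracted from OpenStreetMap, is one way to quantify the coverage gaps discussed in this talk.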

Furthermore, this research underscores open data's significance in monitoring the energy transition progress in different European regions. The diffusion of the charging station follows different paths in Europe. Initially, charging stations were sparsely distributed, primarily concentrated in urban areas and along major transportation routes. However, a discernible discrepancy can be observed in the evolution of charging station networks across Europe in recent years. While some regions have accelerated their efforts to expand and enhance charging infrastructure, others still need to catch up, resulting in an uneven distribution of charging stations across the continent.

Importantly, this study emphasizes the role of visualizations and data-driven insights in enhancing public awareness and understanding of electric vehicles in the European context. By presenting visual graphs and data analyses depicting the current reality of electric column distribution compared to petrol stations across different regions, we aim to dispel misconceptions and increase knowledge about EV infrastructure.

In conclusion, the presentation underscores the transformative potential of open data and geospatial analysis in studying the energy transition and promoting sustainable mobility solutions in European regions. By leveraging platforms like OpenStreetMap and routing engines such as OpenRouteService, we can gain valuable insights into the distribution of electric vehicle infrastructure and its implications for the transition towards clean energy. Through visualizations and data-driven analyses, we can enhance public awareness and understanding of electric vehicles, paving the way for a more sustainable and environmentally conscious transportation system across Europe.

Use cases & applications
Van46 ring
12:00
5min
Does open data open new horizons in urban planning?
Nikola Koktavá

The aim of this study is to provide a comprehensive view of the issue of open data in Czech cities and thus give the world community an insight into the state of open data in the Czech Republic. It serves as a basis for further research and implementation of open data in urban planning. Its results can be used not only for the benefit of the professional community but can also serve as a basis for decision-making by city authorities in the planning and development of urban space. Open data is therefore an integral part of developing smart cities (Ojo, Curry, Zeleti, 2015). This extensive study deals with the availability of open data in Czech cities and the extent to which it is used in urban planning and the development of urban space. In the context of rapid digitization and technological progress, open data is becoming increasingly important for the effective management and design of urban infrastructure. This study systematically analyses the current state of open data in Czech cities, identifies key aspects of its availability and examines its potential applications in urban planning. The study focuses in more detail on Brno, the second largest city in the Czech Republic, which provides freely available data on its website data.brno.cz.

The first part of the study focuses on the theoretical framework of open data and its significance for modern urban planning. The basic principles of open data are introduced, including the standards and formats currently in use. The advantages of open data in the context of transparent decision-making, citizen participation and sustainable urban development are also discussed. In the Czech Republic, the possibilities of providing and using open data have been increasingly discussed in the last ten years, especially at the level of data from state organizations. Nevertheless, the term open data is not understood in the same way by all organizations, when, for example, the PDF format is considered an open data format. At the same time, we also perceive problems with the completeness, quality and consistency of open data, as well as missing metadata that would make lineage easier to understand.

The second part of the study analyses the data available in specific Czech cities. The analysis includes the identification of existing data sources such as geographic data, traffic information, demographic data and other information relevant to urban planning. Each data source is subject to a detailed evaluation, including an assessment of quality, topicality and availability. Czech regional cities try to provide open data via geoportals; the largest are data.brno.cz, geoportalpraha.cz and mapy.ostrava.cz, but others exist. State-government institutions also provide data: geoportal.cuzk.cz, and subsequently geoportal.gov.cz, might be considered the largest providers of data (including open data). A large amount of basic statistical data is provided by the Czech Statistical Office, including the last census from 2021, published mainly as open data.

Regular hackathons are already being organized to raise awareness of the open data on these portals and to illustrate the range of possible uses of the data and the power of making data available to both professionals and the general public. One of the most creative examples is the Minecraft world derived from a 3D model of the city of Brno. Such an unconventional approach may better attract the general public to think about their city and how to contribute to its improvement.

In the following part of the study, concrete examples of the use of open data in urban planning are presented. The release of 3D city data became one of the most significant steps for architectural and urban studies. Equally notable is last year's release, by the State Administration of Land Surveying and Cadastre, of the Fundamental Base of Geographic Data of the Czech Republic as open data, including the digital model of relief, the digital model of surface and the orthophoto. Further potential is hidden in the emerging Digital Technical Map, and crime-rate data published by the police offer yet another perspective on a location. In short, successful projects are described where open data played a key role in optimizing traffic, planning public spaces, and improving the quality of life of residents. Based on these examples, recommendations are proposed for the further development and use of open data in the urban planning environment.

In the final part of the study, the challenges and opportunities associated with the implementation of open data in Czech cities are discussed. Potential strategies for improving the availability of open data are presented, including collaboration between city authorities, the academic sector and civil society. In addition, ethical and security issues related to the handling and sharing of sensitive data in an urban context are stressed.

Academic track
Omicum
12:00
30min
G3W-SUITE and QGIS integration: state of the art, latest developments and future prospects
Walter Lorenzetti

G3W-SUITE is a modular, client-server application (based on QGIS-Server) for managing and publishing interactive QGIS cartographic projects of various kinds in a totally independent, simple and fast way.

Access to administration, consultation of projects, editing functions and use of the different modules are based on a hierarchical system of user profiling that is open to editing and adjustment.

The suite is made up of two main components: G3W-ADMIN (based on Django and Python) as the web administration interface and G3W-CLIENT (based on OpenLayers and Vue) as the cartographic client; the two communicate through a set of REST APIs.

The application, released on GitHub under the Mozilla Public License 2.0, is compatible with QGIS LTR versions and builds on strong integration with the QGIS API.

This presentation will provide a brief history of the application and insights into key project developments over the past year.

These developments affected the administration and management component of the published WebGIS services, the interaction with web maps and their contents, and the functions related to online editing through integration with the QGIS API.

A specific development, covered in a separate submission, concerns the integration with the QGIS Processing API in order to bring analysis models created in QGIS via the Model Designer to a web environment.

The talk, accompanied by examples of application of the features, is dedicated to both developers and users of various levels who want to manage their cartographic infrastructure based on QGIS.

State of software
GEOCAT (301)
12:00
30min
Navigate urban scenarios with MapStore 3D tools
Lorenzo Natali, Stefano Bovio

This presentation focuses on the use of MapStore to navigate urban scenarios using its 3D tools and capabilities. Latest versions of MapStore include improvements and tools related to the exploration of 3D data such as Map Views, Styling, 3D Measurements, Annotations and more. Support for 3D Tiles and glTF models through the Cesium mapping library has also been greatly enhanced to provide support for more powerful integration.

Attendees will be presented with an overview of our work related to 3D data visualizations and a selection of use cases around the following topics: visualization of new projects for urban planning, relations between different levels of a city and descriptions of events inside a city. At the end of the presentation attendees will be able to use the presented workflows to replicate them on different urban scenarios using the 3D tools of the MapStore WebGIS application.

Use cases & applications
QFieldCloud (246)
12:05
12:05
5min
Advancing water productivity monitoring: WAPlugin for the analysis and validation of FAO WaPOR data in QGIS
WAPlugin Team, Akshay Dhonthi, Fabian Humberto Fonseca Aponte

Remote sensing data have become indispensable for monitoring water resources and agricultural activities worldwide, offering comprehensive spatial and temporal information critical for understanding water availability, agricultural productivity, and environmental sustainability (Karthikeyan et al., 2020). The FAO Water Productivity Open Access Portal (WaPOR), developed by the Food and Agriculture Organization of the United Nations (FAO), provides extensive datasets derived from remotely sensed data (FAO, 2019). These datasets play a crucial role in water productivity monitoring, especially in regions facing water scarcity and intensive agricultural activity.
However, the manual extraction and importation of WaPOR datasets from the WaPOR platform can be time-consuming and complex. Users typically navigate the platform to locate specific datasets, download the files, and then import them into their preferred Geographic Information System (GIS), such as QGIS. This process often requires users to repeat these steps for multiple datasets, consuming a significant amount of time. Additionally, ensuring the accuracy and reliability of remotely sensed data, including WaPOR datasets, requires validation against ground-based measurements (Wu et al., 2019). This validation process involves evaluating the correlation between remote sensing data and ground measurements to determine their suitability for further analysis and decision-making. However, this process involves a complex workflow and often requires multiple tools and software programs, further increasing the time and effort needed to process and analyze the data.
To address these challenges comprehensively, we developed WAPlugin, a comprehensive solution designed to streamline the entire process of accessing and analyzing FAO WaPOR datasets within the QGIS environment. WAPlugin is a user-friendly plugin that automates the retrieval of WaPOR datasets directly from the WaPOR platform, eliminating the need for users to navigate through the platform manually. The manual extraction and importation of WaPOR datasets into QGIS for analysis can be time-consuming, with users often spending around 30 minutes on each dataset. WAPlugin significantly reduces this time by automating the extraction and importation of WaPOR data directly into the QGIS environment, allowing users to reduce the time required for each dataset by approximately 83%. With an estimated time of just 5 minutes per dataset, WAPlugin saves users valuable time, enabling them to focus more on data analysis and decision-making.
Moreover, WAPlugin not only streamlines the data acquisition process but also enhances the validation process by offering integrated functionality. Users can effortlessly upload ground observations and conduct comprehensive statistical analyses within the QGIS environment. This includes the calculation of a wide range of validation metrics, such as root mean square error (RMSE), mean absolute error (MAE), bias, coefficient of determination (R-squared), and scatter index. These metrics provide detailed insights into the accuracy and reliability of the WaPOR data by quantifying the level of agreement between remote sensing measurements and ground observations. By facilitating the calculation and visualization of these metrics directly within the QGIS environment, WAPlugin empowers users to make informed decisions regarding the suitability of the data for their specific applications. This built-in workflow not only saves time but also ensures the robustness of analyses, ultimately contributing to more accurate and reliable assessments of water productivity and agricultural activities.
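The validation metrics listed above can be sketched in a few lines of NumPy. The function below is an illustrative implementation, not the plugin's actual code; it assumes the common definition of the scatter index as RMSE normalized by the mean of the ground observations.

```python
import numpy as np

def validation_metrics(obs, est):
    """Agreement metrics between ground observations (obs)
    and remotely sensed estimates (est)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    err = est - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))       # root mean square error
    mae = float(np.mean(np.abs(err)))              # mean absolute error
    bias = float(np.mean(err))                     # mean error (bias)
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((obs - obs.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    si = rmse / float(obs.mean())                  # scatter index (assumed RMSE / mean of obs)
    return {"rmse": rmse, "mae": mae, "bias": bias, "r2": r2, "si": si}
```

In the plugin these values would be computed for WaPOR pixels extracted at the ground stations' locations; here the arrays stand in for those paired samples.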
By combining these tasks into a single, intuitive interface, WAPlugin significantly reduces the time and effort required for data acquisition and validation, enabling users to focus more on data analysis and decision-making. WAPlugin provides a complete solution for using FAO WaPOR datasets to analyze water productivity within the QGIS environment. By simplifying data retrieval and integrating validation functions, the plugin improves the accessibility and reliability of remotely sensed information.
Furthermore, WAPlugin contributes to enhancing collaboration among researchers and practitioners in the field of water resources and agriculture. The streamlined process for accessing and analyzing WaPOR datasets promotes knowledge sharing and facilitates interdisciplinary research endeavors. This collaborative aspect is crucial for addressing complex challenges such as water management and agricultural sustainability, which require insights from diverse perspectives and expertise.
In addition to its practical utility, WAPlugin also serves as an educational tool, empowering users with the knowledge and skills to leverage remote sensing data for addressing real-world challenges. By providing a user-friendly interface and integrating essential functionalities, the plugin facilitates learning and capacity building in the field of geospatial analysis and environmental science.
WAPlugin represents a significant advancement in the field of remote sensing and geospatial analysis, offering a practical solution for enhancing the accessibility and usability of WaPOR datasets. Its impact extends beyond technical efficiency to broader implications for research, collaboration, and education in the domains of water resources management, agricultural productivity, and environmental sustainability. As remote sensing technologies continue to evolve and play an increasingly vital role in addressing global challenges, tools like WAPlugin will remain essential for maximizing the potential of these technologies in informing evidence-based decision-making and fostering sustainable development.
In conclusion, WAPlugin stands as a pivotal tool for remote sensing applications for water resources management and agricultural productivity. Its ability to streamline data acquisition, analysis, and validation processes not only enhances efficiency but also promotes collaboration and knowledge exchange among stakeholders. As we navigate the complexities of sustainable resource management in a changing climate, WAPlugin exemplifies the transformative potential of technology in addressing pressing global challenges.

Academic track
Omicum
12:10
12:10
5min
Benefits and pitfalls of emotional and mobility web mapping
Nikola Koktavá

The popularity of participative mapping continues to grow, and it is becoming an essential tool for involving citizens in urban planning, architectural solutions and transport design. Citizens can quickly and easily review proposals and variants, explore models and visualizations, express their opinions, pin comments, and vote on their favourites (Ribeiro and Ribeiro 2016). Emotional maps and similar mapping tools are frequently used in Czechia, especially for mapping citizens’ attitudes towards both physical and social features of the urban environment. Quantitative assessment of mapping results can help urban planners better understand citizens’ perception and improve the targeting of planned measures (Camara, Camboim, and Bravo 2021). Discussion sometimes arises about the validity of such mapping and whether it complements or substitutes traditional questionnaire surveys. The objective of the paper is to discuss the benefits and weaknesses of such tools and to compare them with questionnaire surveys.
The case study is focused on two middle-sized Czech cities, Ostrava (OV) and Hradec Kralove (HK), and selected rural municipalities in their surroundings. Participants are all seniors (age 65+) due to the project aim of understanding seniors’ spatial mobility, accessibility and perception.
The questionnaire survey was conducted in 2022 by the Research Agency STEM/MARK (n=536, PAPI method 86%, CAWI method 14%). Quota sampling used stratification by age, gender, territory, and urbanization based on census data.
At the same time, two web map applications were launched - the emotional and mobility maps. We used the platform EmotionalMaps.eu which utilizes a Leaflet library (Pánek et al. 2021).
In the emotional map application, respondents indicate their age group and health limitations, and mark one or more locations: attractive locations, repulsive locations, barriers to movement, attractive paths, repulsive paths, and their approximate residence location. Each marked target can be further specified by one or more of 16 reasons (multiple choice), visiting frequency, schedule, and weather and social constraints (Horak et al. 2022).
In the mobility map, respondents specify one or more of their favourite locations in the following categories: home, workplace, retail, pharmacy, post office, doctor, supermarket, ATM, worship, services, park, restaurant, visiting family or friends, garden or cottage, or other place. After marking each point, they may specify frequency of attendance and transport mode.
The main advantages of emotional and mobility web mapping are cost effectiveness, flexibility of use, usually large sample size, attractiveness of design, ease of use for people with computer or mobile skills, accurate positioning of targets, customized map design (zoom, pan, etc.), larger extent, the ability to describe more specific conditions, the use of illustrative pictures or icons, interactive help, consistency monitoring, integrity constraints, and selection from specified options. Disadvantages include no validation of the respondent profile, a bias of respondents towards more technically skilled and wealthier people, privacy concerns, and duplicate responses (Wikstrøm 2023).
The biggest problems were encountered when drawing lines to specify attractive and repulsive paths. We obtained only 32 records from OV and 29 records from HK and evident errors represent 19% and 40%, respectively.
Quota sampling was not applied on the web mapping data, only a basic selection of the relevant age group and residence in HK or OV. The differences of the respondents’ profiles between the three methods of survey show clear bias towards younger and more healthy seniors in the case of web mapping and CAWI.
Any survey's raw data contain some inaccuracies, errors, or odd responses from people misunderstanding questions, misusing tools, submitting trial responses, intending to damage the data or outputs, or having concerns (e.g., about losing privacy). Deviations from planned quota shares in the quota-based survey may result in the removal of some respondents and/or the need to conduct an additional survey (in our case, 40-46% in two villages). Such changes deteriorate the temporal consistency of the data.
The primary aim of the survey was to discover seniors’ mobility targets. We asked for their dwelling location and up to four of their most important targets, listed in descending order by their perceived importance, written as a free text. To specify the locations of residence and targets we asked for addresses or another useful specification. Respondents identified 23 kinds of important targets in HK and 24 in OV with the following main priorities: shopping (37 and 24%, resp.), doctor (19 and 22%), family (10 and 13%), walking (8 and 6%), and friends (5 and 4%). An additional problem is that 5% of free-text destinations had multiple targets.
The web mobility mapping requested specification of favourite locations for one or more targets in the 13 categories, the residence and the “other” target (specified by free text). Respondents identified 16 kinds of important targets in HK and 12 in OV with the following priorities: retail (15 and 12%, respectively), supermarket (12 both), pharmacy (12 and 10%), post office (11 and 10%). Such a flat distribution is caused by the respondents’ tendency to mark only one target per category.
The accuracy of location is variable. While the web mapping application instantly provides coordinates for each location, the targets from questionnaires require geocoding. In our case, geocoding was successful for only 65% of records. Among these, 18% were geocoded using the complete address, 53% by finding the nearest matching destination, 24% manually with interpretation, and 5% only to the center of the street.
Further, the spatial distributions of targets were compared. The clustering of both indicated targets and all targets available in OpenStreetMap is confirmed by the M-function in both variants (questionnaire and web mapping). The analysis of distances from a residence to an indicated real target shows more clustering for questionnaire targets around a residence than for those from web mobility mapping. However, the selection of closer destinations in the questionnaire is influenced by the age bias of respondents and by the limited number of requested targets (up to four).
The study contributes to the discussion on the validity of participative mapping and sheds light on the importance of carefully preparing such surveys and pre-processing data comprehensively.

Academic track
Omicum
12:15
12:15
5min
The Use of FLOSS GIS in Documenting the Historic Center of Prizren
Shaindere, verontara

The historic center of Prizren stands as a testament to Kosovo's rich cultural and historical legacy. Nestled amidst the scenic landscapes of the Balkans, Prizren boasts a diverse tapestry of architectural marvels, ancient monuments, and cultural traditions that span centuries. However, the city's historical fabric is increasingly under threat due to rapid urbanization, population growth, and socio-economic transformations. In this context, the effective documentation and preservation of Prizren's cultural heritage emerge as imperative tasks requiring innovative approaches and collaborative efforts.

Geographic Information Systems (GIS) have emerged as indispensable tools in heritage conservation and management. By integrating spatial data with advanced analytical capabilities, GIS enables stakeholders to gain insights into complex urban dynamics, identify heritage sites, and formulate informed preservation strategies. In recent years, the open-source GIS platform, QGIS, has gained prominence for its flexibility, affordability, and user-friendly interface, making it particularly suitable for heritage documentation initiatives in resource-constrained contexts.

This comprehensive study delves into the extensive utilization of Geographic Information Systems (GIS), with a specific focus on the QGIS platform, as a tool for documenting the historic center of Prizren. Situated as one of Kosovo's most culturally and historically significant cities, Prizren faces formidable challenges in preserving and documenting its rich historical heritage amidst ongoing urban developments. The study recognizes GIS technology as a robust mechanism that facilitates the gathering, organization, and visualization of geographic data, thus enabling the identification and preservation of invaluable cultural legacies embedded within the city.

Through a meticulous synthesis of existing literature, illuminating case studies, and invaluable insights gleaned from local stakeholders, this research endeavors to explore the multifaceted applications and advantages of GIS systems in the documentation process of Prizren's historic center. It meticulously elucidates the pivotal role played by GIS, particularly highlighting the prowess of QGIS, in the creation of foundational maps delineating the historical contours of the city, the precise identification of significant cultural and environmental landmarks, and the nuanced analysis of the ever-evolving urban landscape. Additionally, the study delves into the intricate challenges that intersect with the integration of GIS technologies, underscoring the paramount importance of ensuring access to reliable data sources, fostering seamless interagency collaboration, and enhancing technical expertise among stakeholders engaged in heritage documentation endeavors.

Moreover, this research underscores the indispensable significance of fostering robust interdisciplinary collaborations and knowledge-sharing platforms among governmental bodies, academic institutions, heritage conservationists, and GIS practitioners. It advocates for the establishment of standardized protocols and methodologies governing the acquisition, processing, and dissemination of spatial data, thereby fortifying the interoperability and reliability of GIS-driven documentation initiatives aimed at safeguarding Prizren's rich cultural heritage.

Furthermore, the study emphasizes the imperative of embarking on proactive capacity-building endeavors, meticulously tailored to augment the proficiency of local stakeholders in effectively harnessing GIS technologies for heritage conservation purposes. By investing in specialized training programs and workshops, meticulously tailored to cater to the unique needs and challenges inherent within Prizren's heritage landscape, stakeholders can cultivate a highly skilled workforce adept at harnessing GIS tools to execute comprehensive documentation and monitoring initiatives aimed at safeguarding cultural assets for posterity.

Additionally, this study accentuates the pivotal role played by community engagement and participatory approaches in nurturing a collective sense of ownership and stewardship over Prizren's cultural heritage. By actively involving local communities in the documentation process through innovative citizen science initiatives, crowdsourcing endeavors, and targeted public outreach campaigns, heritage practitioners can effectively tap into the invaluable indigenous knowledge reservoirs and foster a collective ethos of custodianship towards the city's illustrious historical legacy.

In conclusion, this study fervently advocates for the adoption of a holistic approach towards seamlessly integrating GIS systems into the meticulous documentation and preservation endeavors pertaining to Prizren's historic center. Such an approach necessitates the forging of strategic partnerships, the implementation of proactive capacity-building initiatives, and the cultivation of community-centric methodologies aimed at harnessing the transformative potential of GIS technologies in tandem with interdisciplinary collaborations and community engagement strategies. Through such concerted efforts, stakeholders can effectively fortify Prizren's resilience against the myriad challenges posed by urbanization while safeguarding its timeless cultural heritage for the benefit of future generations.

Stakeholder Consultations: Engagement with local governmental agencies, academic institutions, heritage conservation organizations, and GIS practitioners will provide firsthand perspectives on the opportunities and challenges associated with GIS integration in Prizren's heritage preservation efforts.

Data Collection and Analysis:

Data collection will involve gathering spatial data, historical records, archival documents, and georeferenced photographs pertaining to Prizren's historic center. GIS techniques such as spatial analysis, georeferencing, and attribute querying will be employed to organize, analyze, and visualize the collected data.

Results and Discussion:

The results of the study will be presented in the form of thematic maps, spatial analyses, and qualitative assessments. Key findings will be discussed in relation to the identified advantages, challenges, and recommendations for GIS integration in documenting Prizren's cultural heritage.

Conclusion:

The study will conclude with a synthesis of findings, highlighting the potential of GIS, particularly QGIS, as a valuable tool for documenting and preserving Prizren's historic center. Recommendations for enhancing the effectiveness of GIS-driven documentation initiatives will be proposed, with a focus on capacity-building, interagency collaboration, and community engagement strategies.

Academic track
Omicum
12:20
12:20
5min
Assessment of Women and Girls' safety in public transit
Caroline Akoth

Safe public transit for women and girls accounts for and accommodates the reality of the travel patterns of women and girls. Public transit trips have the potential to be less safe since many women must walk through, or wait in, unsafe areas in order to access public transit. Moreover, at odd times of day and in isolated places, public transit may be unreliable; by necessity, many women must travel through the city very early in the morning and late at night (Peters, 2002, 7).
It has been reported that in Bogotá, Colombia, most robberies on critical transit routes occur just before the 4:00-7:00 am peak or after the 6:00 pm-12:00 am peak hours.
This project aims to document security pain points and strategies adopted by different cities in the world to ensure inclusive development.
In Nairobi, Kenya, the project explored and analyzed what safety means for commuters using public service vehicles.
In a collaborative effort between Women in GIS, Kenya and CPCS Transcom, the project was delivered in three phases, each marked by a training session that took students and recent graduates through the project implementation process. The three workshops were: a data collection workshop, which explored the principles and ethics of data collection for projects; a data analytics workshop, which explored methodologies and tools available to analyze the collected data in an effort to quantify safety and security in the Kenyan public transit sector; and a data visualization and reporting workshop, which explored ways of reporting and visualizing project findings.
The project collected sampled responses from users of public transit in Nairobi. The sample was made up of users of matatus, boda bodas and taxis. Demographically, the research group consisted of men, women and gender non-conforming adults who use public transit for over 75% of their travels. Only about 50% of the respondents live close enough to walk to the nearest bus stop, increasing the need for combined modes of travel to work or school. Descriptive statistics were used to understand and qualify safety and security concerns in public transportation. A correlation matrix analysis was also conducted to determine the correlation of the occurrence of unsafe incidents (physical, emotional, and driving-experience based).
According to the study, different demographics define safety in public transit differently, based on personal lived experiences. The lived experiences were classified into six categories depending on whether the unsafe incident was verbal or physical: vehicle conditions, theft, sexual harassment and physical violence, risk of infection, reckless driving, and driver/service staff behaviour. Most commuters defined safety by the way of driving. The study found that unsafe incidents affect all genders, although not equally: 84% of women respondents have experienced unsafe incidents while in transit, compared with 67% of men. Up to 62.5% of the respondents believe that their experience of unsafe incidents was highly influenced by their gender. Depending on the time of day (peak and off-peak hours), over 44% of the respondents considered the possibility of theft, sexual harassment or physical assault an unsafe occurrence and would use that as a basis for deciding whether or not to board a PSV.
The results of the correlation matrix showed a positive correlation between driver and service staff behaviour, the condition of the public service vehicle, driving experiences, and the possibility of experiencing unsafe incidents. 70% of the respondents who experienced reckless driving, coupled with rude and unruly bus conductors, reported ending up in an accident or experiencing theft, sexual assault or physical assault. Distance to the bus stop influences both the feeling and the occurrence of unsafety. The study also identified major hotspots based on respondents’ lived experiences; most of these hotspots are located near or around busy stops.
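The kind of correlation matrix analysis described above can be sketched with pandas. The coded responses below are invented for illustration (1 = incident reported, 0 = not reported), and the column names are not the study's actual variables.

```python
import pandas as pd

# Hypothetical coded survey responses; each row is one respondent.
df = pd.DataFrame({
    "reckless_driving": [1, 1, 0, 1, 0, 1, 0, 0],
    "staff_behaviour":  [1, 1, 0, 1, 0, 0, 0, 1],
    "unsafe_incident":  [1, 1, 0, 1, 0, 1, 0, 1],
})

# Pearson correlation matrix across all pairs of variables.
corr = df.corr()
print(corr.round(2))
```

Positive off-diagonal entries, as between reckless driving and unsafe incidents here, are what the study reports for its real variables.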
This study used open source and customizable tools for the development and execution of the study. For data collection, the project used ODK toolkit for data collection, open GTFS feeds from the digital transport for Africa portal and QGIS analysis tools. We also made use of python libraries for data analysis.
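To illustrate how a GTFS feed enters such a Python workflow, the sketch below parses a minimal, made-up sample of a GTFS stops.txt file with pandas; a real feed has the same columns and would be read directly from the extracted feed files. Stop names and coordinates are invented.

```python
import io
import pandas as pd

# Tiny inline stand-in for a GTFS stops.txt file; in practice use
# pd.read_csv("stops.txt") on the extracted feed.
stops_txt = io.StringIO(
    "stop_id,stop_name,stop_lat,stop_lon\n"
    "S1,Sample Stop A,-1.2864,36.8250\n"
    "S2,Sample Stop B,-1.2906,36.8283\n"
)
stops = pd.read_csv(stops_txt)

# Each row can become a point feature in QGIS (x = lon, y = lat).
points = list(zip(stops["stop_lon"], stops["stop_lat"]))
```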
As a citizen science / collaboratory project, this project impacted 60 participants, mainly university students and recent graduates from various academic backgrounds (geospatial engineering, civil engineering, environmental management, and gender and social development studies). These students came from the University of Nairobi, Kenyatta University, the Technical University of Nairobi and Moi University.
The greatest impact of this project was documenting security pain points and strategies adopted in Nairobi to ensure inclusive infrastructure development. Outputs of the project were shared with UN Habitat, the Ministry of Transport and Naipolitan. A project impact report was also externally published by both Women in GIS, Kenya (WIGIS Ke) and CPCS Transcom.

Academic track
Omicum
12:30
12:30
90min
Lunch
Van46 ring
12:30
90min
Lunch
GEOCAT (301)
12:30
90min
Lunch
LAStools (327)
12:30
90min
Lunch
QFieldCloud (246)
12:30
90min
Lunch
Omicum
14:00
14:00
30min
City Transport Analyzer: a powerful QGIS plugin for public transport accessibility and intermodality analysis
Gianmarco Naro, Carlo Andrea Biraghi

Mobility is one of the main factors affecting urban environmental performance. Car dependency is still widespread worldwide, and integrated planning approaches are needed to exploit the potential of active and shared mobility solutions, making them an effective alternative to the use of private vehicles. The analysis and optimization of public transportation (PT) services have thus become increasingly important in the planning and management of urban infrastructure. This work aims to develop and implement a QGIS plug-in for analyzing urban PT networks, assessing the accessibility and intermodality dimensions, relying on General Transit Feed Specification (GTFS) data as the source of information.

GTFS is a standardized format for PT schedules and geographic information. It defines a common format for transit agencies to share their data, making it possible for developers to create applications that provide accurate and up-to-date information about services. This standard was chosen because it is one of the most popular and widely used, especially when the data are used for static analysis. The information extracted mainly concerns PT stops, routes and the nodes needed for route construction and connection. To be usable by GIS software, all data belonging to the geospatial standard must be extracted, interpreted and converted into GIS layers; specifically, all information regarding stops and routes was extracted to obtain a vector layer for each type of data. One of the most important layers is that of the PT routes, as it shows the entire urban network, obtained by converting the data into a graph data structure using NetworkX, a library for the creation, management and manipulation of complex networks, including graphs. To facilitate our purpose, the edges of the graph were modelled in such a way that each edge is used by only one PT route: if two public vehicles use the same segment, there will be two different overlapping edges. It is also important to emphasise that each edge in the graph carries the type of means of transport using it (underground, train, bus, ...), the average travel time along that edge, and the length of the edge itself. The creation of the graph is fundamental to carrying out two types of analysis.
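The edge model described above, one edge per PT route with parallel edges where routes overlap, can be sketched with a NetworkX MultiDiGraph. Node ids, route keys and attribute values here are illustrative, not taken from the plugin.

```python
import networkx as nx

# Parallel edges between the same pair of nodes represent different PT lines
# sharing the same segment; each edge carries mode, travel time and length.
G = nx.MultiDiGraph()
G.add_edge("A", "B", key="bus_12", mode="bus",  travel_time=4.0, length=900)
G.add_edge("A", "B", key="tram_3", mode="tram", travel_time=3.0, length=900)
G.add_edge("B", "C", key="bus_12", mode="bus",  travel_time=5.0, length=1100)

# Two overlapping edges between A and B, one per line:
routes_ab = list(G["A"]["B"].keys())
```

Keying the parallel edges by route id makes per-line attributes (mode, travel time) directly addressable when traversing the network.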

The accessibility analysis determines which areas are reachable within specified time frames via all possible combinations of PT lines. Starting from any point in the city, it provides service areas combining PT and walking within a user-defined time interval of up to 60 minutes. The outputs are both lines (all the edges of the network that can be travelled) and polygons (convex hulls built on them). This analysis, previously available only within the proprietary software ArcGIS, is extremely useful for providing very detailed information about the potential of each PT stop and its surrounding urban area. The second analysis concerns PT intermodality and introduces some elements of novelty. It is intended to assess intermodality beyond the PT nodes (hubs), exploring which paths in the street network have the highest probability of being taken to change from one line/mode to another. The evaluation is purely physical and only considers network distance. Its results are expected to be integrated with complementary dimensions such as proximity to Points of Interest, street comfort and safety for a holistic planning approach. Starting from any PT stop, a circular catchment area is drawn using a user-defined distance and the PT stops within it are selected. Among them, those with at least one PT line in common with the departure stop are discarded and the remainder are kept. This assumes that PT is generally faster than walking, so when a PT alternative is available, walking is less attractive. It is then shown how the starting stop is connected to the other stops via the most direct pedestrian path. Finally, once all the pedestrian paths have been drawn, the number of times each street segment is used is calculated, classifying segments according to their potential use for modal change.
The pedestrian graph is obtained through OSMnx, a library for retrieving, processing, and visualizing road network data from OpenStreetMap.
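On such a graph, the service-area computation can be sketched as a travel-time-bounded Dijkstra search; the toy graph and travel times below are illustrative assumptions (in the plug-in the cutoff is the user-defined interval of up to 60 minutes):

```python
import networkx as nx

# Toy combined PT+walking graph; edge weights are travel times in
# seconds (illustrative values, not real schedule data).
G = nx.DiGraph()
G.add_weighted_edges_from(
    [("stop1", "stop2", 300), ("stop2", "stop3", 600),
     ("stop1", "stop4", 2400), ("stop4", "stop5", 1800)],
    weight="travel_time",
)

# All nodes reachable within a 45-minute budget from stop1.
budget_s = 45 * 60
reachable = nx.single_source_dijkstra_path_length(
    G, "stop1", cutoff=budget_s, weight="travel_time"
)
print(sorted(reachable))  # stop5 (4200 s away) falls outside the budget
```

The edges actually traversed give the line output, while a convex hull over the reached nodes (e.g. with Shapely) gives the polygon output described above.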

The plugin was tested on two different case studies, Milan and Rio de Janeiro, producing significant results that highlight its utility in GTFS data-driven studies of urban public transportation networks. The outcomes of both analyses were consistent, demonstrating the plugin’s applicability to understanding the dynamics of metropolitan public transit networks. Overall, the plug-in stands out as an important tool that can analyse GTFS data and use it to build a network model of a city’s PT. It provides a flexible and easy-to-use tool for studying urban PT networks, which constitutes a significant addition to the geospatial community. By utilizing GTFS data, the plug-in offers a thorough overview of service coverage, accessibility, and connectivity within various metropolitan contexts. The two analyses offer a powerful means of studying specific areas of a city, showing the interconnections between stops and the possible routes that can be travelled, and quantitatively assessing the accessibility and intermodality of an urban area.

The ultimate goal is to contribute to a deeper understanding of urban public transportation networks and urban areas through a practical and intuitive tool that can be used by those involved in the analysis and management of city infrastructure. Work is also underway to extend these analyses to other city contexts, not limiting them to public transportation alone: for example, showing the distribution of Points of Interest within the city and highlighting how they are interconnected. This must, however, be done while trying to maintain a reasonable runtime, which can still be a problem for very complex and detailed networks.

Academic track
Omicum
14:00
30min
Destination Earth Data Lake (DEDL) – discovery, access and process data
Patryk Grzybowski

The Destination Earth initiative (DestinE), driven by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the European Space Agency (ESA) and the European Centre for Medium-Range Weather Forecasts (ECMWF), aims to create a highly accurate replica - a Digital Twin - of the Earth. The first two existing Digital Twins describe weather-induced and geophysical extremes, as well as climate change adaptation. In the coming years, the number of Digital Twins is expected to grow. Thus, to develop new models, there is a strong need to facilitate access to data and ways of working with data. This is made possible by one of the three key elements of DestinE - the Destination Earth Data Lake (DEDL), which provides open discovery, access, and big data processing services.

DEDL Discovery and Data Access services are provided by the Harmonized Data Access (HDA) tool, which offers a single, federated entry point to the services and data. The DestinE Data Lake federates with existing data holdings as well as with complementary data from diverse sources like in-situ, socio-economic, or space data. Very importantly, it provides access to data generated by the DestinE Digital Twins. All this allows for exploration, combination and assimilation of data shared by existing services with innovative Digital Twin data. What is more, all this data is provided as a full archive immediately available to the user. The services rely on the SpatioTemporal Asset Catalog (STAC) standard, which means:
• The search in the dataset is done according to the STAC protocol;
• The Federated Catalog Search Proxy component converts STAC queries into queries adapted to the underlying catalog and returns the results to the user in STAC format;
• The services are presented in a service catalog.

Thus, exploring the datasets and working with the data provided by DEDL is user-friendly as well as aligned with the newest trends and requirements.
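As a sketch of what a STAC item search against such a federated entry point looks like, the snippet below assembles a standard STAC API search body; the collection identifier, bounding box, time range and the endpoint mentioned in the comment are illustrative assumptions, not actual DEDL values:

```python
import json

# A standard STAC API item-search request body (illustrative values).
search = {
    "collections": ["SENTINEL-2-L2A"],   # assumed collection id
    "bbox": [26.6, 58.3, 26.8, 58.4],    # lon/lat box around Tartu
    "datetime": "2024-06-01T00:00:00Z/2024-06-30T23:59:59Z",
    "limit": 10,
}
body = json.dumps(search)
# POSTing `body` to the HDA STAC search endpoint would return a STAC
# FeatureCollection of matching items (authentication required).
```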

The Big Data Processing Services offered by DEDL provide:
• Cloud computing;
• STACK application development environment;
• Hook service.

The cloud computing service uses the ISLET infrastructure as a service (IaaS), deployed on OpenStack - open source cloud computing infrastructure software - with the Horizon interface. It allows users to create and manage virtual machines as well as local user storage through a graphical user interface (GUI) and a command line interface (CLI). It also makes it possible to use Kubernetes for more demanding jobs. What makes ISLET exceptional is that it provides services in proximity to the data holdings, as a distributed infrastructure close to High Performance Computing (HPC). This is possible thanks to data bridges - edge clouds that enable operating on large-volume data (including data generated by the DestinE Digital Twins) in conjunction with the computing capabilities of HPC.

The STACK application development environment utilizes JupyterHub/DASK with Python, R and Julia. It allows users to create a DASK cluster on a selected infrastructure/cloud site and do processing close to the data. Thanks to that, users do not need to adapt their own local environment to work with DEDL services.

The Hook service is a set of pre-defined workflows that users can run as ready-to-use processors, e.g. Sentinel-2 MAJA Atmospheric Correction, Sentinel-2 SNAP-Biophysical, or Sentinel-1 terrain-corrected backscatter. It also enables workflow functions to generate on-demand higher-level products, such as temporal composites.

The DestinE Data Lake represents a groundbreaking initiative that transforms the management of Earth Observation data, enhancing capabilities in climate research and bolstering initiatives for sustainable development. DEDL provides a unique infrastructure (ISLET), open source services to discover and obtain data (HDA), reliable and trusted processors (Hook) and a user-friendly environment for working with data (STACK). The fundamental principles behind the DEDL, together with novel cloud solutions, will enable data harmonization, federation and processing on a scale beyond current capabilities.

Open Data
Van46 ring
14:00
30min
GeoMapFish Status
Yves Bolognini

GeoMapFish is an open source platform for the development of web-based geographic information systems (WebGIS). The platform is rich in functionality, highly customizable and offers multiple interfaces - desktop, mobile, administration and an API for integrating maps into third-party applications. OpenLayers and an OGC architecture allow the use of different cartographic engines: MapServer, QGIS Server, GeoServer. A solid and proven backend enables opening up to other web viewers.

The platform has been developed in close collaboration with a large user group. It targets a variety of uses in public administrations and private groups, including data publication, geomarketing and facility management.

A highly integrated platform, a large number of features, fine grained security and a mature reporting engine are characteristics of the GeoMapFish solution. In this talk, we will present the key usages, the state of the migration process to web components and latest functional developments. We will share our experiences from the productive operation of GeoMapFish-based geoportals in various Kubernetes clusters.

State of software
LAStools (327)
14:00
30min
QGIS 3D and point clouds enhancements
Saber Razmjooei, Martin Dobias

We have been busy improving QGIS 3D, point clouds and 3D Tiles integration in the past year. This presentation highlights the key features and enhancements.

State of software
QFieldCloud (246)
14:00
30min
View Server - Cloud Native EO Data Access
Fabian Schindler-Strauss

The View Server (VS) software project was developed for cloud native geospatial data access. This includes functionality to browse, search, transform and download Earth Observation (EO) and other geospatial data via a range of OGC compliant and other standardized interfaces such as STAC, OpenSearch, WMS, WMTS and WCS and their OGC API successors.

Based on EOxServer, powerful rendering capabilities are built in, allowing on-the-fly data transformation and colorization for exploring datasets, which can subsequently be tiled and cached via the built-in MapCache. Data is ingested via feature-rich components to harvest, enrich metadata, preprocess and pre-seed caches, to offer a performance-optimized and flexible rendering of the data via its service endpoints. Using the harvested and enriched metadata, CQL filters can be employed to filter down the records to be visualized or searched, whereas an expression language is used to flexibly define the renderings of either RGB or color-scaled outputs.

As a cloud native component, View Server supports various storage systems, such as OpenStack Swift and S3, and can be installed in Kubernetes (via Helm Charts) or Docker Swarm environments.

In the recently concluded Earth Observation Exploitation Platform Common Architecture (EOEPCA), View Server was employed in both the global and user workspace data access contexts.

In ongoing developments, View Server will be made compatible with the other components of eoAPI, allowing it to share a common data model based on STAC and to act as an interchangeable component in an eoAPI deployment while keeping all of its rendering features.

View Server is also used in other operational deployments for data preprocessing, access and visualisation. These include the ESA Payload Data Ground Segment Software Applications (PDGS) and the Copernicus Space Component Data Access system (CSCDA) for a vast number of active and discontinued optical and SAR satellite missions. Lastly, it supports data access for our Agricultural Area Monitoring Systems (AMS).

https://gitlab.eox.at/vs/vs

https://eox.at/2021/09/eoxserver-1-0/

State of software
GEOCAT (301)
14:30
14:30
30min
Architecture of OGC Services Deployment on Kubernetes Cluster based on CREODIAS Cloud Computing Platform
Marcin Niemyjski, Michał Bojko

The Copernicus Data Space Ecosystem provides open access to a petabyte-scale EO data repository and to a wide range of tools and services, limited by some predefined quotas. For users who would like to develop commercial services, or who need larger quotas or unlimited access to services, the CREODIAS platform is the solution. In this study an example of such a (pre)commercial service will be presented, which publishes Copernicus Sentinel-1 and Sentinel-2 products (and selected assets) in the form of a WMS (Web Map Service) and a WCS (Web Coverage Service). The architecture of the services, based on a Kubernetes cluster, allows horizontal scaling of a service with the number of user requests. The WMS/WCS services to be presented combine data discovery, access, (pre-)processing, publishing (rendering) and dissemination capabilities within a single RESTful (Representational State Transfer) query. This gives the user great flexibility in terms of on-the-fly data extraction across a specific AOI (Area Of Interest), mosaicking, reprojection, simple band processing (cloud masking, normalized difference vegetation index) and rendering. The performance of the Copernicus Data Space Ecosystem and the CREODIAS platform, combined with efficient software (Postgres 16 with the PostGIS extension, MapServer with a GDAL backend), makes it possible to achieve WMS/WCS service response times below 1 second on average. This, in turn, gives a potential for massive parallelization of the computations given the horizontal scaling of the Kubernetes cluster. The work demonstrates the capabilities of European data processed using open software deployed on a European cloud-based ecosystem in the form of the CDSE.
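To illustrate the "single RESTful query" usage pattern, the snippet below assembles a WMS GetMap request of this kind; the base URL, layer name and parameter values are hypothetical, chosen only to show the shape of such a request:

```python
from urllib.parse import urlencode

base = "https://example-wms.creodias.eu/wms"  # hypothetical endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "SENTINEL2_L2A_TRUE_COLOR",  # assumed layer name
    "CRS": "EPSG:4326",
    "BBOX": "58.3,26.6,58.4,26.8",   # AOI extracted on the fly
    "WIDTH": "1024",
    "HEIGHT": "1024",
    "FORMAT": "image/png",
    "TIME": "2024-06-15",            # acquisition date of interest
}
url = base + "?" + urlencode(params)
# GETting `url` would return a rendered, reprojected mosaic for the AOI.
```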

Use cases & applications
Van46 ring
14:30
30min
Climate Risk Overview of Coastal Hotspots
Zehra Jawaid, Görkem

As part of the company’s goal to make coastlines more resilient and work on nature-based solutions, we created a tool that gives an overview of flood-vulnerable areas and protected areas.
Using mostly open geodata and open source frontend libraries, the GIS and Data Lab team at Van Oord gathered and analysed key parameters such as population, low-lying land and expected sea level rise to anticipate the flooding hazard for global coastlines and societies.
The climate risk overview tool is open to use at: https://climaterisk.data.vanoord.com
The tool is meant to encourage collaboration and discussion between different organizations on climate solutions for coastal hotspots and offer different views of areas near the coast based on selected criteria and applied filters.
We’d like to talk about the process and some interesting GIS problems we came across during this project:
Several iterations to break the world’s coastlines into equal polygon areas of 10 km² were tried. With this as a base layer for aggregated calculations of people exposed to flooding, it became tricky to capture the Small Island Developing States with the medium-resolution data available. How did this get solved?
Another aspect we had to think about was how to load the results of over 60,000 points in a web map application, without a full-fledged backend, which performs well with respect to user experience – the user should be able to see instant results while applying various filters on the layer attributes.
Our stack – Vue, Quasar, dc, PostGIS, Postgrest, Python

Use cases & applications
QFieldCloud (246)
14:30
30min
Demystifying AI4EO: from prototyping to training an AI model for Earth Observation
Fran Martín, Juan B. Pedro

Leveraging AI technology and Earth observation data to extract valuable insights is challenging. AI4EO engineers face, among other problems, the need for high-performance cloud environments for model training and inference; the scarcity of suitable, accessible Training Datasets used to train AI models to perform specific tasks (the main barrier being that gathering and labelling EO data is a convoluted process); and the need for libraries and environments that streamline the training of models specifically designed to use EO data.

With the aim of alleviating the pain points of this entire process and encouraging the development of applications that extract valuable information from Earth Observation data through AI, trying to generate a positive impact, EarthPulse has developed a set of open source solutions aimed at democratizing AI4EO, covering everything from the generation and labeling of a Training Dataset to the training of a model. We'll dive into SCANEO, an intelligent and AI-powered standalone labeling tool for spatial data; EOTDL, a complete environment funded by ESA that allows the creation, exploration, download and upload of both Training Datasets and pre-trained ML models for EO applications, and PyTorch EO, a Python library that aims to make Deep Learning for Earth Observation data easy and accessible to real-world cases and research alike.

Use cases & applications
GEOCAT (301)
14:30
30min
QGIS in your browser - QGIS WASM
Saber Razmjooei, Martin Dobias

Imagine the analytical capabilities of QGIS, the popular desktop GIS software, readily accessible in your web browser. With QGIS WebAssembly, that vision becomes reality. This groundbreaking technology brings core QGIS functionalities to the web, empowering everyone to publish and share their geospatial data without cumbersome spatial data infrastructure.

This presentation will give the audience an overview of the current state of QGIS WebAssembly, its potential and hurdles we have to overcome to bring this technology to the end users.

The code is now available here: https://github.com/qgis/qgis-js

State of software
LAStools (327)
14:30
30min
Street-level photo collections and large-size architectural documentation: Octree-based point cloud for storing, visualizing and assessing buildings information and its reliability
Antonio Suazo
  1. Introduction

In recent years, the documentation of built heritage has undergone a true revolution with the advent of drones, airborne lidar, cloud processing, and online visualization platforms. Taking advantage of this momentum, the challenges of the niche corresponding to the documentation of urban archaeological sites and large architectural ensembles have been renewed and updated. One of the premises now available is the possibility of carrying out a dedicated photographic capture at street level, which makes it possible to cover large areas and generate documentation of the structures in a very short time.

In this context, and to obtain documentation of periods prior to the present, some studies have tested the possibility of resorting to non-dedicated photographic captures from heterogeneous sources (i.e. Internet photo collections). Although this has already been tested and confirmed, it is still difficult to know in detail the coverage of these photographic sets and to determine possibly well-documented or undocumented areas. On the other hand, understanding that the distance between the structures and the camera has an impact on the quality of the information, there are still no tools available to evaluate the information from these heterogeneous sources and the reliability of the documentation they produce.

  2. Methods

As a way to address the presented problem, this paper proposes to combine two contributions in an integrated strategy.

  • On the one hand, we use the method presented in (Suazo, 2021), where the visual information contained in the images is represented as a point cloud. This is the most promising approach and proposes a georeferenced workflow, although it has only been tested on small structures, for which it already generates huge volumes of data; its scalability therefore calls for an improved application strategy for large architectural ensembles. For this purpose, it is proposed to store and manage these massive point clouds using a format based on a spatial partitioning system, an octree, associating in a georeferenced database the relationship between the points obtained and the associated photographs.

  • On the other hand, in relation to the reliability of the information provided by the photographs, it is proposed to classify the points in a preprocessing step, which calculates the actual distance between the camera sensor and the surfaces depicted in it. This classification information is then stored and used at run time as an 'attribute' of the individual point clouds.

Regarding the implementation details, the Entwine Point Tile (EPT) format has been chosen, an octree-based storage format for massive amounts of point cloud data. Among its features, it supports defining the geographic coordinate system of individual point clouds, and at the same time it supports storing per-point classification attributes.
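The distance-based classification step can be sketched as follows; the camera position, point coordinates and bucket thresholds are illustrative assumptions, not values from the study:

```python
import numpy as np

# Reconstructed points and the position of the camera that observed
# them (illustrative coordinates in metres).
points = np.array([[0.0, 0.0, 2.5],
                   [4.0, 0.0, 1.0],
                   [30.0, 5.0, 2.0]])
camera = np.array([0.0, 0.0, 1.5])

# Actual camera-to-surface distance per point...
dist = np.linalg.norm(points - camera, axis=1)
# ...bucketed into classes (assumed edges: <5 m, 5-20 m, >=20 m),
# later stored as a per-point classification attribute in the EPT data.
classification = np.digitize(dist, bins=[5.0, 20.0]).astype(np.uint8)
print(classification.tolist())  # [0, 0, 2]
```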

As a way to test the proposed approach, we have chosen to test its application in two known cases in Rome, Italy: for relevant architectural works we have chosen the Pantheon and its immediate surroundings, and for large archaeological sites, the central area of the Roman Forum. In both cases we have worked with datasets published under open licenses, both with respect to the set of photographs and the data on their local registration: in the first case, they were obtained from the "Heritage-Recon" benchmark (Sun et al, 2022), while the second was retrieved from the "1DSfM Landmarks" dataset (Wilson and Snavely, 2014). To infer global registration, known coordinates from LiDAR scans were used as ground truth, derived from Open Heritage 3D, also available under open licenses.

  3. Results

For the evaluation of the system, a real-time application based on WebGL technology was developed to allow free exploration of the 3D georeferenced data in the case studies, taking advantage of the partitioning system as a mechanism for loading and unloading point clouds, thus evaluating the scalability of the proposed approach.

The application was developed based on the Potree point cloud viewer (Schütz, 2020), under the FreeBSD license, built on the Three.js library (Cabello, 2024), under the MIT license. The WebGL specification and implementation of the EPT format (Manning, 2024) is under the LGPL-2.1 license, and provides for the possibility of representing both color information (coming from the pixels in each photograph) and the new classification value. To obtain the latter, the individual point clouds were processed in CloudCompare (Girardeau-Montaut, 2024), storing the result as a scalar field. The workflow is completed by temporarily storing individual point clouds in .LAS format, which are then indexed and processed into the EPT format for consumption by the final application. All the files needed to run the developed application, as well as the source files used and the processing code, will be published under open licenses on the project website in our GitHub account.

  4. Conclusions

The results of the work confirm the expected applicability and scalability of the system and the underlying proposed approach. This has been ascertained through various exploration exercises using the testing application.

On the one hand, with respect to scalability, it is confirmed that the system is able to work with data associated with large point clouds, covering large areas such as the Pantheon (14,250 m²) and the Roman Forum (80,830 m²). The EPT format manages to represent both the RGB color information and that associated with the "distance" classifier. On the other hand, with respect to the applications derived from the latter, it has been possible to visually infer areas with greater and lesser photographic coverage and sectors without documentation, and to filter the point clouds according to the calculated "distance" value, considered a reliability attribute of the photographic collections used.

Thanks to this study, it has been possible to validate a system that allows the correct representation of large amounts of visual information associated with photographs from heterogeneous sources, as well as to assess the quality of the documentation they provide. This opens up new questions and possibilities, which are already being identified, and undoubtedly contribute to the better documentation and management of information associated with large-scale built heritage.

Academic track
Omicum
15:00
15:00
5min
Cloudferro's open source QGIS plugin for discovery of Copernicus data
Michał Bojko

Cloudferro’s repository contains nearly 67 PB of EO data. So far, there has not been any service providing easy access to the data based on OGC standards. Over the past year, work went into creating a Web Map Service (WMS) specifically for the European satellite missions Sentinel-2 and Sentinel-1. As a result, the company developed a broad set of OGC services, based on analysis-ready original Sentinel data stored in Cloudferro’s repository, which serves as an official ESA storage. Although the services are in place, a tool is still needed to enable users to use them.

This paper presents that tool, which builds on those services and works as a framework for potential users in the form of a QGIS plugin. Although the web services are based on OGC standards, which allows the majority of GIS software to connect to them, building and using raw URL requests remains unintuitive. The QGIS plugin provides a simple GUI to construct all the necessary requests in a simple and fast way. Thanks to that, users can start working with EO data in a simple and comfortable manner. The plugin not only serves as a display tool, but also provides functions for analysis and download of Sentinel-1 and Sentinel-2 images via the Web Coverage Service (WCS). Moreover, thanks to the use of Virtual Rasters (VRT), the displayed data can be analysed on demand, e.g. masking all clouds in Sentinel-2 true-colour images.

The biggest advantage of this solution is easy access to original, unprocessed Sentinel data, acquired every day. Since the plugin provides both display and download capabilities, it is well suited to small processing tasks carried out by students at universities. In this way, students can easily get in touch with Sentinel data and enlarge the European EO community.

Use cases & applications
GEOCAT (301)
15:00
30min
Code for Earth – and what’s in for you
Athina Trakas

Code for Earth, an ECMWF-run partnership programme, fosters innovation and collaboration and supports advancements in weather, atmosphere and climate research, including in the Copernicus programme and the Destination Earth (DestinE) initiative, which are both EU-funded. Since its first edition in 2018, the programme has brought together talented individuals and developer teams with experienced mentors from ECMWF to work on cutting-edge projects covering a wide range of topics. In 2023, ten developer teams participated in Code for Earth.

This presentation will give an insight into the programme and the current 2024 edition. It will also explain how interested people can join Code for Earth and make an impact on real-world challenges.

Each summer, several individuals and developer teams from different backgrounds test, explore and/or develop open source software solutions supported by ECMWF’s mentors. Their projects tackle topics such as data science in Earth-, weather-, climate- and atmosphere-related challenges, including visualisation, machine learning/artificial intelligence, user support tools and data analysis. By encouraging multidisciplinary collaboration and embracing open source principles, Code for Earth facilitates the development of cutting-edge solutions and advancements in Earth system sciences.

Since its start, the programme has produced 45+ open-source software developments highly beneficial to activities at ECMWF.

Use cases & applications
Van46 ring
15:00
30min
Geometrically guided and confidence-based denoising
David Youssefi

Introduction

As part of the CO3D mission (Lebegue et al., 2020), carried out in partnership with Airbus, CNES is developing the image processing chain including the open source photogrammetry pipeline CARS (Youssefi et al., 2020). By acquiring land areas within two years, providing 4 bands (Blue, Green, Red, Near Infra Red) at 50 cm, the objective is to produce a global Digital Surface Model (DSM) with 1 m relative altimetric error (CE90) at 1 m ground sampling distance (GSD) as target accuracy. The worldwide production of this 3D information will notably make a real contribution to the creation of digital twins (Brunet et al., 2022). Satellite imagery provides global coverage, which unlocks the possibility to update the 3D model of any location on Earth within a rapid time frame. However, due to the smaller number of images or lower resolution than drone or aerial photography, a denoising step is necessary to extract relevant 3D information from satellite images. This step smooths out features while retaining their edges that are sometimes barely recognizable relative to the sensor resolution, such as the edges of small houses or the narrow gaps between them as our results show.

Geometrically guided and confidence-based point cloud denoising

Point cloud denoising is a topic widely studied in 3D reconstruction: several methods, ranging from classical to deep learning-based, have been designed over the past decades. In this article, we propose a new method derived from bilateral filtering (Digne and de Franchis, 2017) that integrates new constraints. Our aim is to show how a priori knowledge can be used to guide denoising and, above all, to produce a denoised point cloud that is more consistent with the acquisition conditions or the metrics obtained during correlation.

This new method takes into account two important constraints. The first is a geometric constraint. The input to the denoising step is a photogrammetric point cloud resulting from matched points on the sensor images. Our pipeline CARS derives lines of sight from these matched points, and the intersection of these lines gives the target 3D positions. In our method, when we denoise this point cloud, the points are constrained to stay on their initial lines of sight. This has two main advantages: the associated color remains consistent with the new position, and points will not accumulate in certain spaces and create dataless areas.

The second constraint comes from the correlator PANDORA. The article (Sarrazin et al., 2021) describes a confidence metric, named the ambiguity integral metric, to assess the quality of the produced disparity map. This measurement determines the level of confidence associated with each of the points. Each point is moved along its line of sight according to its confidence: the less confident the correlator, the more the point is moved, while respecting the geometric constraint mentioned earlier. Apart from these two major added constraints, our method still uses the usual denoising parameters, such as the initial color and position of each point with respect to its neighborhood. Normal smoothing is included to compensate for correlation inaccuracy.
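A minimal numerical sketch of such a constrained update, assuming unit sight directions and per-point confidences in [0, 1]; the Gaussian all-pairs weighting below is a simplification of the bilateral kernel, and all names and values are illustrative, not the CARS implementation:

```python
import numpy as np

def denoise_step(points, sight_dirs, confidence, sigma=1.0):
    """One constrained smoothing pass over an (N, 3) point array."""
    denoised = points.copy()
    for i, p in enumerate(points):
        # Gaussian-weighted average of the other points (stand-in for
        # a proper bilateral kernel over a local neighbourhood).
        diff = points - p
        w = np.exp(-np.sum(diff**2, axis=1) / (2.0 * sigma**2))
        w[i] = 0.0
        target = (w[:, None] * points).sum(axis=0) / w.sum()
        # Project the correction onto the line of sight, so the point
        # stays on its sight line...
        d = sight_dirs[i] / np.linalg.norm(sight_dirs[i])
        step = np.dot(target - p, d)
        # ...and damp it by confidence: confident points barely move.
        denoised[i] = p + (1.0 - confidence[i]) * step * d
    return denoised

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.3], [2.0, 0.0, 0.0]])
dirs = np.tile([0.0, 0.0, 1.0], (3, 1))  # vertical sight lines
conf = np.array([1.0, 0.0, 1.0])         # only the middle point is uncertain
out = denoise_step(pts, dirs, conf)
print(out[1].tolist())  # the low-confidence point is pulled onto the plane
```

The fully confident endpoints stay put, while the uncertain middle point slides along its sight line toward its neighbours.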

Evaluation and applications

Early results are extremely promising. A visual comparison of the mesh obtained before and after our proposed denoising step in a dense urban area will be provided in the final article (Figure 1). This illustration shows that the regularization preserves fine elements and sharp edges and smooths out the flat features (roofs, facades). Even if we cannot yet guarantee that denoising will improve the accuracy of the 3D point cloud (or of the DSM compared to airborne LiDAR; this verification will be the subject of future work described in the full paper), we can already affirm that the proposed denoising filter significantly improves rendering and realism. In fact, this denoising makes it possible to enhance roof sections that are barely visible in the raw point cloud, thus facilitating the building reconstruction stage for the generation of 3D city models (CityGML). In order to evaluate the quality of the 3D reconstruction on a larger scale, we plan to use Lidar HD®. This freely distributed data contains 10 points per m² and includes a semantic label for each point, allowing for a class-specific quality assessment according to building, vegetation or ground. We are currently benchmarking state-of-the-art solutions according to metrics that reflect how fine elements are missed in the absence of geometric and confidence constraints.

Perspectives

In future work, we would like to explore the potential of adding the constraints proposed in this paper to other denoising methods, and to find out whether this is possible with deep learning techniques. In addition to comparisons with ground truth, we would also like to show that denoising makes it easier to reconstruct 3D city models, for example by demonstrating that we can increase the level of detail even with very-high-resolution satellites such as Pleiades HR. Finally, with a view to using 3D models as digital twins, this denoising could become a tool for simplifying 3D models according to the needs of specific simulations. We would therefore like to begin a parameterisation study to quantify the trade-off between simplicity and quality.

Academic track
Omicum
15:00
30min
MapStore, a year in review
Lorenzo Natali, Stefano Bovio

MapStore is an open source product developed for creating, saving and sharing maps, dashboards, charts, geostories and application contexts in a simple and intuitive way, directly online in your browser.

MapStore is built on top of React and Redux; it is cross-browser and mobile ready. It does not explicitly depend on any mapping engine, but it supports OpenLayers, Leaflet and Cesium, and additional engines could also be supported.

The presentation will give the audience an extensive overview of the MapStore functionalities for the creation of mapping portals, covering both previous work and work planned for future releases. Finally, a range of MapStore case studies will be presented to demonstrate what our clients (like the City of Genova, the City of Florence, Halliburton, Austrocontrol and more) and partners are achieving with it.

State of software
LAStools (327)
15:00
5min
State of MapServer
Seth Girvin

MapServer is a founding OSGeo project and used for publishing spatial data and interactive mapping applications to the web [1]. This talk provides an overview of enhancements and features in the new 8.2 release of MapServer and its scripting language MapScript, along with upcoming plans for the future.

[1] https://mapserver.org/

State of software
QFieldCloud (246)
15:05
15:05
5min
Random Geodata Generator
Jakob Miksch

For developing geospatial applications, sample data for a specific region is often needed. This lightning talk presents a web application that lets users create random vector data within a required extent. The data can be exported as GeoJSON or Shapefile, and the application runs completely in the browser without any connection to a backend. The application itself is built with Vue.js and OpenLayers. The source code and the website are freely available.
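
The core idea of generating random vector data inside an extent can be sketched in a few lines (the actual application is JavaScript-based; this Python version with hypothetical names is only illustrative):

```python
import random

def random_points_geojson(extent, n, seed=None):
    """Generate n random points inside extent = (minx, miny, maxx, maxy)
    as a GeoJSON FeatureCollection."""
    rng = random.Random(seed)
    minx, miny, maxx, maxy = extent
    features = [{
        "type": "Feature",
        "properties": {"id": i},
        "geometry": {
            "type": "Point",
            "coordinates": [rng.uniform(minx, maxx), rng.uniform(miny, maxy)],
        },
    } for i in range(n)]
    return {"type": "FeatureCollection", "features": features}

# e.g. five random points in a bounding box around Tartu
fc = random_points_geojson((26.0, 58.0, 27.5, 58.7), 5, seed=42)
print(len(fc["features"]))  # -> 5
```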

Use cases & applications
QFieldCloud (246)
15:05
5min
WebGIS for Coastal Resilience: A Use case for developing a coastal erosion Observatory
Anastasia Triantafyllou, Pano Voudouris, Dimitris Tsagkarlis

Due to the compounding impacts of climate change and human activities, the frequency and severity of hazards and natural disasters are on the rise, exerting significant impacts on the environment, economy, and human lives. Responding to this shifting landscape, numerous institutions and political structures are redirecting their focus from emergency response to proactive disaster risk reduction and planning. Notably, the public Authority has sponsored the project titled "Creation of an Integrated Observatory System for Preventing and Managing the Risk of Coastal Erosion due to the Impact of Climate Change through the Utilization of Earth Observation Data". This initiative employs Earth Observation, combined with in-situ data, advanced algorithms, and models, to develop comprehensive knowledge on hazard exposure and vulnerability.
The applied methodology encompasses three thematic phases: Phase A includes the creation of algorithms and tools for calculating necessary indicators, Phase B involves the design of the web GIS application hosting the observatory, its services, and derived datasets, and Phase C entails evaluating the current state and proposing alternatives for risk management.
Spatial databases were continually reassessed throughout the project, hosting digital products created by specialized Python scripts that process optical images from Sentinel-2 satellites, Sentinel-1 SAR acquisitions, and in-situ measurements. These data sources contribute to generating timeseries of multiple indicators related to coastline alterations. The extensive monitoring database serves not only to establish correlations between derived indicators and human activity but also to calculate 50- and 100-year simulation indicators for coastal vulnerability under tidal wave pressure. Additionally, a tool for determining passive flood mapping in different sea level rise scenarios is developed using the bathtub approach.
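
The bathtub approach mentioned above can be sketched as follows (a deliberately minimal version that ignores hydrological connectivity; the array values are illustrative):

```python
import numpy as np

def bathtub_flood(dem, sea_level):
    """Simplest 'bathtub' flood mask: every cell at or below the given
    sea level is flooded, regardless of connectivity to the sea.
    Operational tools usually add a connectivity test on top of this."""
    return dem <= sea_level

# Tiny synthetic elevation grid (metres above current sea level)
dem = np.array([[0.5, 1.2, 3.0],
                [0.8, 2.1, 4.0]])
print(bathtub_flood(dem, 1.0))  # cells <= 1.0 m are flagged as flooded
```
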
All this information is seamlessly integrated into a web GIS application, named "Observatory System for Coastal Erosion." The application utilizes the Angular web application framework while the Leaflet library enhances interactive mapping, providing a user-friendly interface. Navigation features include zoom in/out tools, selection/identification tools, and an address bar. Users can search data using descriptive or spatial parameters, applying filters, with results displayed in a table for export in .csv or .shp formats.
This application ensures interactivity, interoperability, and information exchange, supporting decision-making and evaluating alternative coastal zone development strategies. Fully aligned with national Integrated Coastal Zone Management (ICZM) principles, the observatory has been in use by the Department of Environment and Industry, Energy & Natural Resources of the Region of Central Macedonia since 2021. Networking activities have already commenced among stakeholders and public authorities, addressing erosion issues highlighted by the project's results and exploring alternative, sustainable prevention measures.

Use cases & applications
GEOCAT (301)
15:10
15:10
5min
GIS services for public safety with open source GIS software
Katre Kasemägi

In Estonia, most internal security organisations and their applications use our GIS services (with IT support offered by SMIT), which are provided with the help of FOSS geospatial software. In 5 minutes I will showcase, through user stories, how open source software is helping to save lives, property and the environment. I will give an overview of the services and software that make it happen, for example MapServer, OpenLayers, etc.

Use cases & applications
GEOCAT (301)
15:10
5min
Geoharmony: A QGIS Plugin Unveiling Satellite Insights for Sustainable Development Synergy and Ecological Rejuvenation
Berit Mohr

Within the framework of the German Development Agency’s (GIZ) “Strengthening Drought Resilience Programme” in Ethiopia, GFA developed a raster data methodology for analysing rehabilitated and protected dry valleys and implemented advanced trainings for governmental personnel. The project aims to assess the impact of locally installed irrigation infrastructure, i.e. water spreading weirs (WSW) along the riverbeds, on their immediate surroundings. We have pioneered a rigorous and scientifically grounded methodology leveraging advanced satellite imagery. Our analysis incorporates vegetation indices and employs the Mann-Kendall test to ascertain notable trends in the various changes induced by the WSW. These discerned patterns are systematically juxtaposed against a carefully selected control group for robust comparative analysis. A QGIS plugin has been developed, allowing any user to undertake critical impact assessments of the WSW. During the design phase, we applied human-centric design principles, ensuring the plugin blends in efficiently with daily work routines. Minding the potential gaps in technical capacity in target groups, the plugin and methodology were specifically designed so that:
1) anyone can use it
2) further development can be undertaken and
3) it can work in offline environments, e.g. to maintain utility in remote or underserved areas.
Sentinel and Landsat data are acquired for specific time frames and processed for a region of interest through an intuitive and customisable user interface. Different vegetation indices can be selected, on which the Mann-Kendall test is then applied. Finally, if desired, a customised report can be exported showing the significant changes and the accompanying test results.
In this talk, I will demonstrate the plugin and describe the developed methodology in further detail. Furthermore, I would like to share our lessons learned and immediate insights from application in the development cooperation context.
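
The Mann-Kendall test at the heart of the methodology reduces to a simple pairwise statistic; a minimal sketch (significance testing via the normal approximation is omitted, and the index values are hypothetical):

```python
import itertools

def mann_kendall_s(series):
    """Mann-Kendall S statistic: the sum of the signs of all pairwise
    later-minus-earlier differences. S near +n(n-1)/2 indicates a
    monotonic increasing trend (e.g. recovering vegetation); S near
    the negative bound indicates a decreasing one."""
    sign = lambda x: (x > 0) - (x < 0)
    return sum(sign(b - a) for a, b in itertools.combinations(series, 2))

# Hypothetical vegetation-index time series upstream of a weir
ndvi = [0.21, 0.25, 0.24, 0.30, 0.33, 0.35]
print(mann_kendall_s(ndvi))  # -> 13 (out of a maximum of 15: strong upward trend)
```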

Use cases & applications
QFieldCloud (246)
15:15
15:15
5min
Crossing the bridge from research to operational. FOSS based geo-knowledge services dedicated to the insurance sector
Ilie Codrina

For geospatial enthusiasts, working with data, debugging code, running geospatial algorithms, making maps and then more maps to best depict the momentary state of an environmental or socio-economic variable is a great and valuable use of working time. But how do we get that valuable knowledge onto the radar of non-geospatial professionals? Onto the radar of the professionals who could, and would, highly benefit from geospatial knowledge but have no time, interest or curiosity to invest in learning new geo-dedicated skills? What about the operational businesses for which tabular data and e-mail are the main working tools, and which have no resources to invest in bringing the geospatial component on board? It is no secret that selling geo-services is not easy even for proprietary software vendors with a marketing budget and sales people. Building on top of FOSS and then crossing the bridge from research to operational brings interesting, and quite numerous, obstacles to overcome as well.
In this talk, the authors present the long and sinuous road of getting geospatial-extracted knowledge out of the geospatial field and into the... wild.

Use cases & applications
GEOCAT (301)
15:15
5min
Processing and publishing Maritime AIS data with GeoServer and Databricks in Azure
Andrea Aime

The amount of data we have to process and publish keeps growing every day; fortunately, the infrastructure, technologies, and methodologies to handle such streams of data keep improving and maturing. GeoServer is an open source web service for publishing your geospatial data using industry standards for vector, raster, and mapping. It powers a number of open source projects like GeoNode and geOrchestra and is widely used throughout the world by organizations to manage and disseminate data at scale. We integrated GeoServer with some well-known big data technologies like Kafka and Databricks, and deployed the systems in the Azure cloud, to handle use cases that required near-real-time display of the latest received AIS data on a map, as well as background batch processing of historical Maritime AIS data.

This presentation will describe the architecture put in place, and the challenges that GeoSolutions had to overcome to publish big data through GeoServer OGC services (WMS, WFS, and WPS), finding the correct balance that maximized ingestion performance and visualization performance. We had to integrate with a streaming processing platform that took care of most of the processing and storing of the data in an Azure data lake that allows GeoServer to efficiently query for the latest available features, respecting all the authorization policies that were put in place. A few custom GeoServer extensions were implemented to handle the authorization complexity, the advanced styling needs, and big data integration needs.

Use cases & applications
QFieldCloud (246)
15:20
15:20
5min
Detecting and mapping highway location signs from panoramic images collected by a mobile mapping system
Andri Riid

Highway location markers are the modern-day equivalent of what are historically known as milestones and are typically small signs on the side of the road informing the driver of the kilometer count from the start of the road (in some countries the markers also carry the road ID).

In Spain, where the current study was carried out, there are four different classes of kilometric milestones, as they are called there, several of them also having color variations corresponding to different road classes: motorway, state road and regional roads of three levels. In addition, roads belonging to a European itinerary are complemented with a green plate carrying the European road number (only motorways, state roads and 1st-level regional roads can belong to a European itinerary).

The goal of the current study was to identify and localize the kilometric milestones in the panoramic images collected by mobile mapping systems surveying Spanish roads.

To achieve that, two convolutional neural networks, alongside additional algorithms, were employed. As a first step, traffic signs were located in the images using a YOLOv5 object detection network [1], which yields bounding boxes of detected traffic signs. This detector has evolved through several iterations of development at EyeVi, with the latest version trained on over 39 thousand annotated images.

In the next step, the kilometric milestone images are extracted from the bounding boxes, resized to a standard size of 224×224 pixels and presented to a ResNet50-type classification network [2] to determine the type of the kilometric milestone. The classification network was trained specifically for the project, and the training data was automatically annotated using image embeddings: it is possible to query the top-k embeddings for a particular manually selected image patch, or to query for all similar images by defining a (cosine) similarity threshold. The embeddings of the image patches were computed using CLIP [3] as the encoder.
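
The embedding-based annotation queries described above amount to cosine-similarity retrieval; a minimal sketch with synthetic 2-D embeddings (real embeddings come from a CLIP encoder and have hundreds of dimensions):

```python
import numpy as np

def query_similar(embeddings, query, threshold=None, k=None):
    """Return indices of image-patch embeddings similar to a query
    embedding: either all patches above a cosine-similarity threshold,
    or the top-k most similar ones."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = e @ q                      # cosine similarity to the query
    if threshold is not None:
        return np.where(sims >= threshold)[0]
    order = np.argsort(-sims)         # most similar first
    return order[:k]

# Three toy patch embeddings; the first two resemble the query patch
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(query_similar(emb, np.array([1.0, 0.0]), k=2))  # -> [0 1]
```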

However, as the mobile mapping was in reality carried out only on motorways and state roads, not all kilometric milestone classes and variations were available for the training data; the number of kilometric milestone classes was thus reduced to 3. Nevertheless, the kilometric milestone training samples were collected from over 500 thousand panoramic images, and the resulting total number of collected kilometric milestone images was above 1800. The obtained F1-scores for the classification of the three kilometric milestone classes on test data were 97.1%, 98.7% and 98.2%, respectively.

Determining the geographical locations of detected kilometric milestones from panoramic images is a rather challenging task because one traffic sign can be found in a number of consecutive panoramic images (which are shot every 3 meters). Complementary tracking and localization algorithms were used for this purpose. The goal of the tracking algorithm is to determine which bounding boxes in the stream (consecutive images) represent a single sign in the physical world. The idea behind the tracking algorithm is to create bounding box vectors from the positions where the image was taken towards the bounding box on the observed image (placed one meter ahead of the image shooting position). Given a pair of bounding box vectors, a number of properties are calculated for the pair, such as the minimum separation of the vector lines, the angle between the two vectors and the convergence point of the two vectors. This is followed by triplet analysis in order to find strong triplets whose convergence points are reasonably close to each other and whose vectors are not parallel. Strong triplets serve as the seeds of tracks. The algorithm then goes through a number of additional steps in which single bounding box vectors that are not yet part of any track are merged into existing tracks if the two are sufficiently close to each other.
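
A rough sketch of the pairwise geometry used by the tracking step, for two bounding-box vectors in the ground plane (the formulas and names are our assumptions, not the authors' exact implementation):

```python
import math

def pair_properties(p1, d1, p2, d2):
    """For two 2D bounding-box vectors (origin p, direction d), compute
    the angle between their directions and their convergence
    (intersection) point, or None if the vectors are parallel."""
    ang = abs(math.atan2(d1[1], d1[0]) - math.atan2(d2[1], d2[0]))
    # 2D cross product of the two directions; zero means parallel lines
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-12:
        return ang, None
    # Parameter t along the first line at which the two lines meet
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / cross
    return ang, (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two camera positions looking at the same sign from different angles
angle, conv = pair_properties((0, 0), (1, 1), (2, 0), (-1, 1))
print(conv)  # -> (1.0, 1.0)
```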

From there, localization is fairly straightforward: it is first performed pairwise by computing a single localization result for each bounding box pair in the track. The final location for the track is then found by a fairly advanced weighted-averaging method.

In summary, the four described components of the pipeline are able to detect and localize the kilometric milestones encountered on the road and thus place them on the map with sufficient accuracy.

References

[1] Solawetz, J. (2020). YOLOv5 New Version - Improvements And Evaluation. https://blog.roboflow.com/yolov5-improvements-and-evaluation/. Accessed 21.02.2022.

[2] He, K. et al. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

[3] Radford, A. et al. (2021). Learning transferable visual models from natural language supervision. In Proceedings of the International conference on machine learning (pp. 8748-8763).

Use cases & applications
GEOCAT (301)
15:20
5min
Tile-based GIS
Felix Delattre

This short talk is about applying the concept of tiles to store geospatial information in a database and use it efficiently. We'll explore the scenarios where this method is beneficial and demonstrate how to implement it using PostgreSQL/PostGIS and the ST_AsMVT function for data storage, retrieval, and visualization.
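
Assuming the talk refers to PostGIS's ST_AsMVT, a typical tile-producing query looks like the following (the table and column names are illustrative):

```python
def mvt_query(z, x, y):
    """Build the SQL for one Mapbox Vector Tile at tile address (z, x, y),
    using PostGIS's ST_TileEnvelope / ST_AsMVTGeom / ST_AsMVT."""
    return f"""
    WITH bounds AS (SELECT ST_TileEnvelope({z}, {x}, {y}) AS geom),
    mvtgeom AS (
      SELECT ST_AsMVTGeom(ST_Transform(t.geom, 3857), bounds.geom) AS geom,
             t.name
      FROM roads t, bounds
      WHERE ST_Transform(t.geom, 3857) && bounds.geom
    )
    SELECT ST_AsMVT(mvtgeom.*, 'roads') FROM mvtgeom;
    """

print("ST_AsMVT" in mvt_query(14, 8638, 5259))  # -> True
```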

Use cases & applications
QFieldCloud (246)
15:25
15:25
5min
Q-MOKA: A QGIS-Based Application for Managing and Analyzing Traffic Accidents
Stathis Petridis, Antonios Fekas

The presentation will discuss Q-MOKA, a GIS application developed in collaboration with the Hellenic Ministry of Infrastructure and Transportation to manage and analyze traffic accidents along the Greek Primary and Transeuropean Road Network. Q-MOKA addresses the challenges of the existing Traffic Accident Registry, which lacked geospatial referencing and robust data validation. Built around QGIS and Postgres/PostGIS, Q-MOKA offers several key functionalities:

• Network Model: Supports multiple linear referencing systems for accurate accident location representation.

• Dual-Carriageway Management: Ensures consistent linear referencing even during network realignments.

• Flexible Accident Location: Records accidents by X,Y coordinates or route/offset, maintaining synchronization between the two methods.

• Linear and Point Data Management: Allows creation and editing of additional linear and point event data relevant to the Agency (e.g., number of lanes, speed limits).

• Geographic Mapping: Provides route-level accident mapping based on the recording service.

• Custom QGIS Forms: Facilitates data entry and management through user-friendly forms.

The presentation will delve into the technical aspects of Q-MOKA, including its database structure, configuration options, and potential future enhancements. By offering a robust and user-friendly platform for traffic accident management and analysis, Q-MOKA contributes to improved road safety and informed decision-making in Greece.

Keywords: Traffic accident, GIS, QGIS, PostgreSQL, PostGIS, linear referencing

Use cases & applications
GEOCAT (301)
15:25
5min
Web Mapping 101: Getting started using MapTiler SDK!
Jachym, Jonathan Lurie

This talk is for curious minds who want to get started with web mapping, regardless of their experience in GIS!

We will see how to set up a development environment from scratch and explore the coolest features from MapTiler SDK. Then, we will learn how to use them to make better maps that are actually useful and can tell stories.

Our tools: JavaScript, TypeScript, and a bit of CSS, but if you have never heard of those, that's fine too, this talk is beginner-friendly! (and what's better than learning about web programming with an actual project in mind!)

PS: It will be easier if you are already familiar with the basics of programming and some general concepts, even with another language such as Python: what's a variable? what's a function?

State of software
QFieldCloud (246)
15:30
15:30
30min
Coffee
Van46 ring
15:30
30min
Coffee
GEOCAT (301)
15:30
30min
Coffee
LAStools (327)
15:30
30min
Coffee
QFieldCloud (246)
15:30
30min
Coffee
Omicum
16:00
16:00
30min
Add a "map" tag in HTML: MapML developments and support in GeoServer
Andrea Aime, Peter Rushforth

The W3C Maps for HTML Community Group is working to define a new map HTML element that would be used to define map contents in a web page and would be directly supported and rendered by web browsers in a standardized way.
The specification has support for full-screen maps, as well as tiled maps and vector tiles.

The presentation will provide an introduction to the specification, then delve into how the MapML support has been integrated into GeoServer OGC services, with native support for TiledCRSs, as well as tiling and styling.

We’ll conclude by discussing the next evolution in the MapML structure and its GeoServer implementation.

State of software
LAStools (327)
16:00
30min
FOSS4G for policy support in Europe, a case study on water monitoring
Pieter Kempeneers

The Joint Research Centre (JRC) of the European Commission provides independent, evidence-based science and knowledge that supports European Union policies. To facilitate this, the JRC has developed the Big Data Analytics Platform (BDAP), a data platform that is entirely based on free and open-source software (FOSS). It allows data scientists from the JRC to easily access, analyze, view, and reuse scientific data at a petabyte scale. The majority of the hosted data are geospatial data from various domains, including Earth observation imagery from the Copernicus Sentinel missions. Data are automatically downloaded from the Copernicus Data Space Ecosystem, processed, and stored in an open source distributed filesystem (eos). These individual steps are implemented as microservices using Docker Compose. To facilitate data access, an application programming interface (API) was implemented following the SpatioTemporal Asset Catalog (STAC) specification. It exposes collections of spatiotemporal data in a standardized way, which has given rise to an ecosystem of FOSS tools, including pystac and odc-stac. Through simple queries via REST APIs, collections and their individual data items can be queried by geographic location and acquisition time. In addition, the JRC has developed a suite of libraries for geospatial data processing (pyjeo) and for creating data science dashboards (Vois), which were released as FOSS. In this talk, these libraries will be introduced alongside real case studies that illustrate how they were instrumental in providing policy support using reproducible workflows. In particular, a case study on monitoring water on the European continent will be presented. It uses Sentinel-2 satellite imagery to create time series of water masks based on machine learning techniques. A monitoring system is set up by comparing the extent of water for a defined set of water reservoirs over time.
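
A STAC API search request of the kind such a platform exposes is a small JSON document; for example (the collection name, bounding box and dates below are illustrative, not BDAP's actual values):

```python
import json

# Body of a POST /search request against a STAC API endpoint, filtering
# a collection by bounding box and acquisition time window.
search = {
    "collections": ["sentinel-2-l2a"],
    "bbox": [9.0, 45.0, 12.0, 46.5],   # lon/lat extent of the area of interest
    "datetime": "2023-06-01T00:00:00Z/2023-08-31T23:59:59Z",
    "limit": 100,
}
print(json.dumps(search, indent=2))
```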

FOSS4G ‘Made in Europe’
Van46 ring
16:00
30min
Introduction to Vertical Coordinate Systems
Javier Jimenez Shaw

This educational talk will explain why we need a vertical reference for our coordinates, and how we define “up” and “height”; how elevations were measured in the past, how we now use GNSS to do it, and the implications of that; what “the geoid” (a gravitational model of the Earth) is and how it differs from the ellipsoid; and the different types of heights (orthometric, normal, dynamic) and how we use different geoid models. Finally, I will talk about how PROJ.org (an open-source library) supports vertical coordinate reference systems with the grid files available in PROJ-data (open data).
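
The key relation between the height types discussed in the talk is simple (to first order; rigorous treatments account for the curvature of plumb lines):

```python
def orthometric_height(h_ellipsoidal, geoid_undulation):
    """Classic first-order relation between the GNSS (ellipsoidal)
    height h, the geoid undulation N from a geoid model, and the
    orthometric height H: H = h - N."""
    return h_ellipsoidal - geoid_undulation

# e.g. GNSS reports h = 52.3 m and the geoid model gives N = 18.1 m
print(orthometric_height(52.3, 18.1))  # approximately 34.2 m above the geoid
```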

FOSS4G in education and research
QFieldCloud (246)
16:00
30min
Public sharing of semi-automatically detected dead trees in remote sensing images
Jaan Liira

Both forestry and the efforts to delay climate change need forest tree stands to be healthy and to have high growing potential. Even if tree-damaging and tree-killing pests are a natural part of the forest ecosystem, extensive pest outbreaks hamper the ecosystem services expected to be provided by the forest. Therefore, instant and highly detailed awareness of the health status of trees in mature and old-age stands is vital to maintain ecosystem services, to apply timely salvage cuttings that rescue the timber of dead trees and support the local rural economy, and to heal gaps in the damaged forest.
We tested several automated and semi-automated image analysis methods to pinpoint dead trees from high-resolution summer orthophotos combined with an ALS (Aerial Laser Scanning) derived nDSM. Both are open data provided by the Estonian Land Board. Starting with object-based machine learning, we reached the point where simple map algebra was even more efficient, both in detecting dead trees and in computational resources. The methodological testing revealed multiple sources of false-positive observations, and we had to apply various cleaning algorithms to reduce the proportion of biased objects; this removal turned out to be the major task. Finally, a large-scale test object layer was produced for one quarter of the country (16,600 km²), and these results were shared with experts for review. When feedback had been collected, additional algorithms and parameters were tested to improve the results. Only then was the final version published.
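
As an illustration of the kind of map algebra involved, a toy rule combining an nDSM with orthophoto bands might look like this (the thresholds, bands and names are our assumptions, not the published Estonian workflow):

```python
import numpy as np

def dead_tree_candidates(ndsm, red, green, min_height=15.0):
    """Toy map-algebra rule for standing-dead-tree candidates: pixels
    that are tall enough in the nDSM and have low greenness (greyish
    or reddish crowns) in the summer orthophoto."""
    # Simple normalized greenness from two orthophoto bands
    greenness = (green.astype(float) - red) / (green + red + 1e-9)
    return (ndsm >= min_height) & (greenness < 0.0)

# Tiny synthetic rasters: canopy height (m) and orthophoto band values
ndsm = np.array([[20.0, 22.0], [3.0, 25.0]])
red = np.array([[120, 60], [90, 140]])
green = np.array([[110, 130], [100, 100]])
print(dead_tree_candidates(ndsm, red, green))
```
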
The resulting object layer is published in the open-access GIS platform XGIS provided by the Estonian Land Board, which hosts many stakeholder-oriented thematic maps (CountrysideGIS https://xgis.maaamet.ee/xgis2/page/app/maaeluGIS). The specific thematic map also provides many other open-access datasets. For example, we will show how the indicated dead tree locations can be assessed for their current state, and how an outbreak can be assessed using the latest Sentinel-2 images and their derivatives. All combined, this pile of open data will improve forest management, the maintenance of ecological quality, and public awareness of forest processes as a natural ecosystem in Estonia. The experience can surely be transferred to other countries.

Open Data
GEOCAT (301)
16:00
30min
Towards automation of river water surface detection
Stefano Conversi

It is well known that climate change impacts are increasingly affecting European territory, often in the shape of extreme natural events. Among those, in recent years, heat waves due to global warming have contributed to the acceleration of the drying process. In particular, the Mediterranean areas are expected to face extraordinarily hot summers and increasingly frequent drought events, which may clearly affect the population. As a partial confirmation of this forecast, between 2022 and 2023 Southern Europe was affected by lasting drought conditions, which had several impacts on the ecosystems. As an example, in the Po River (the longest Italian water stream) the worst water scarcity of the past two centuries was recorded (Montanari et al., 2023). Experts agreed on the exceptionality of the phenomenon, while nevertheless stating the repeatability of such events in the near future (Bonaldo et al., 2022). To face them, local authorities expressed the need for tools for monitoring the impacts of drought on rivers, so as to be capable of promptly enacting countermeasures.
In this context, the authors partnered with Regione Lombardia to build a procedure aimed at exploiting Copernicus Sentinel-1 (SAR) and Sentinel-2 (optical) sensor fusion for water surface mapping, applied in the case study of the Po River (Conversi et al., 2023) and based on supervised classification of combined optical and SAR imagery. The current work will present an evolution of the proposed methodology, which includes a considerable effort towards the full automation of the process, a necessary step for making it user-friendly for public administration.

The designed procedure, built in Google Earth Engine, is based on the combination of three images, namely the S-1 VV speckle-filtered band (Level 1, GRD) and the spectral indices Sentinel Water Mask and NDWI derived from S-2 (Level 1-C, orthorectified). Input imagery is selected to ensure complete coverage of the area of interest, mosaicking images from different dates if necessary, a reliable assumption considering that drought is usually a slow phenomenon. The interval of time between images is in any case minimized by the code, depending on data quality and availability. Training polygons are drawn by photointerpretation and then fed to a Random Forest-based supervised classifier, together with the three aforementioned images. The outcome of the procedure is a map of the water surface detected over the area of interest, complemented with an estimate of its extent in km². Results are then validated and correlated with hydrometric records coming from the field, which corroborated the overall performance (Conversi et al., 2023).
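
The classification step can be sketched with synthetic per-pixel features (the real procedure runs in Google Earth Engine on Sentinel imagery; scikit-learn and the values below are stand-ins):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each pixel is a feature vector [VV backscatter, SWM, NDWI]; a Random
# Forest trained on labelled samples predicts water (1) / non-water (0).
# The distributions below are synthetic, not actual Sentinel statistics.
rng = np.random.default_rng(1)
water = np.column_stack([rng.normal(-20, 1, 200),    # low VV over calm water
                         rng.normal(0.8, 0.05, 200),
                         rng.normal(0.5, 0.05, 200)])
land = np.column_stack([rng.normal(-8, 1, 200),
                        rng.normal(0.2, 0.05, 200),
                        rng.normal(-0.2, 0.05, 200)])
X = np.vstack([water, land])
y = np.array([1] * 200 + [0] * 200)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Classify one water-like and one land-like pixel
print(clf.predict([[-19.5, 0.75, 0.45], [-9.0, 0.25, -0.15]]))  # -> [1 0]
```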

This paper proposes an advancement in the methodology aimed at enhancing its usability by non-expert users, so as to set the base for the development of a tool that can be exploited by local stakeholders. An efficient automatic extraction of training samples is achieved by randomly extracting the training set of pixels from a binary mask (water/non-water).
This water/non-water mask is derived from the combination of three sub-masks resulting from the automatic thresholding of the input imagery (VV, SWM, NDWI), obtained with the Bmax Otsu algorithm (Markert et al., 2020). The water/non-water mask includes only the pixels that behave consistently across all input images and throughout the reference period.
The thresholding procedure is automated using the Otsu histogram-based algorithm for image segmentation. This methodology defines an optimal threshold value for distinguishing background and foreground objects: the inter-class variance is evaluated, and the value that maximizes it is chosen, thus also maximizing the separability between pixel classes (Otsu, 1979). A modified version of the algorithm, Bmax Otsu, originally developed for water detection with Sentinel-1, was exploited. The Otsu algorithm is particularly effective on images characterized by a bimodal histogram of pixel values, while Bmax Otsu is more suitable in the presence of multiple classes or complex backgrounds (Markert et al., 2020), which is the case for the application presented in this work. Bmax Otsu is based on a checkerboard subdivision of the original image, controlled by user-selected parameters. The maximum normalized Between-Class Variance (BCV) is evaluated in each cell of the checkerboard, and sub-areas characterized by bimodality are selected for applying the Otsu algorithm, thus leading to the target threshold value (Markert et al., 2020).
As mentioned, the outcomes of the Bmax Otsu procedure are exploited to extract random training samples for the machine learning-based classification algorithm. The best classification performance is obtained with a number of pixels that corresponds to 0.02% of the region of interest.
The validation was carried out with respect to another classification of the same area obtained with photo-interpreted training samples (Conversi et al., 2023), showing accuracies of the order of 80-90%. The automated version of the methodology for integrating optical and radar images in mapping river water surfaces thus proved its effectiveness across the several date intervals taken as reference.
Although the automation of the training sample selection slightly decreases the accuracy of the overall result with respect to the original approach, the gain in terms of usability is invaluable. Indeed, eliminating the need for the user to photointerpret imagery and draw polygons to train the classification algorithm represents a relevant step towards a standalone tool to be used by the public administration in real river drought monitoring applications.

Academic track
Omicum
16:30
16:30
30min
A processing pipeline for European official statistics: towards standardisation of Mobile Network Operator data processing
Kadri Arrak

Disclaimer: The views in this abstract are those of the authors and do not necessarily reflect the position of the European Commission (EC) or national statistical institutes.

Abstract:
The European Statistical System (ESS) – the partnership between the EU statistical authority (Eurostat) and national statistical institutes (NSI), and other statistical authorities in the European member states – considers Mobile Network Operator (MNO) data as one of the most promising new data sources for future statistical production. The production of official statistics based on MNO data has the potential to provide considerable societal value. In this context, the ESS emphasises the need for standardised reference methods adhering to the principles of statistical production, such as quality, privacy protection, and transparency.

In line with the ESS Innovation Agenda, following an open call for tenders, in December 2022, Eurostat awarded the service contract “Development, implementation and demonstration of a reference processing pipeline for the future production of official statistics based on Multiple Mobile Network Operator data (TSS multi MNO)”*. The project is a significant milestone towards the future reuse of MNO data for the production of official statistics at EU level. The goal of the project is to develop a complete, open end-to-end processing pipeline that should serve as a starting point towards the regular production of future official statistics based on MNO data Europe-wide. This “processing pipeline” encompasses a combination of a fully documented open methodological and quality framework, plus the implementation of a reference open-source software pipeline compliant with the said framework. The processing pipeline will be demonstrated across data from multiple MNOs. If successful, the reference pipeline developed by the project will be proposed for adoption by the ESS as a methodological standard.
The project is being implemented by a consortium providing extensive experience from both the business and the official statistics domains. The consortium is composed of GOPA Worldwide Consultants GmbH (DE) - lead, Nommon Solutions and Technologies SL (ES), Positium OÜ (EE), Statistics Netherlands (NL) and the Italian Statistical Institute (IT). Additionally, five European MNOs from four distinct countries will be involved in the pipeline testing.
This collaborative endeavour aligns with the European Data Strategy's goal of providing comparable and reliable statistics across European countries. The project addresses the challenge of providing open and standardised methodologies for official statistics without hampering the development of future private initiatives nor the continuation of the range of analytic products based on MNO data that have been developed and commercialised by mobile operators or other third-party entities for purposes other than European official statistics.
While the project is financed by Eurostat (the EU statistical office), its ultimate success will depend on the potential endorsement of the project result by the larger ESS community (integrating all EU statistical offices and other national authorities). It is expected that this will have positive implications for future activities and may serve as a model that can be replicated in other domains, along with seeking closer collaboration with industry or business partners, more in general, in the context of initiating or strengthening co-development undertakings for the production of official statistics.
This contribution will focus on the presentation of the overall pipeline architecture and the description of an initial version of the processing pipeline. The architecture design will adhere to the highest technical requirements and methodological soundness. The proposed pipeline considers the division between data processing at the MNO environments and additional processing steps at the NSI or other parties.
The software will be divided into modules for (1) the processing of disaggregated data exclusively within each MNO's secured environment, and (2) the post-processing of aggregated and anonymous data at national statistical offices. The latter is particularly relevant since the post-processing will be performed on aggregated data after the application of statistical procedures, such as Statistical Disclosure Control (SDC), that ensure individual data cannot be traced back. Comprehensive documentation, including functionality, implementation details, and usage instructions, will accompany the software. Reference test data, consisting of synthetic or semi-synthetic samples, will be created for each software module to ensure reproducibility and to ease the development of alternative but fully compliant software implementations by independent entities.
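As a purely schematic illustration of this two-stage split (an assumption for exposition, not the project's actual methodology), the MNO-side module might aggregate device-level events into zone counts, with a simple suppression threshold standing in for the far more elaborate SDC procedures:

```python
def aggregate_zone_counts(events, min_count=10):
    """Aggregate device-level events into per-zone device counts, suppressing
    zones below `min_count` (a toy stand-in for Statistical Disclosure Control).

    `events` is an iterable of (device_id, zone) pairs; only the suppressed,
    aggregated counts would ever leave the MNO's secured environment.
    """
    devices_per_zone = {}
    for device_id, zone in events:
        devices_per_zone.setdefault(zone, set()).add(device_id)
    return {zone: len(ids) for zone, ids in devices_per_zone.items()
            if len(ids) >= min_count}
```

Downstream NSI modules would then operate only on outputs of this kind, never on the disaggregated records themselves.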
The entire open-source pipeline, including the codes and related documentation, as well as the methodological framework, will be openly published. The software codes will be published under an EUPL license, promoting transparency and accessibility, facilitating the replication and adoption of the developed software solutions, and encouraging collaboration and further advancements in the field of statistical production. The reference implementation of the pipeline will be public, and results will be communicated to interested audiences through public official channels.

FOSS4G ‘Made in Europe’
Van46 ring
16:30
30min
Comparing spatial patterns in raster data using R
Jakub Nowosad

Spatial pattern is an inherent property visible in many spatial variables. Spatial patterns are often at the heart of many geographical studies, where we search for existing hot spots, correlations, and outliers. They may be exhibited in various forms, depending on the type of data and the underlying processes that generated the data. Here, we will focus on spatial patterns in spatial rasters, but the concept can be extended to other types of spatial data, including vector data and point clouds.

Patterns in spatial raster data may have many forms. We may think of spatial patterns for continuous rasters as an interplay between intensity and spatial autocorrelation (e.g., elevation) or between composition and configuration for categorical rasters (e.g., land cover) (Gustafson, 1998). Intensity relates to the range and distribution of values of a given variable, while spatial autocorrelation is a tendency for nearby values of a given variable to be more similar than those that are further apart. On the other hand, composition is the number of cells belonging to each map category, while configuration represents their spatial arrangement. Another distinction is between the data dimensionality. The most common situation is when we only use one layer of given data (e.g., an elevation map or a land cover product for one year). However, we may also be interested in sets of variables (layers, bands), such as hyperspectral data, time series, or proportions of classes. An additional special case is the RGB representation of the data.

Assessing the similarity of spatial patterns is a common task in many fields, including remote sensing, ecology, and geology. This procedure may encapsulate many types of comparisons: comparing the same variable(s) for different areas, comparing different datasets (e.g., different sensors), or comparing the same area but at different times.

Given the variety of possible scientific questions and the plethora of forms of spatial data, there is no universal method for assessing similarity between two spatial patterns. The basic method is visual inspection; however, it is highly subjective, from the perspective of both the observer and the visualization type. Other fairly straightforward approaches are to create a difference map, count changed pixels, or look at the distribution of the values. More advanced methods include the use of machine learning algorithms. However, these methods are often complex, require a lot of data, and are not always interpretable. An alternative and general approach, inspired by content-based image retrieval (Kato, 1992), is to use spatial signatures to represent spatial patterns and dissimilarity measures to compare them (Jasiewicz and Stepinski, 2013).

A spatial signature is any numerical representation (compression) of a spatial pattern. For a categorical raster, it can be a co-occurrence vector of classes in a local window, while for a time series, it may be a vector of values in a given cell. Then, having spatial signatures for both areas (sensors, moments), we can compare them using a dissimilarity measure (e.g., Euclidean distance, cosine similarity, etc.) (Cha, 2007). This approach can compare complex, multidimensional spatial patterns, but at the same time, it gives some degree of interpretability. It can also be further applied to many techniques of spatial data analysis, including spatial clustering (to find groups of areas with similar spatial patterns) and segmentation (to create regions with similar spatial patterns).
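As a toy illustration of the signature-plus-distance idea (my own sketch, not the 'motif' implementation), a categorical raster can be compressed into a normalized co-occurrence vector of rook-adjacent class pairs, and two rasters compared by the Euclidean distance between their vectors:

```python
import math
from collections import Counter

def cooccurrence_signature(raster):
    """Normalized counts of unordered class pairs in horizontally/vertically
    adjacent cells of a categorical raster (list of lists)."""
    pairs = Counter()
    rows, cols = len(raster), len(raster[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # horizontal neighbor
                pairs[tuple(sorted((raster[r][c], raster[r][c + 1])))] += 1
            if r + 1 < rows:  # vertical neighbor
                pairs[tuple(sorted((raster[r][c], raster[r + 1][c])))] += 1
    total = sum(pairs.values())
    return {pair: n / total for pair, n in pairs.items()}

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two signatures (dicts of pair -> share)."""
    keys = set(sig_a) | set(sig_b)
    return math.sqrt(sum((sig_a.get(k, 0.0) - sig_b.get(k, 0.0)) ** 2
                         for k in keys))
```

Identical patterns yield distance 0, and the measure grows as the spatial arrangement of classes diverges, which is what makes such signatures usable for clustering and segmentation as described above.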

While the concept of applying spatial signatures and dissimilarity measures is powerful, many issues and questions remain unresolved. These include the scale of comparison; the resolution, dimensionality, and type of the input data; the spatial signatures used; and the dissimilarity metrics selected. There is still a lack of studies that systematically compare different methods of assessing similarity between spatial patterns or suggest good practices for their use. At the same time, a growing number of FOSS tools allow us to test various methods and apply them to real-life scenarios.

The goal of this work is to provide an overview of existing R packages for comparing spatial patterns. These include ‘motif’ (for comparing spatial signatures for categorical rasters; Nowosad, 2021), ‘spquery’ (allowing for comparing spatial signatures for continuous rasters), and ‘supercells’ (for segmentation of various types of spatial rasters based on their patterns; Nowosad and Stepinski, 2022). It will show how they can be applied in real-life cases and what their limitations are. This work also aims to open a discussion about the methods for assessing similarity between spatial patterns and their FOSS implementations.

References

Cha, S-H. (2007). Comprehensive Survey on Distance/Similarity Measures Between Probability Density Functions. Int. J. Math. Model. Meth. Appl. Sci.

Gustafson, E.J. (1998) Quantifying landscape spatial pattern: what is the state of the art? Ecosystems

Jasiewicz, J., & Stepinski, T. F. (2013). Example-Based Retrieval of Alike Land-Cover Scenes From NLCD2006 Database. IEEE Geoscience and Remote Sensing Letters, https://doi.org/10.1109/lgrs.2012.2196019

Kato, T. (1992) Database architecture for content-based image retrieval, Image Storage and Retrieval Systems, https://doi.org/10.1117/12.58497

Nowosad, J. (2021). Motif: an open-source R tool for pattern-based spatial analysis. Landscape Ecology, https://doi.org/10.1007/s10980-020-01135-0

Nowosad, J., & Stepinski, T. F. (2022). Extended SLIC superpixels algorithm for applications to non-imagery geospatial rasters. International Journal of Applied Earth Observation and Geoinformation, https://doi.org/10.1016/j.jag.2022.102935

Academic track
Omicum
16:30
30min
Effortless GIS, CAD & BIM data exchange with Speckle
Kateryna Konieva

Every AEC professional has faced difficulties in transferring data between QGIS, Rhino, Revit, Grasshopper, and other platforms. Imagine if you could do it all with just one click! Speckle is an open-source platform that simplifies data and model exchange between urban design, architecture, and engineering software, fostering collaboration and automation.

In this presentation, we will share simple workflows that you can implement with our QGIS plugin to unlock the power of your GIS data, including various publicly available sources.

We will discuss different methods to align GIS, CAD, and BIM data, including helpful on-the-fly transformations. This will simplify the technical challenges posed, in particular, by switching between global and local coordinate systems, and between 2D and 3D representations.

State of software
QFieldCloud (246)
16:30
30min
G3W-SUITE and QGIS Processing API integration: your geographic analysis models on the web
Walter Lorenzetti

The integration between G3W-SUITE and QGIS extends, with the latest release, to the APIs relating to the QGIS Processing module, allowing the use of geographic analysis models, created in QGIS, in a Web environment.

G3W-SUITE is a modular, client-server application (based on QGIS-Server) for managing and publishing interactive QGIS cartographic projects of various kinds in a totally independent, simple and fast way.

The framework is characterized by strong integration with the QGIS API in relation to numerous aspects: project management, data access, editing and much more.

A specific development concerns the integration with the QGIS Processing API in order to migrate the analysis models, created in QGIS via the ModelDesigner, to a web environment.

The new module is dedicated to the creation of geographic analyses on the web and is based on the automated analysis models prepared in QGIS through the Processing ModelDesigner.

During the presentation, both the interaction with the APIs and the workflow for associating the analysis models with the published WebGis services and using them on the web will be described.

Finally, the limits of the current integration and future developments dedicated to simplifying the creation of personalized geographic analyses on web maps will be described.

State of software
LAStools (327)
16:30
30min
dsm2dtm: Generate DTM from DSM for free!
Naman, Rajat Goel

Digital Surface Models (DSMs) are a valuable geospatial data source, but to analyze underlying terrain, Digital Terrain Models (DTMs) are essential. This presentation showcases dsm2dtm, an open-source tool automating the process of generating DTMs from DSMs.

We will cover:
- Introduction: Importance of DTMs and challenges in DTM generation workflows
- Overview of dsm2dtm: Walk through the code and see the core functionality
- Demonstration: How to use it
- Use cases: Some real world applications
- Contribution: How can you contribute

In the meantime, check out dsm2dtm here - https://github.com/seedlit/dsm2dtm

Use cases & applications
GEOCAT (301)
17:00
17:00
30min
Good old Europe and (geospatial) open source software. Outlook for 2024+
Vasile Crăciunescu

Europe's journey through the open-source domain is a narrative woven with contradictions, where ambitious policy frameworks and groundbreaking initiatives often clash with the realities of implementation and cultural resistance. This presentation embarks on an exploration of this land of paradoxes, where the drive for innovation in geospatial technologies meets the inertia of traditional practices. Amidst this backdrop, the EU's legislative endeavours, including Directive (EU) 2019/1024 on open data and the nascent Interoperable Europe Act, emerge as double-edged swords—championing progress yet ensnared by bureaucratic complexities.

With a discerning eye, we delve into the tangle of drivers and barriers shaping the adoption and development of open-source geospatial software within Europe. From the lofty aspirations of the European Green Deal to the pragmatic challenges posed by the Cyber Resilience Act, the presentation unpacks how the continent's policy landscape is moulding the ecosystem for open-source innovations. Yet, Europe's strength lies in its ability to navigate through its own contradictions. The Copernicus Programme, INSPIRE, and Destination Earth stand as testaments to Europe's commitment to open data and science, even as the spectre of the war in Ukraine casts long shadows over cybersecurity and data & software sovereignty concerns.

This dialogue extends to the technological frontiers of cloud migration, generative AI in geospatial realms, and the FAIR data principles, each reflecting the continent's struggle and success in marrying tradition with innovation. Europe's path is fraught with contradictions, yet therein lies its potential for equilibrium—finding balance amidst discord, innovation in the face of adversity.

FOSS4G ‘Made in Europe’
Van46 ring
17:00
30min
LUMI supercomputer for spatial data analysis, especially deep learning
Külli Ek

Geoinformatics applications often involve analyzing high volumes of data, which may require a lot of time and computing resources. Many of these applications can benefit from high performance computing resources (supercomputers) to speed up the computation, or even to make it possible: more memory, storage space, available tools, and a computing system suitable for handling big data. Supercomputers also provide more processing units (CPUs and GPUs) than an average research computer, which are essential for efficient computational analysis. Deep learning applications in particular benefit from the use of one or multiple GPUs.

One of these supercomputers is LUMI, provided by the EuroHPC Joint Undertaking and a consortium of 10 European countries. LUMI is particularly well suited for large-scale modeling and deep learning applications. The LUMI supercomputer is available free of charge to European academic researchers, and to companies and public organizations for open R&D purposes.

Compared to commercial computing options, where technical support is rather limited, CSC and the LUMI partners offer case-by-case support for projects. A wide range of courses is also provided for getting familiar with supercomputers.

This presentation aims to introduce the audience to supercomputing for geoinformatics tasks as well as the benefits and challenges that a move to the supercomputer may introduce for researchers and companies. It will also highlight some of the recent use cases from geoinformatics.

FOSS4G in education and research
GEOCAT (301)
17:00
30min
Manage GeoServer configuration with Terraform
Alexandre Gacon

This presentation shows how to manage GeoServer configuration with a custom-made Terraform provider. It will focus on the different resources available in the provider and the updates made since the last FOSS4G. Different use cases will be explained to show how you can use Terraform capabilities.
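As a hedged sketch of what such a configuration might look like (the resource and attribute names below are illustrative assumptions, not the provider's confirmed schema; check its documentation for the actual one):

```hcl
# Illustrative only: resource/attribute names are assumptions,
# not the provider's confirmed schema.
resource "geoserver_workspace" "demo" {
  name = "demo"
}

resource "geoserver_datastore" "postgis" {
  workspace_name = geoserver_workspace.demo.name
  name           = "postgis_store"
  # PostGIS connection parameters would be declared here
}
```

Run through the usual `terraform plan` / `terraform apply` cycle, GeoServer configuration can then be versioned and reviewed like any other infrastructure code.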

State of software
LAStools (327)
17:00
30min
WGS 84: I don't know, I don't care.
Javier Jimenez Shaw

WGS 84 (EPSG:4326) is the most commonly used Coordinate Reference System.
It is the default in QGIS, the one used by OpenStreetMap, and what many people have in mind when talking about latitude-longitude (or even about projected coordinates!).
However, in many cases, users are not aware of the accuracy of coordinates in this system.

Nowadays, with more affordable RTK devices (or PPK post-processing), people expect amazing accuracies (2 cm!), but forget that the reference system must sustain that accuracy.
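As a back-of-the-envelope illustration (my own, not from the talk): converting that 2 cm into an angle shows how demanding centimetre-level work is for a coordinate reference system.

```python
# One degree of latitude spans roughly 111.32 km (approximate; it varies
# slightly with latitude). This constant is for illustration only.
METERS_PER_DEGREE_LAT = 111_320

def meters_to_degrees_lat(meters):
    """Convert a ground distance to the equivalent latitude difference."""
    return meters / METERS_PER_DEGREE_LAT

# 2 cm of RTK accuracy corresponds to well under a microdegree,
# so the choice (and epoch) of the reference frame genuinely matters.
print(f"{meters_to_degrees_lat(0.02):.1e} degrees")
```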

In this talk I will explain what WGS 84 is and what it is not, when it is not a good idea to use it, and why we should be suspicious about data labelled as WGS 84, as well as why people using it don't know, or don't care, about those problems (with or without a good reason).
I will also talk about the pros and cons of such a CRS. For sure, not everything is bad.

FOSS4G in education and research
QFieldCloud (246)
17:30
17:30
30min
A critical analysis of the CRA
ivansanchez

The EU Commission is introducing the "Cyber Resilience Act", a legislation aimed at improving the security aspects of software.

This talk is an overview and critique of the CRA: how it has been developed, what it aims for, what some likely outcome scenarios are, and how all of this affects the OSGeo Foundation and its obligations as an "Open Source Steward".

FOSS4G ‘Made in Europe’
Van46 ring
17:30
30min
Building the modern GIS Web Stack
Jashanpreet Singh

Using the following technologies:

  • Database - PostgreSQL + PostGIS
  • Backend - Python + FastAPI
  • Frontend - React + MapLibre
  • Deployment - Docker

We will delve into the cutting-edge landscape of Geographic Information Systems (GIS) through the lens of a modern custom web stack. The backbone of our system lies in the backend, where Python's FastAPI takes center stage, providing a seamless and efficient foundation for handling geospatial data. From routing to authentication, FastAPI ensures optimal performance.

On the frontend, we embrace the power of React, creating a dynamic and interactive user interface. MapLibre, an open-source mapping library, is our choice for rendering stunning maps, delivering a captivating user experience. The combination of React and MapLibre transforms data into meaningful visualizations, making complex geospatial information easily accessible, and the stack can be extended seamlessly to native mobile apps as well.

The heart of our GIS web stack is the robust database system, featuring PostgreSQL and its spatial extension, PostGIS. This powerful combination allows for efficient storage, retrieval, and analysis of geospatial data, unleashing the full potential of location-based insights. We will explore the rich ecosystem surrounding PostgreSQL and PostGIS, with extensions like MobilityDB and Uber's H3 showcasing the versatility and extensibility they bring to the table.

Our entire GIS web stack is encapsulated in Docker containers to ensure seamless deployment and scalability. This containerization facilitates deployment on any cloud platform, providing flexibility and ease of management. We will guide you through the Dockerization process, empowering you to deploy your custom GIS solution effortlessly in the cloud.
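As a minimal sketch of how such a stack can be containerized (image tags, ports, and build paths here are illustrative assumptions, not the presenter's actual setup):

```yaml
# docker-compose.yml -- illustrative sketch of the described stack
services:
  db:
    image: postgis/postgis:16-3.4     # PostgreSQL + PostGIS
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - dbdata:/var/lib/postgresql/data
  api:
    build: ./backend                  # FastAPI app (e.g. served by uvicorn)
    depends_on: [db]
    ports: ["8000:8000"]
  web:
    build: ./frontend                 # React + MapLibre single-page app
    ports: ["3000:80"]
volumes:
  dbdata:
```

With a single `docker compose up`, the same definition runs locally or on any cloud platform that supports containers.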

Use cases & applications
GEOCAT (301)
17:30
5min
Geospatial Go
Jakob Miksch

The programming language Go has established itself in various IT areas. This lightning talk offers a brief overview of Go with a focus on the existing ecosystem for processing geodata.

Go is known for its speed and accessibility. Numerous geospatial projects like pg_featureserv, pg_tileserv, and tegola already make use of Go. This presentation showcases additional tools and libraries in this language.

Community & Foundation
LAStools (327)
17:30
30min
Towards better data platforms with semantic metadata
Olivia Guyot, Florent Gravin

For data platforms, where thousands of datasets are stored and documented, interoperability is essential. These platforms often gather records from external sources, and they above all want to make their own data widely exploitable.

In the world of INSPIRE and geospatial data, rigid XML standards have been the foundation of interoperability for many years. This is now changing as we notice a strong push towards another kind of standards: semantic metadata.

DCAT is a very good example: at its core, it is a list of concepts and relations that can be used to describe multiple collections of datasets. Because it does not impose a formal way to set up those concepts, metadata expressed in DCAT can have many different forms.

This trend imposes great challenges to long-standing solutions such as GeoNetwork, which are built on strictly-structured XML formats.

In this talk we will showcase a promising approach made by leveraging the versatility of the GeoNetwork-UI toolkit, a sister project of GeoNetwork built using modern technologies. GeoNetwork-UI has a different way of reading and outputting metadata, and implementing a semantic-capable module opens up many new and exciting perspectives: wider interoperability outside of the geospatial ecosystem, description of relationships between resources across the network, better indexation of the catalog content by search engines etc.

This talk will first give a general overview of the changes ongoing in the INSPIRE ecosystem and the push for new interoperability standards, and then showcase the existing implementation in GeoNetwork-UI and what it is capable of.

Please keep in mind that the talk will be quite technical, and that the word "metadata" will be pronounced more than once! ;)

Looking forward to seeing you there!

Open standards and interoperability for geospatial
QFieldCloud (246)
17:35
17:35
5min
QGIS eesti - Creating a community.
Chris Nichterlein

"Building up a community has never been so easy. Just find some like minded people and things will start to roll." - Nobody ever said that.

Apparently, there is a lot more to it, and we are absolute newcomers when it comes to that too. Join us for a bumpy little ride about how the idea of setting up a user group was born, how it is going, the lessons learned, and where it's heading.

The idea of setting up a QGIS user group in Estonia, the first ever in the Baltics, was born in the aftermath of the 2022 Baltic GIT conference in Tallinn. Randomly poking people and asking "Do you know any QGIS user group in Estonia or the Baltics?" always got the same reaction: "Nope, sorry", sometimes followed by "...it would be cool to have one!". Well, some water has flowed down the Emajõgi since then, but here we are now!

As a user group, we want to bring together people who share the same enthusiasm, interest, and experience around GIS and its open source solutions. We come together, chill and talk about GIS, how we approach the challenges in our workplaces, what we have learned, and maybe also what went wrong.

Community & Foundation
LAStools (327)
17:40
17:40
5min
Lounaistieto – a unique regional GI network and service
Maiju Kähärä

Lounaistieto is a regional information service in Southwest Finland, representing a unique combination of regional network builder and data service provider. Over time, our key strategy has centered on sustained adaptability through ongoing development. This approach has not only contributed to our success but has also enabled us to navigate the dynamic landscape of the continuously evolving ICT sector, shifting local requirements, and national development priorities. The lightness and agility of Lounaistieto's operations give us the capability to customize solutions that resonate with local nuances. This flexibility encourages even smaller stakeholders to participate. Furthermore, regional data is often appreciated in its own right, complementing the wealth of information derived from national or global sources.

The Lounaistieto GI cooperation network was established back in 2002 and has been promoting the use of geographic information ever since. In recent years, open data – also other than spatial – has become increasingly significant. We maintain a regional development monitoring service based on statistical data, an Oskari-based map service, and open data services, among others. Lounaistieto's services cater to a diverse audience, including authorities, decision-makers, researchers, residents, organizations, media, and businesses.

The main goals of Lounaistieto are threefold: strengthen the cooperation, skills and networking in open data and geographic information; endorse the quality, interoperability and availability of open data; and offer high value information services in Southwest Finland. We provide data services and actively work on projects to make geospatial and data services more open and interoperable.

We receive support from regional and national geographic information and open data networks, which help us to develop our activities and services. Nowadays Lounaistieto is also a partner of Location Innovation Hub. European Digital Innovation Hubs (EDIHs) are part of the EU’s new Digital Europe Programme, focusing on enhancing digital investment, especially in the digitalization of SMEs, but also the public sector. As a partner of Location Innovation Hub, Lounaistieto is at the forefront of advancing the latest innovations in the spatial data field.

Community & Foundation
LAStools (327)
17:45
17:45
5min
Creating GIS REST APIs using GeoDjango in under 30 minutes
krishna lodha

We're living in the world of APIs, and CRUD operations are at the base of a lot of them. Many smart frameworks such as Django, Flask, and Laravel provide out-of-the-box solutions to filter data, covering almost all needs for separating data based on column values.
When it comes to geospatial data, we expect to filter data based on its location property instead of its metadata. This is where things get complicated: if you are using a framework that doesn't have a package or library built to handle such use cases, you are likely to depend on either the database or an external package to handle it.

Fortunately, GeoDjango (https://docs.djangoproject.com/en/4.0/ref/contrib/gis/), Django's geospatial extension, allows us to create databases that understand geometry and can process it (https://docs.djangoproject.com/en/4.0/ref/contrib/gis/geoquerysets/#gis-queryset-api-reference). It also supports writing APIs via the REST framework extension (https://pypi.org/project/djangorestframework-gis/), which takes this to the next level, allowing users to output data in various formats, paginate GeoJSON, create TMSTileFilters, etc.

In this talk we'll scratch the surface of this Python package and see how to build basic CRUD APIs to push and pull GIS data to and from a PostgreSQL database, along with filtering it.
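For a flavour of what this looks like in practice, here is a condensed sketch using GeoDjango and django-rest-framework-gis (the model and field names are hypothetical, and the fragment needs a configured Django project with GeoDjango enabled to run):

```python
# models.py -- hypothetical model with a geometry column
from django.contrib.gis.db import models

class Shop(models.Model):
    name = models.CharField(max_length=100)
    location = models.PointField()

# serializers.py -- serializes querysets as GeoJSON FeatureCollections
from rest_framework_gis.serializers import GeoFeatureModelSerializer

class ShopSerializer(GeoFeatureModelSerializer):
    class Meta:
        model = Shop
        geo_field = "location"
        fields = ("id", "name")

# views.py -- spatial filtering, e.g. /shops/?in_bbox=minx,miny,maxx,maxy
from rest_framework import viewsets
from rest_framework_gis.filters import InBBoxFilter

class ShopViewSet(viewsets.ModelViewSet):
    queryset = Shop.objects.all()
    serializer_class = ShopSerializer
    bbox_filter_field = "location"
    filter_backends = (InBBoxFilter,)
```

A router wired to `ShopViewSet` then exposes full CRUD plus bounding-box filtering with essentially no extra code.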

State of software
LAStools (327)
17:50
17:50
5min
GeoNode at work: how do I do this, how do I do that?
Giovanni Allegri, Emanuele Tajariol

GeoSolutions has been involved in a number of projects, ranging from local administrations to global institutions, involving GeoNode deployments, customizations, and enhancements. A gallery of projects and use cases will showcase the versatility and effectiveness of GeoNode, both as a standalone application and as a service component, for building secured geodata catalogs and web mapping services, dashboards, and geostories. In particular, the recent advancements in data ingestion and harvesting workflows will be presented, along with the many ways to expose its secured services to third-party clients. Examples of GeoNode's built-in capabilities for extending and customizing its frontend application will be showcased.

Use cases & applications
LAStools (327)
18:00
18:00
60min
BoF
GEOCAT (301)
18:00
60min
BoF
LAStools (327)
18:00
60min
BoF
QFieldCloud (246)
18:00
60min
European Cyber Resilience Act: status and impact

This is a Birds of a Feather session for discussing the European Cyber Resilience Act (CRA).

Details on OSGeo wiki

Birds of a Feather (BoF)
Van46 ring
10:00
10:00
30min
Coffee
Van46 ring
10:00
30min
Coffee
GEOCAT (301)
10:00
30min
Coffee
LAStools (327)
10:00
30min
Coffee
QFieldCloud (246)
10:00
30min
Coffee
Omicum
10:30
10:30
30min
Crowdmapping That Works
Ilya Zverev

Most geographers look to OpenStreetMap for data. It is indeed unique; many attempts at duplicating the idea have failed. At its core is crowdmapping: getting regular people to improve the map. And when the data they need isn't there, geographers look into crowdmapping too. Tempting idea, isn't it: engage a crowd to collect the data you need, without spending a cent on training and salaries.

Have them walk around and collect building entrances for you! Why not have cyclists review cycling lanes? Everybody has a phone, so let them measure signal quality all over Estonia. We don't have yellow pages, but surely people could help build a POI dataset? At least pointing at things on a map should be easy and draw in a crowd?

We have seen many businesses toying with this, and many not-for-profit projects. Most fail. The Humanitarian OpenStreetMap Team is still unmatched. Why does that happen? How do we ensure the data collected can be trusted? How do we get people to stay with us, and not leave after a few clicks?

In this talk we'll look at a few past crowdmapping projects, learn what went well and what didn't, and derive a few pointers on how to get the data we need out of thin air (and people we don't know).

Use cases & applications
QFieldCloud (246)
10:30
30min
Leave no one behind - UNDP GeoHub. Spatial data visualization and analytics for policy making
Jin Igarashi

United Nations Development Programme (UNDP) is a United Nations agency tasked with helping countries eliminate poverty and achieve sustainable economic growth and human development.

Recent advances in technology and information management have resulted in large quantities of data being available to support improved data driven decision making across the organization. In this context, UNDP has developed a corporate data strategy to accelerate its transformation into a data-driven organization. Geo-spatial data is included in this strategy and plays an important role in the organization. However, the large scale adoption and integration of geo-spatial data was obstructed in the past by issues related to data accessibility (silos located in various country offices), interoperability as well as sub-optimal hard and soft infrastructure or know-how.
All these issues have been addressed recently, as the UNDP SDG integration team started developing GeoHub to provide geospatial data visualization and analytical tools to UNDP staff and policymakers.

UNDP GeoHub is a repository with a wide array of datasets, the most recent time spans available at your fingertips! It is a centralized ecosystem of geospatial data and services to support development work. It allows non-geospatial users to upload, search and visualize datasets, compute dynamic statistics and download the data. In addition, GeoHub makes it easy to create maps and share them with the community. Satellite imagery, spatial-temporal model-generated data, as well as regular spatial datasets can be streamed into various analytical tools to create new insights leveraged by policymakers and regular users.

The GeoHub ecosystem consists of SvelteKit and MapLibre based frontend web applications and various FOSS4G software on the backend. PostgreSQL/PostGIS, titiler and pg_tileserv are deployed in Azure Kubernetes Service (AKS) to provide advanced visualization and analysis for users. All source code is published in a GitHub repository under an open-source license.

We presented GeoHub at FOSS4G 2023 Prizren. Here, we will present the latest state of GeoHub, as we have since developed many new features and UI/UX improvements. The major changes since Kosovo are as follows:
- Introduced a landing page to direct users to each page of GeoHub
- Implemented social login (GitHub authentication and UN B2C authentication)
- Data upload pipeline
- User permission management for datasets and maps
- UI improvement for data visualization
- Raster analytical features (titiler algorithms)
- Static image API (generate static map image from a saved Maplibre style)

Use cases & applications
Omicum
10:30
30min
State of OGC APIs
Joana Simoes

OGC APIs are a family of modern standards which bring interoperability to anyone who wants to share geospatial data using mainstream web technologies (e.g. REST, JSON, HTML). Some of these APIs replace and extend the legacy OGC Web Services (OWS), such as WFS or WMS.
In this talk, we’ll highlight the state of OGC APIs and their current roadmap. We’ll also look at the adoption of these APIs within OSGeo projects and discuss compliance and certification. Finally, we will point out some resources, available to anyone who wants to develop and validate OGC API implementations.
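Because OGC API - Features responses are plain GeoJSON over HTTP, they can be consumed without any dedicated client library. A minimal sketch, using a trimmed, hypothetical items response (the collection, feature values and next link are all made up for illustration):

```python
import json

# A trimmed, hypothetical response from an OGC API - Features endpoint,
# e.g. GET /collections/lakes/items?limit=2 (collection name is invented).
items_response = """
{
  "type": "FeatureCollection",
  "numberMatched": 25,
  "numberReturned": 2,
  "features": [
    {"type": "Feature", "id": "1",
     "geometry": {"type": "Point", "coordinates": [26.72, 58.38]},
     "properties": {"name": "Lake A"}},
    {"type": "Feature", "id": "2",
     "geometry": {"type": "Point", "coordinates": [24.75, 59.44]},
     "properties": {"name": "Lake B"}}
  ],
  "links": [
    {"rel": "next", "href": "https://example.com/collections/lakes/items?offset=2"}
  ]
}
"""

data = json.loads(items_response)
# Features arrive as ordinary GeoJSON, so stdlib json is enough to read them.
names = [f["properties"]["name"] for f in data["features"]]
# Paging is link-driven: keep following the "next" link until it disappears.
next_link = next((l["href"] for l in data["links"] if l["rel"] == "next"), None)
print(names, data["numberMatched"], next_link)
```

This link-following pattern is the same across all OGC APIs, which is what makes generic clients possible.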

Open standards and interoperability for geospatial
Van46 ring
10:30
30min
State of Oskari (for end-users)
Sini Pöytäniemi

Oskari is a beautiful open source map framework which is based on the idea of creating map applications utilizing distributed Spatial Data Infrastructures, i.e. standardized map APIs such as WMS and OGC API Features. Publishing customized maps with Oskari and embedding them on a website is easy. Using Oskari as an administrator or an end-user doesn’t require any programming skills. Oskari is used by dozens of different organisations, mainly in the public sector in Finland, to create hundreds of web map applications.

Development and improvement of Oskari is continuous. New features include improvements for mobile use, a new look and feel in map publishing, thematic statistical maps and other smaller improvements. In addition, the website for the Oskari community (oskari.org) has been completely renewed. And last but not least, the Oskari logo has received a fresh new design: we are happy to introduce you to the new Oskari Otter!

In this presentation we will share practical examples of how to use Oskari. Our target audience is current or future administrators and end users of Oskari instances. Whether you are new to Oskari or already know it, you are welcome to listen and discuss how you could get the best out of Oskari.

Learn more from our new website oskari.org.

State of software
LAStools (327)
10:30
30min
Tartu: Pioneering the Future of Self-Driving Technology with Open Source and Open Data
Tambet Matiisen

Self-driving vehicles promise to revolutionize transportation, making it safer and more affordable. While driverless taxis are a reality in San Francisco, their global expansion presents significant challenges. Tartu, Estonia, is rising to meet these challenges, aiming to become Europe's premier testing ground for autonomous vehicles. This ambitious project is not without its hurdles, encompassing a range of legal and technological complexities. Crucially, open-source software and open data are at the forefront of overcoming these challenges.

Estonia's unique position makes Tartu an ideal candidate for establishing an international self-driving vehicle testing center. The country offers the opportunity to test in diverse seasonal conditions, a feature absent in regions like California. Estonia has also shown agility in adapting legislation to safely permit the testing of autonomous vehicles on its streets. Furthermore, the nation boasts a dynamic ecosystem of companies specializing in autonomous technology, including Starship, AuveTech, Clevon, and Milrem Robotics. The University of Tartu's Autonomous Driving Lab serves as a central hub for self-driving technology research and development.

Our vision for Tartu includes several key components:

  1. Designated testing zones for autonomous vehicles, encompassing both specialized closed areas and marked public city spaces.
  2. A comprehensive high-definition map of Tartu, featuring a detailed spatial point cloud and lane-level road network.
  3. A digital twin, or simulation, of Tartu, facilitating pre-arrival testing.
  4. Machine-readable traffic lights throughout Tartu, enhancing autonomous system safety beyond traditional light signals.

Open-source tools like QGIS, Shapely, Blender, and the CARLA autonomous driving simulator, alongside open datasets from the Estonian Land Board and the City of Tartu, have been instrumental in achieving the high precision required for our high-definition map and digital twin. These resources have been vital in realizing our vision with decimeter-level accuracy.

In this talk, I will provide an overview of how Tartu leverages open-source software and open data in developing the high-definition map and digital twin, key components in our journey to become a leader in self-driving technology.

Use cases & applications
GEOCAT (301)
11:00
11:00
30min
Bridging Worlds: Integration of Wikidata and OpenStreetMap
Edward Betts

Discover the synergy between Wikidata and OpenStreetMap, two monumental open data repositories. This talk unveils innovative web-based tools facilitating the linking of these platforms, enhancing the richness and accuracy of geospatial data.

OpenStreetMap's editors face a unique challenge: accurately mapping the vast tapestry of global locations. This presentation introduces Web-based solutions streamlining this process. Attendees will learn how these tools empower users to effortlessly identify and correlate Wikidata entries with OpenStreetMap locations.

This integration, however, navigates complex waters of differing licenses, sparking lively debates within the community. The talk delves into these intricacies, exploring the ethical and legal considerations of cross-platform data sharing.

Expect an engaging walk-through of the tool's latest iteration, insights into the matching algorithm, and an honest reflection on community responses, including the contentious aspects. The session concludes with a call to action, inviting attendees to contribute and further this pioneering work.

Open Data
QFieldCloud (246)
11:00
30min
Routing for Golf-Carts and other Low-Speed Vehicles using Valhalla
Luke Seelenbinder

From Bolt scooters to golf carts, the future of short distance travel is full of interesting surprises! In this talk, we’ll discuss how the Valhalla routing engine can be used for low-speed vehicle routing. As a motivating example, we’ll discuss a real problem faced by two municipal governments in the US who needed to offer safe routing for golf carts, how this led to the development of a low-speed vehicle profile for Valhalla, and the challenges we faced along the way.
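As a sketch of what such a profile looks like to a client, here is a hypothetical Valhalla /route request body for a low-speed vehicle costing like the one discussed in the talk. The coordinates are made up, and the costing and option names should be checked against your Valhalla version's documentation before use:

```python
import json

# Hypothetical sketch of a Valhalla /route request for a low-speed
# vehicle profile; names follow Valhalla's costing-options pattern.
request_body = {
    "locations": [
        {"lat": 33.11, "lon": -96.67},   # made-up start point
        {"lat": 33.13, "lon": -96.65},   # made-up destination
    ],
    "costing": "low_speed_vehicle",
    "costing_options": {
        "low_speed_vehicle": {
            "vehicle_type": "golf_cart",  # assumed option value
            "top_speed": 35,              # assumed speed cap
        }
    },
}
payload = json.dumps(request_body)
# POST this payload to a running Valhalla server, e.g.
# http://localhost:8002/route, to obtain a route for the cart.
print(payload)
```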

State of software
GEOCAT (301)
11:00
30min
State of deegree: A mature server-side software for spatial data infrastructures
Dirk Stenger

The OSGeo project deegree is open source software for spatial data infrastructures (SDI) and the geospatial web which mainly focuses on the server-side. It implements standards of the Open Geospatial Consortium (OGC) and the ISO Technical Committee 211. The project provides 9 official Reference Implementations of OGC Standards such as GML, WFS, WMS, and OGC API - Features.

This talk will give an overview of the latest stable release of deegree. It will highlight the recent developments in version 3.6, which provides support for Java 17 and Tomcat 10. The deegree implementation of the OGC API - Features standard will also be presented, along with how it can be used with existing deegree configuration.

Finally, the future directions of the project will be highlighted, along with the developments currently planned.

State of software
LAStools (327)
11:00
30min
Unlocking Uralic Heritage and Diversity: URHIA's Open Data Journey in Spatial Exploration
Meeli Roose, Msilikale Msilanga

The University of Turku's interdisciplinary collaboration, spanning geographers, biologists, linguists, and archaeologists, has yielded a rich tradition of studying language evolution and human diversity. Over 15 years, our efforts have culminated in the creation of the Uralic Historical Atlas (URHIA, meaning "brave" in a Finnish dialect), a dynamic spatial platform that provides open access to spatial databases focusing on human diversity in Finland and Northern Eurasia. This platform emphasizes the University of Turku's commitment to making data accessible, contributing to transparent science and effective collaboration for a wider range of insights and perspectives.

URHIA, built on the open-source spatial infrastructure GeoNode by GeoSolutions and integrated into UTU's spatial infrastructure (https://geospatial.utu.fi/resources/utu-geospatial-data-service/), goes beyond being a conventional data repository. It is designed as an interactive spatial platform (https://sites.utu.fi/urhia/) for researchers and lay audiences alike. Currently hosting the Uralic Language Atlas and the Archaeological Artefact Atlas of Finland, URHIA acts as a live data showroom, presenting thematic spatial datasets through interactive online maps. Beyond these achievements, the impact of similar initiatives that use UTU's spatial infrastructure is noticeable worldwide, especially those that aim to improve university students' digital skills and employment opportunities, build the capacity of university staff, and promote open access to digital e-assets.

This presentation delves into the development of the framework and the sharing of groundbreaking new open data through an online spatial data platform, emphasizing platform development and data-specific challenges. It showcases: a) the Uralic Language Atlas (the distribution of Uralic language speaker areas), digitized by the interdisciplinary BEDLAN (https://bedlan.net/) research team; b) the Archaeological Artefact Atlas of Finland, the result of over ten years of collaborative work to digitize Finnish archaeological artefacts, made available at the end of 2023; and c) a list of initiatives that use UTU's spatial infrastructure to develop digital skills for geospatial employment. The participatory design and user-centric principles adopted during the URKO project (2020-2022) laid the foundation for this inclusive approach.

The first interactive showroom, URHIA's language maps, represents a significant leap in digital linguistics. In a landscape where extensive databases of geographical language distributions are often missing, URHIA stands out as a pioneering initiative. Developed collaboratively at the University of Turku, the Uralic Language Atlas provides a groundbreaking open dataset. The second showroom, still a work in progress, is the Archaeological Artefact Atlas of Finland. Presenting data from the Archaeological Artefact Database of Finland (AADA), this atlas provides comprehensive information on over 49,000 collection entries of Finnish archaeological materials. AADA represents a pioneering effort, being the first database of its kind in Finland and possibly globally. Its creation marks a milestone in the digitization and accessibility of archaeological data, setting the stage for similar initiatives worldwide. The third showroom lists the initiatives, processes and cases that use UTU's spatial infrastructure for enhancing digital skills for geospatial employability.

URHIA has evolved into a dynamic tool that caters to diverse research needs through collaborative teamwork and a commitment to user-centric design. Highlighting the unique open dataset within URHIA, this presentation underscores the concept of following scientific FAIR (Findable, Accessible, Interoperable, and Reusable) principles. The Uralic Language Atlas and the Archaeological Artefact Atlas of Finland exemplify the commitment to making data findable and accessible, contributing to the larger vision of transparent science and promoting effective collaboration, leading to a broader and more diverse range of insights and perspectives.

Open Data
Omicum
11:00
30min
ZOO-Project: from OGC WPS to OGC API - Processes Part 1 and Part 2
Rajat Shinde, Gérald Fenoy

The ZOO-Project is an open source processing platform, released under the MIT/X11 licence. It provides the polyglot ZOO-Kernel, a server implementation of the Web Processing Service (WPS) standards (1.0.0 and 2.0.0) and the OGC API - Processes standard published by the OGC. It contains ZOO-Services, a minimal set of ready-to-use services that can be used as a base to create more useful services. It provides the ZOO-API, initially only available from the JavaScript service implementation, which exposes ZOO-Kernel variables and functions to the language used to implement a service. It also contains the ZOO-Client, a JavaScript API which can be used from a client application to interact with a WPS server.

State of software
Van46 ring
11:30
11:30
30min
DuckDB with Geodata
Jakob Miksch

DuckDB has established itself in the data science community as a lightweight tool for data analysis of all kinds. It now has an official extension that can work with geospatial data. In this talk we will introduce the basic features.

DuckDB can read data from various sources, such as files (CSV, JSON, ...), the Internet or other databases. The imported data can be combined and processed using SQL. The "spatial" extension of DuckDB now also supports spatial data types such as points, lines or polygons. In addition, the GEOS library integrated in the background provides geographic analysis functions such as area calculation, intersection or buffer calculation. GDAL is also integrated in the background and allows reading and writing of many other formats from the geographic world.
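As an illustration of the workflow described above, a short SQL sketch; the file name is hypothetical, and function availability should be checked against the spatial extension's documentation:

```sql
INSTALL spatial;
LOAD spatial;

-- Read a vector file through the bundled GDAL integration
CREATE TABLE parcels AS
SELECT * FROM ST_Read('parcels.geojson');

-- GEOS-backed analysis: area, buffering, intersection tests
SELECT ST_Area(geom)       AS area,
       ST_Buffer(geom, 10) AS buffered
FROM parcels
WHERE ST_Intersects(geom, ST_Point(26.72, 58.38));
```

The same SQL can be run from the DuckDB CLI or from its Python and R bindings, which is part of the tool's appeal for data analysis.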

State of software
Omicum
11:30
30min
Our journey into an OGC-compliant Processing Engine
Miguel Delgado

We have recently adopted the OGC API-Processes specification as we modernize our legacy processing platform at UP42. The legacy platform was plagued by poor compatibility among linked processes, high rates of failure, and unnecessary complexity when handling multi-data inputs. The OGC API-Processes specification offers a standard interface that makes complex computational processing services accessible via a RESTful API.

Behind the scenes we built a well-choreographed set of micro-services that give substance to the standard endpoints: a process registry service (ProcessList, ProcessDescription), a job registry service (Status, JobList, Result), a task execution service (Execute) and more. To avoid the failure rate experienced in our legacy platform, we enabled very restrictive validation of every job ahead of execution, leveraging our STAC-in/STAC-out paradigm (relying widely on extensions like proj, eo, etc.). Our job registry service also leverages STAC to ensure full traceability of jobs and items.

Now we would like to look back and share our journey with the community, showing how embracing community specifications like STAC and OGC API - Processes enabled us to transition to a more reliable and scalable processing engine.

More information: https://up42.com/blog/pansharpening-an-initial-view-into-our-advanced-analytic-capabilities

Open standards and interoperability for geospatial
Van46 ring
11:30
30min
State of GeoNode
Giovanni Allegri, Emanuele Tajariol

This presentation will introduce attendees to GeoNode's current capabilities and to some practical use cases of particular interest, in order to also highlight the possibilities for customization and integration. We will provide a summary of the new features added to GeoNode in the last release, together with a glimpse of what we have planned for next year and beyond, straight from the core developers.

State of software
LAStools (327)
11:30
30min
Use of FOSS4G Technologies in the Management of Railway Infrastructure Data
Akhil Jayant Patil

Railways have always been regarded as one of the best public transport options since their invention. Even a single freight railway trip, along with all the surrounding railway environment, produces a huge amount of data: routing data, train schedules, on-board sensor data, wayside field unit data, etc. Such data are normally temporally and spatially referenced. This data helps in correctly routing trains, in maintaining and monitoring the condition of the infrastructure, in expanding the existing infrastructure, and in many other tasks. The use of free and open source geospatial software greatly helps us with the management and processing of these datasets. With digitalization and the rise of the Internet of Things (IoT), built on a sensor ecosystem, we are looking at data that is generated at a very high rate and is crucial for analysis in both the short and the long term. The background digital infrastructure that handles such data should be state-of-the-art, fault-tolerant, scalable and easy to operate. This talk explains how we use FOSS4G technologies to build our digital infrastructure platform.

We at the Institute of Transportation Systems (TS) of the German Aerospace Center (DLR) started with this idea in mind and developed an infrastructure platform called the Transportation Infrastructure Data Platform (TRIDAP). It is provisionally operational and is being developed further. DLR-TS conducts research into technologies for the intermodal, connected and automated transport of the future on road and rail. Research into new systems in the rail and road transport domain requires digital twins. The digital twin structure helps to draw a holistic picture of the road and rail infrastructure in connection with the vehicles, people and goods moving within it. This is realized using distributed system architectures and artificial intelligence methods. The TRIDAP platform is developed using various FOSS4G technologies. The platform makes data available to researchers within DLR, as well as to project partners and other stakeholders, over a long period of time for analysis and visualization. The platform development is part of the DLR-funded cross-domain project "Digitaler Atlas 2.0".

The datasets handled in TRIDAP vary greatly in size, nature and format (numerical sensor measurements, images from visual sensors, streams of data from a single geo-location, and many other variations). TRIDAP stores structured datasets of these types in a PostGIS database and non-structured data in file folders. A mammoth data model was developed to accommodate the different datasets in databases, along with the possibility to track changes. Provision is also made to store non-structured data in a hierarchy of storage space on a NetApp base. TRIDAP supports the analysis and sharing of georeferenced as well as non-georeferenced datasets. For condition monitoring applications, information on changes in the railway infrastructure and on management activities (such as repair and improvement of existing infrastructure) carried out in the past is also stored in the platform. In order to make these datasets Findable, Accessible, Interoperable and Reusable (FAIR), the system stores sufficient metadata and supports the publication of datasets through the open-source software GeoServer and GeoNetwork. Most of the data are georeferenced and stored in a common space and time reference frame: the World Geodetic System (WGS84) and Coordinated Universal Time (UTC).

The platform contains instances of various big data open-source software, such as Apache Kafka, Apache Flink and Apache Spark, to process and analyze the data through the development of stream and batch processing applications. To fuse measurement and weather datasets, we are currently developing a Python-based tool to download data from the Deutscher Wetterdienst (DWD) for a user-defined region and time period directly into the data processing application. Weather data from other internal and external sources are planned to be integrated in the future. In order to provide a high-quality service to the researchers at DLR-TS as well as our project partners, it is essential to ensure high availability and optimal performance of the platform. To achieve this, we are integrating all components of TRIDAP into a monitoring framework that uses the monitoring tool Prometheus and the visualization tool Grafana. TRIDAP also has a Python-based tool in development to validate the data being stored in the system. For this purpose, we define a set of validation rules together with the team of researchers, data owners and data generators. The validation tool deals with dynamic live data received from railway locomotives and wagons in the field, and with infrastructure data stored in databases. When validation errors are identified, the team of data owners and generators is immediately informed in order to take further action.

The geo-datasets stored in TRIDAP are shared with stakeholders in standardized data formats through GeoServer. GeoNetwork is used to set up a geodata catalog that enables easy search of and access to the datasets stored in the platform. GeoNetwork uses metadata standards such as Dublin Core and ISO/TS 19139 to document metadata. It is also planned to connect GeoNetwork with the research data repository (FDR) of DLR to obtain persistent IDs (PIDs) for the datasets on demand. Certain datasets stored in the platform are confidential and have restricted access. This is currently being implemented through the definition of multiple users, roles and data security rules in GeoServer as well as in the data storage layers.

Use cases & applications
GEOCAT (301)
11:30
30min
Where is the free, very high-resolution imagery?
Batuhan Kavlak

Massive Earth Observation imagery datasets are now available publicly and freely. However, very high-resolution imagery (<1 m/pixel) is expensive and rarely accessible for free. Such imagery is particularly critical for disaster response, but still we struggle to find and use it. Why is this the case? Where is the very high-resolution imagery, and where is the imagery that enables us to freely create solutions that contribute to society?

Starting with the Türkiye and Syria earthquake on February 6, 2023, I've contributed to OpenAerialMap (openaerialmap.org) by processing and uploading Maxar imagery for disasters, such as those in Morocco and Nepal.

I'd like to critically discuss the current state of VHR imagery and present my experience, challenges, and suggestions for improved availability in the context of disaster mapping. I want to touch upon the following topics:

  • Where is very high-resolution imagery?
  • How is it being used?
  • Why is it not free to use?
  • Still, where can we reach them?
  • Why do we need better access?

Open Data
QFieldCloud (246)
12:00
12:00
30min
Demystifying OGC APIs with GeoServer: introduction and status of implementation
Andrea Aime

The OGC APIs are a fresh take on geospatial APIs, based on Web API concepts and modern formats, including:

  • Small core with basic functionality, extra functionality provided by extensions
  • OpenAPI/RESTful based
  • JSON first, while still allowing data to be provided in other formats
  • No mandate to publish schemas for data
  • Improved support for data tiles (e.g., vector tiles)
  • Specialized APIs in addition to general ones (e.g., DAPA vs OGC API - Processes)
  • Full blown services, building blocks, and ease of extensibility

This presentation will provide an introduction to various OGC APIs and extensions, such as Features, Styles, Maps and Tiles, STAC and CQL2 filtering.
Some have reached a final release, while some are still drafts: we will discuss their trajectory towards official status, as well as how closely the GeoServer implementation tracks them, and show examples based on the GeoServer HTML representation of the various resources.
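As a small illustration of the CQL2 filtering mentioned above, here is a hedged Python sketch that builds an items request URL. The server path and collection name are hypothetical, and the parameter names follow the draft OGC API - Features Part 3 (Filtering) specification:

```python
from urllib.parse import urlencode

# Hypothetical base URL: a GeoServer OGC API - Features collection
# (server host and collection name are invented for illustration).
base = "https://example.com/geoserver/ogc/features/v1/collections/places/items"

# A CQL2 text filter combining an attribute predicate with a spatial one.
params = {
    "filter-lang": "cql2-text",
    "filter": "population > 100000 AND S_INTERSECTS(geom, POINT(26.72 58.38))",
    "limit": 50,
}
url = base + "?" + urlencode(params)
# The server evaluates the filter and returns only matching features.
print(url)
```

The same filter can also be expressed in the CQL2 JSON encoding and sent in a POST body, which is easier to build programmatically for deeply nested predicates.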

Open standards and interoperability for geospatial
Van46 ring
12:00
30min
Disaster Management GIS in Action: Leveraging Open-Source Software for Rapid Response
Polina Krukovich, Vasili Bondar

In the face of natural disasters, response time is critical. Mapping and geospatial insights play a pivotal role in understanding the impact and coordinating efforts. This presentation will delve into the capabilities and benefits of open-source disaster management software, focusing on Disaster Ninja, an innovative tool developed by Kontur. This critical event management solution, now open-source, enhances situational awareness by visualizing mapping gaps and facilitating connections with local mappers for ground truth verification.

Disaster Ninja streamlines the preparation of mapping tasks, enabling emergency cartographers to work efficiently, often reducing task preparation from hours to minutes.

Our talk will explore how open-source tools like Disaster Ninja can empower disaster response efforts by providing actionable insights, demonstrating the tool's application in real-world scenarios, and discussing its development in collaboration with the Humanitarian OpenStreetMap Team (HOT). We aim to foster the development of FOSS4G by offering our experiences and the capabilities of Disaster Ninja, to enhance collaboration, innovation, and the practical application of these resources during disaster events.

State of software
QFieldCloud (246)
12:00
5min
Enhance your MapServer Workflows with mappyfile
Seth Girvin

mappyfile became an OSGeo Community project in 2023. This talk gives an overview of the project and its new plugins, and shows how it can help you improve your MapServer development and deployments.

State of software
LAStools (327)
12:00
30min
Managing airport data with Open Source Software
Pekka Sarkola

An airport is a very demanding environment to build, maintain and operate. Busy airports operate 24/7, every day. The safety and security of passengers, crew and aircraft are crucial for airport operators. Almost all activities at airports are also regulated by international and national authorities. Nowadays the importance of geospatial data is growing for airport operators, to efficiently manage airports inside and out. In this presentation, we will show how FOSS4G software is used today to manage geospatial airport data and what the near-future challenges are.

The first impression of smooth air travel starts when a passenger arrives at the airport: how do I get there by public transport, or where can I park my car? Before boarding the aircraft, passengers want to check in easily, pass security checks and then use various services, such as restaurants, shops and restrooms. Airport outdoor and indoor maps are key tools for passengers travelling from outside the airport to the gate of the aircraft. We will show how to maintain a PostGIS database with QGIS, how to share the necessary information with GeoServer, and how maps are delivered to passengers on different devices.

Airport operators are mandated to collect, maintain and deliver aeronautical data for the airport. Aeronautical data is a key input to aeronautical information products, which include both digital data sets and standardised presentations in paper and electronic media. We will show how an airport operator can collect and maintain aeronautical data in a PostGIS database with QGIS.
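As a hedged sketch of the kind of schema such a QGIS-plus-PostGIS workflow might use (the table and column names below are made up for illustration), aeronautical features such as obstacles can be kept in a spatially indexed table and edited directly from QGIS:

```sql
-- Hypothetical PostGIS table for aerodrome obstacle points,
-- maintained through QGIS and published via GeoServer.
CREATE TABLE aero_obstacles (
    id            serial PRIMARY KEY,
    designator    text NOT NULL,          -- obstacle identifier
    obstacle_type text,                   -- e.g. mast, crane, building
    elevation_m   numeric(7,2),           -- elevation above mean sea level
    updated_at    timestamptz DEFAULT now(),
    geom          geometry(PointZ, 4326)  -- 3D point in WGS84
);

-- Spatial index so map rendering and proximity queries stay fast
CREATE INDEX aero_obstacles_geom_idx ON aero_obstacles USING gist (geom);
```

QGIS can edit such a table directly over a PostgreSQL connection, and GeoServer can publish it as WMS/WFS layers without any intermediate export step.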

Airports are constantly evolving, and so is airport data management. New regulations are coming, and airport operators need to manage their operations cost-effectively. We will discuss possible future development projects, such as Foreign Object Debris (FOD), the aerodrome mapping database (AMDB) and obstacle management.

Use cases & applications
GEOCAT (301)
12:00
30min
QGIS as a Tool in Planning Optical Fiber Networks
Chris Nichterlein

Planning an optical fiber network is a complex process. Early draft versions of the network are usually used to give a rough cost estimate. As this process is already very work-intensive, things get especially serious when construction is scheduled to start. Permission documents, digging permits and all kinds of forms need to be submitted to local authorities.

As in many engineering projects, time is key. Whatever helps to simplify and automate work steps is a big game changer. The creation of permission documents and maps in particular can be a very time-consuming process. To create such documents, we turned to QGIS, and after an intensive (and ongoing) process of customization, we are now able to produce dozens of documents with just a few clicks.

In our talk we will show a real use case from projects currently in the execution phase in Saxony, Germany. This development was made possible by a strong cooperation between Estonian Fiber OÜ (EST), aastrix GmbH (GER) and Yellow Arrow OÜ (EST).

Use cases & applications
Omicum
12:05
12:05
5min
OpenLayers and Vue
Jakob Miksch

There are many ways to include an OpenLayers map into a Vue web application. This presentation will explore a few techniques such as vue3-openlayers, vue-ol-comp, and Wegue. The primary emphasis is on how the state of the OpenLayers map and its layers can be reactively accessed across all components of the web application.

Use cases & applications
LAStools (327)
12:10
12:10
5min
ol-stac: STAC and OpenLayers combined
Matthias Mohr

OL STAC makes it easy to put STAC resources on a dynamic OpenLayers map. It can display geometries, GeoTIFF files, web map layers and more in an "automagical" way. It is completely free, Open Source JavaScript, released under the Apache 2.0 license. This talk introduces you to the project and shows some use cases.

State of software
LAStools (327)
12:15
12:15
5min
Cloud-Native Asset Model with STAC
Batuhan Kavlak

STAC is a well-known and widely acknowledged spatiotemporal metadata standard within the community. There are many applications built on open data; however, few premium satellite imagery providers have adopted it. At UP42, we adopted STAC as the core metadata system within our applications, defining how we store data. Last year, we presented how we designed a standard data management system with STAC:
https://www.youtube.com/watch?v=WVE5VZzoOqM&t=1s

Since last year's talk, we have developed another concept to standardize data access and processing. We designed a Cloud-Native Asset Model combining existing concepts such as STAC and COG, in which we transform all files delivered by providers into a more or less standard format, using GDAL extensively. We want to keep the community updated on our experience and share our takeaways.
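As an illustration of the kind of standardized metadata STAC provides, here is a minimal, self-contained sketch of a STAC Item pointing at a single COG asset. The field names follow the public STAC 1.0 Item specification; the id, bbox, timestamp, and asset URL are invented for the example and are not UP42's actual model.

```python
import json

def make_stac_item(item_id, bbox, datetime_utc, cog_href):
    """Build a minimal STAC 1.0 Item dict with one COG asset.

    The required top-level fields (type, stac_version, id, geometry,
    bbox, properties, links, assets) come from the STAC Item spec.
    """
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": item_id,
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],
            ]],
        },
        "bbox": list(bbox),
        "properties": {"datetime": datetime_utc},
        "links": [],
        "assets": {
            "data": {
                # Common media type for Cloud-Optimized GeoTIFFs
                "href": cog_href,
                "type": "image/tiff; application=geotiff; profile=cloud-optimized",
                "roles": ["data"],
            }
        },
    }

# Hypothetical scene over Tartu; values are illustration only.
item = make_stac_item(
    "scene-001", (26.6, 58.3, 26.8, 58.4),
    "2024-07-01T10:00:00Z", "https://example.com/scene-001.tif",
)
print(json.dumps(item, indent=2)[:80])
```

Because an Item is plain GeoJSON, it can be produced and consumed by any tooling that speaks JSON, which is what makes the model portable across providers.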

More information:
- https://up42.com/data-management
- https://up42.com/blog/introducing-a-cloud-native-asset-model

Use cases & applications
LAStools (327)
12:20
12:20
5min
Paituli STAC - experiences of setting up and using own STAC catalog
Külli Ek

In this presentation we discuss our experiences from setting up Paituli STAC, which contains open Finnish raster datasets. We did not do any software development ourselves, but decided to use GeoServer with its STAC extension. Custom code was written only to populate the PostGIS database with information about ~100 collections and ~250,000 items. The Paituli STAC catalog is mainly targeted at data analysis use cases, but it can also be used from web applications.

We have also prepared public example scripts for using Paituli STAC with Python and R, and we will show the results of some scaling tests that read data from STAC on a supercomputer with Dask and xarray.

More information: https://paituli.csc.fi/stac.html

Open standards and interoperability for geospatial
LAStools (327)
12:25
12:25
5min
Satiladu – an Open Web Map for Estonian Land Use and Land Cover Data
Martin Menert

We live in an era of growing demand for natural resources driven by global population growth. It is therefore increasingly urgent to plan land use responsibly, both globally and at the European level. On the path towards public agreement on reducing human-induced environmental stress, Estonia, among other countries, has joined the Paris Agreement as well as the “Fit for 55” package.

To serve public needs for land use and land cover (LULC) data, the open web map Satiladu (https://satiladu.maaamet.ee/en) is now in its fourth year of operation. The application was released by the Estonian Land Board on 17 February 2021, initially to provide the easiest possible access to Copernicus Sentinel data. Through active use by different stakeholders, Satiladu has grown into a convenient platform providing user-friendly access to other important LULC data as well – data produced by the Estonian Land Board, the Estonian Environment Agency, and the Estonian Agricultural Registers and Information Board.

By sector, Satiladu has proven important for:
(1) public sector institutions (incl. offices related to environmental and real estate planning);
(2) open communities involved in environmental planning activities;
(3) educational and research institutions.

Looking ahead, the platform is intended to address more citizen science needs. As an accessible demonstrator of LULC data use cases covering Estonia, the vision is to apply its best practices and more mature algorithms at the European level as well.

Use cases & applications
LAStools (327)
12:30
12:30
90min
Lunch
Van46 ring
12:30
90min
Lunch
GEOCAT (301)
12:30
90min
Lunch
LAStools (327)
12:30
90min
Lunch
QFieldCloud (246)
12:30
90min
Lunch
Omicum
14:00
14:00
30min
Empowering Rapid Disaster Response with OpenAerialMap
Milvari Alieva

In the face of disasters, timely access to accurate geospatial data is critical for effective response efforts. This presentation explores the pivotal role of OpenAerialMap (OAM) in enhancing open maps and facilitating swift disaster response. We delve into the evolution of OAM, from its inception to the latest advancements, highlighting its use as a comprehensive repository of openly licensed satellite and UAV imagery. The session will showcase OAM Mosaic Map's features. Join us to discover how OAM, through collaboration with the Humanitarian OpenStreetMap Team, is shaping the future of open maps and geospatial response in times of crisis.

Use cases & applications
LAStools (327)
14:00
30min
Geocint: Open Source Geospatial Data Processing Pipeline
Andrew Valasiuk

For data-driven organizations, it is critical to have reliable ETL processes. As an open-source tool, Geocint can help organizations and individuals who work with geospatial data and need to process it efficiently.
Geocint is an ETL pipeline for processing geospatial data. At Kontur we have been using Geocint internally for a long time – to build the Kontur Population and Kontur Boundaries datasets. We also used it to prepare data for the Disaster Ninja app before deciding to make it reusable by other organizations in the GIS field.
We built the Geocint pipeline around PostgreSQL, PostGIS, and h3-pg (PostgreSQL bindings for H3). Thus, Geocint combines the powerful data processing features of PostgreSQL with the efficient geometric operations of PostGIS and the key benefits of the H3 grid system, such as high-performance lookups and a compact storage format.

State of software
GEOCAT (301)
14:00
30min
State of STAC
Matthias Mohr

The SpatioTemporal Asset Catalog (STAC) specifications are a flexible language for describing geospatial information across domains and for a variety of use cases. This talk will present the current state of the specifications, which include the core STAC specification and the API specification built on top of the OGC APIs. The core specification is planned to release version 1.1 before FOSS4G Europe, and this talk is meant to guide you through the changes. The presentation also digs into additions to the STAC extensions and the latest community developments, and surveys updates to the open-source STAC ecosystem, which includes software written in Python, JavaScript, and more.

Open standards and interoperability for geospatial
Van46 ring
14:00
30min
Using external dependencies in QGIS plugins
Antero Komi

This talk presents different methods to handle dependencies to external libraries in QGIS plugins.

Compared to, for example, the web development world, there is no wide adoption of general-purpose QGIS libraries, nor an easy way to integrate such libraries into one's own plugin or library development. In addition, some widely used non-QGIS-specific libraries, such as pandas for data manipulation, might be beneficial for QGIS plugins and libraries as well.

Built-in QGIS features include declaring dependency plugins, but usage must then rely on accessing the plugin instance and its API, importing code from the plugin package in a guarded way, or using only, for example, the processing providers installed by such dependency plugins. Sharing and using general-purpose GUI components, simple functions, and the like through an external, possibly pip-installable dependency library is not straightforward and has many obstacles.

Some methods in use include asking the user to install the dependency manually, using subprocess calls to install dependencies automatically, shipping dependencies together with the plugin code and manipulating import paths, or bundling the dependencies into the code with imports rewritten to point to the bundled library. Difficulties with some or all of these approaches include possible version conflicts between different plugins' requirements, version mismatches with the expected runtime, and platform incompatibility.
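One of the methods mentioned above, importing an optional dependency in a guarded way so the plugin degrades gracefully instead of crashing at load time, can be sketched roughly as follows. This is a generic illustration, not code from the talk; the pandas example is an assumption based on the library mentioned earlier.

```python
import importlib

def optional_import(module_name):
    """Return the imported module, or None if it is not installed.

    A guarded-import helper of the kind a QGIS plugin might use so that
    features relying on an optional library can simply be disabled when
    the library is missing, rather than breaking the whole plugin.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None

pandas = optional_import("pandas")
if pandas is None:
    # The plugin would disable its pandas-backed features here and
    # perhaps prompt the user to install the dependency.
    print("pandas not available; table analysis features disabled")
```

The trade-off, as the abstract notes, is that every feature touching the optional library must check for its absence, and version mismatches are only discovered at runtime.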

This talk compares the pros and cons of these different methods, possible use cases for each, and their effects on the development workflow, and shows available tools that help in applying some of them.

Use cases & applications
Omicum
14:00
30min
Your Geoportal F***ing Sucks
Ian Turton

Many national and regional governments have in the past few decades created geoportals to meet their
obligations to provide citizens with access to their spatial data. This spatial data is collected, in many
cases, at taxpayer expense. Indeed, the EU (2024) says:

The publication of data is driven by the belief that it brings enormous benefits to citizens, businesses,
and public administrations, while at the same time enabling stronger co-operation across Europe. Open data
can bring benefits in various fields, such as health, food security, education, climate, intelligent
transport systems, and smart cities - and is considered "an essential resource for economic growth, job
creation and societal progress".

But even now, nearly a quarter century after the introduction of the first Open Geospatial Consortium (OGC)
standards for interoperability, there seems to be a widespread failure to use OGC standards to provide
access to the underlying data that citizens need to create economic growth.

This paper will detail the author's experiences with attempting to acquire spatial data and their observations
of relatively inexperienced students trying to navigate some examples of geoportals. The paper will then make
some suggestions to help data providers serve data with the modern methods and formats that users actually
want, using open source tools such as GeoServer.

Open Data
QFieldCloud (246)
14:30
14:30
30min
Can we use Nix as a default way of distributing geospatial software ?
Ivan Minčík - @imincik

Nix provides the largest collection of software packages on the planet, including geospatial software maintained by the Nix Geospatial Team. It is multi-platform, running on any Linux distribution and even on a Mac. In addition, Nix can build per-project isolated environments and container images, run services, and provide many other unique features not found anywhere else. Nix environments and services are configured declaratively (tell Nix what you want, and it will know how to get there). They are reproducible, and Nix provides full control of the dependency graph from the kernel level up.

Currently, adoption of Nix does not reflect the number of unique features it provides, mostly because of its steep learning curve.

Fortunately, with new tools in the Nix ecosystem this is no longer a problem. I want to demonstrate geospatial-nix.today, a web interface which provides user-friendly access to all the packages and features of Nix mentioned above, and I will present arguments for why we should seriously consider using this technology for our software.

This talk is a follow-up to my previous FOSS4G talk about the features and potential benefits of using the Nix technology stack for geospatial use cases.

State of software
GEOCAT (301)
14:30
30min
Finnish National Geoportal Paikkatietoikkuna turns 15 years!
Sini Pöytäniemi

Finnish National Geoportal Paikkatietoikkuna was first launched in 2009 and is now the home of over 3,000 open map layers from nearly 70 different data producers in Finland, used by 3,000 to 6,000 people every day. Its background is largely in the INSPIRE directive, which expects spatial data to be accessible, reusable, and interoperable: the geoportal functions as a national service where data producers can demonstrate and display their open datasets to the public.

Paikkatietoikkuna is regularly referred to in social media, news articles, and blogs, as well as in teaching materials in the education sector, as it has become well known in society. Today, Paikkatietoikkuna is essential for thousands of professionals in a variety of fields, such as forestry and the environmental sector. In their daily work, these people need easy-to-use map applications with national or local datasets: for viewing and overlaying map layers for comparison, for creating their own map data for simple analyses, or for embedding a map on their website without any programming skills.

Soon after its birth, Paikkatietoikkuna initiated what is now known as the open source Oskari map framework (oskari.org), and today many other map applications are based on Oskari.

In this presentation you will hear how Paikkatietoikkuna gained its central role within the Finnish spatial data infrastructure.

Use cases & applications
QFieldCloud (246)
14:30
30min
How to set up a QGIS plugin project and development environment in minutes
Lauri Kajan

Creating a new QGIS plugin and setting up a working development environment from scratch can be daunting, especially for beginners or occasional developers. In this talk, I present a templating tool that simplifies and streamlines the plugin development process. The tool is based on Cookiecutter, a well-known command-line utility that generates projects from templates. The template we developed at Gispo (https://github.com/GispoCoding/cookiecutter-qgis-plugin):
- is highly customizable and follows the best practices for QGIS plugin development
- includes features such as testing, documentation, internationalization, packaging, continuous integration and development environment creation
- allows anyone to quickly start a new plugin project in minutes with minimal effort and consistent structure

I demonstrate how to use the tool, how to modify the template options, and how to publish the plugin to the QGIS plugin repository. I also share some tips and tricks for developing and maintaining QGIS plugins. This talk targets anyone who is interested in creating or improving QGIS plugins, regardless of their experience or expertise.

State of software
Omicum
14:30
30min
Publishing INSPIRE and other rich data models in GeoServer made easy with Smart Data Loader and Features Templating
Andrea Aime, Nuno Oliveira

This presentation will cover the support GeoServer provides to publish rich data models (complex features with nested properties and multiple-cardinality relationships), through OGC services and OGC API - Features, focusing on the recent Smart Data Loader and Features Templating extensions, covering in detail ongoing and planned work on GeoServer.

As far as the INSPIRE scenario is concerned, GeoServer has extensive support for implementing view and download services thanks to its core capabilities but also to a number of free and open-source extensions; undoubtedly the most well-known (and dreaded) extension is App-Schema, which can be used to publish complex data models and implement sophisticated download services for vector data.

We will also provide an overview of how those extensions are serving as a foundation for new approaches to publishing rich data models: publishing them directly from MongoDB, embracing the NoSQL nature of it, and supporting new output formats like JSON-LD which allows us to embed well-known semantics in our data.

Real-world use cases from the organizations that have selected GeoServer and GeoSolutions to support their use cases will be introduced to provide the attendees with references and lessons learned that could put them on the right path when adopting GeoServer.

Open standards and interoperability for geospatial
Van46 ring
14:30
30min
Vector Tiles: Spatial Selection with PyQGIS
Helene Steiner, Lukas Nebel

As a transmission system operator (Austrian Power Grid), we need up-to-date information on parcels and land use during construction projects. The Austrian data provider in this field (Federal Office of Metrology and Surveying – kataster.bev.gv.at) publishes an open data vector tile cache with this information on a daily basis. The vector tile format is good for visualisation, but for exporting multiple distinct parcel geometries there is no out-of-the-box solution in QGIS so far.

We present a methodology of downloading data from vector tiles based on a defined spatial selection with PyQGIS. Based on a geometry (e.g., a power line) we are able to select the parcels of interest. One challenge is the fragmented provision of data by vector tiles. Using GeoPandas we combine the tiles into distinct geometries which can be postprocessed.

The result is precise parcels including attribute data and metadata. The challenge is to reduce the amount of data in the spatial selection process in order to find the parcels of interest.
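The tile-selection step described above starts from working out which tiles of the cache intersect the geometry of interest. A rough, self-contained sketch of that arithmetic for the standard XYZ/Web Mercator tiling scheme (independent of the BEV service and of PyQGIS, with made-up coordinates) might look like this:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a WGS84 lon/lat to XYZ tile indices at the given zoom,
    using the standard Web Mercator slippy-map formula."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_for_bbox(west, south, east, north, zoom):
    """List all (x, y) tile indices covering a bounding box, e.g. the
    extent of a power line corridor. Note that tile y grows southwards,
    so the north edge gives the minimum y."""
    x_min, y_min = lonlat_to_tile(west, north, zoom)
    x_max, y_max = lonlat_to_tile(east, south, zoom)
    return [(x, y)
            for x in range(x_min, x_max + 1)
            for y in range(y_min, y_max + 1)]

# Tiles covering a small illustrative area near Vienna at zoom 14
tiles = tiles_for_bbox(16.30, 48.18, 16.42, 48.25, 14)
print(len(tiles), tiles[0])
```

Each resulting (zoom, x, y) triple identifies one tile to fetch; the fragmented parcel features clipped at tile borders must then be merged back into whole geometries, which is the step the talk handles with GeoPandas.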

Use cases & applications
LAStools (327)
15:00
15:00
30min
Challenges for displaying STAC Items and Assets in the browser
Daniel Scarpim

At UP42 we make extensive use of map visualizations of STAC Items and Assets, including vector files and very-high-resolution imagery, in our main React application. We will show our journey from simply displaying geometries on a map using React Leaflet and HERE maps to eventually adding high-resolution previews and interactions. We will go through the main issues we encountered during this process, what we did to solve them, and what we learned along the way.

In this talk we will present how we handled displaying and interacting with GeoJSON and GeoTIFF previews using current open source tools. We will present some of our challenges, such as rendering COGs directly, handling different projections, authentication with Leaflet, performance, error handling, and integrating dynamic tiling services such as TiTiler.

More information:
https://up42.com/data-management

State of software
Omicum
15:00
30min
Improving interoperability between OpenDRIVE HD map data and GIS using GDAL
Michael Scholz

Our new vector driver for GDAL offers the possibility to convert highly detailed HD map data from the complex road description format OpenDRIVE into common geodata formats such as GeoPackage, GeoJSON, Shapefile, KML or spatial databases. Finally, this makes OpenDRIVE easily usable in established GIS applications.

Within the domains of automotive and transportation, highly detailed road network datasets (HD maps) emerged as a core component for development, testing, function validation and also for later production use. Applications such as autonomous driving, driving simulation and traffic simulation often rely on special engineering data formats, of which ASAM OpenDRIVE [1] evolved as an open industry standard. This domain-specific data model bundles mathematical, continuous track geometry modelling with all necessary topological links and semantic information from traffic-regulating infrastructure (signs and traffic lights).

OpenDRIVE’s complexity makes data acquisition a sophisticated task, often financed by the automotive industry and conducted by third-party mobile mapping providers. Recently, governmental institutions have also shown increased interest in such data, particularly in the context of urban transport planning and road infrastructure maintenance. However, even though such OpenDRIVE data often covers the institutions’ own urban space, it is often "inaccessible" because tool support for OpenDRIVE is mostly limited to expensive commercial software and — even worse — lacks integration into popular Geographic Information Systems (GIS). Our free software contribution [2] extends the common Geospatial Data Abstraction Library (GDAL) [3] and transforms OpenDRIVE road elements into OGC Simple Features [4] which can be loaded and processed ad hoc by all commercial and free GIS tools! This way, OpenDRIVE data can directly be loaded in QGIS, for example, which involves less overhead than having to intermediately convert it to CityGML using r:trån [5] beforehand.

By bringing the domains of automotive engineering and GIS closer together, we hope to stimulate interdisciplinary knowledge transfer and the creation of an interconnected research community. With our open software extension, we empower public authorities and research institutions with easier access to highly-detailed road data, which might otherwise be limited to just industrial players. Vice versa, the (automotive) industry benefits from established tools and data provisioning services of the spatial data domain, with which it does normally not interact.

Based on our experience with extending GDAL, other domain-specific data formats such as railML, RoadXML and the NDS Open Lane Model could be made accessible for the greater audience of GIS users as well.

[1] ASAM OpenDRIVE: https://www.asam.net/standards/detail/opendrive/
[2] Git branch of OpenDRIVE driver: https://github.com/DLR-TS/gdal/tree/libopendrive
[3] GDAL: https://gdal.org
[4] OGC Simple Feature Access: https://www.ogc.org/standard/sfa/
[5] r:trån: https://doi.org/10.5281/zenodo.7702313

Open standards and interoperability for geospatial
Van46 ring
15:00
30min
Kart: A Practical Approach to Geospatial Data Versioning
Robert Coup

In today's data-rich environment, geospatial professionals struggle with a lack of versioning tools compared to other software domains. Kart (https://kartproject.org) is a practical solution designed to address this gap and support data collaboration.

In this session, we will introduce Kart and demonstrate its core functionalities across raster, vector, table, and point cloud datasets. Through practical examples, attendees will gain a clear understanding of Kart's capabilities, including its integration with QGIS via the Kart plugin and its support for datasets that live on S3-compatible object stores, without any duplication. Additionally, we'll talk through our roadmap, highlighting what we're working on next.

With Kart, managing history, branches, data schemas, and synchronisation becomes straightforward, regardless of software ecosystem. Kart empowers teams to collaborate better, keeping teams aligned and facilitating easy review and tracing of changes.

Join us to explore how Kart is reshaping geospatial data management, offering practical solutions for collaboration and productivity.

State of software
GEOCAT (301)
15:00
30min
Stadia x Stamen: A New Era for Stamen Map Tiles
Luke Seelenbinder

The renowned Stamen Map Tiles, after more than a decade of being used and loved by digital cartographers the world over, have received a facelift. Together, Stamen and Stadia Maps created all new vector versions of Toner and Terrain based on the modern mapping stack of open data and an open source toolbox of vector tiles and styles, while preserving backward compatibility for existing users. We will discuss the technical challenges to creating an affordable map tiling service at scale and provide some perspective on how OSM-based digital cartography has changed since these tilesets were originally created.

Building a business with FOSS4G
LAStools (327)
15:00
30min
State of Oskari
Sami Mäkinen

Oskari is used worldwide to provide web-based map applications built on top of existing spatial data infrastructures. Oskari offers building blocks for creating and customizing your own geoportals and allows embedding maps into other sites, where they can be controlled with a simple API. In addition to showing data from spatial services, Oskari offers hooks for things like using your own search backend and fetching/presenting statistical data.

This presentation will go through the improvements to existing functionalities and new features introduced in Oskari during the last year including:

  • Combining different types of user generated content
  • UI rewrite progress
  • Supporting mobile devices

You can try some of the functionalities Oskari offers out-of-the-box on our sample application: https://demo.oskari.org.

State of software
QFieldCloud (246)
15:30
15:30
30min
Coffee
Van46 ring
15:30
30min
Coffee
GEOCAT (301)
15:30
30min
Coffee
LAStools (327)
15:30
30min
Coffee
QFieldCloud (246)
15:30
30min
Coffee
Omicum
16:00
16:00
60min
Spontaneous growth of the 'geocompx' FOSS4G community
Jakub Nowosad

In 2016 two early-career researchers met and discussed the lack of open-access materials related to spatial data analysis with vector and raster geo data in R. A few months later, they started writing a book together which, from the first commit onwards, was done in the open. The book source code was publicly available at GitHub, updated regularly, and reproduced on every commit by continuous integration. Due to this approach, it initially attracted several contributors, one of whom became an author. Writing the book using many FOSS tools allowed us to contribute suggestions, leading to dozens of improvements upstream. The first version of Geocomputation with R (abbreviated to ‘geocompr’) was completed and published in early 2019.

'Geocompr' started as a two-person book project, but it not only attracted many readers; it also enabled online discussion through platforms such as GitHub and social media. In the last few years, the book has had a few hundred thousand readers online, gained a few official and community translations, and has been used in many academic courses and research papers. We have also started working on its second edition and on its sibling project, Geocomputation with Python.

It became clear that the 'geocompr' name was no longer appropriate for the more multilingual nature of the project, and we started using the 'geocompx' name. We hope it captures the essence of the project: eXchanging information about geocomputation, cross (X) pollination of ideas from one programming language to another, and the possibility of hosting additional content on geocomputation with (X) other languages.

Currently, the main entry point for this project is the https://geocompx.org website. It contains links to other books and materials and also hosts a blog with posts related to geocomputation, which is also open to guest writers. The 'geocompx' project is also a Discord server with discussions about various FOSS4G topics, from tools and methods to applications to solve real-life problems.

In this talk, we will share our experiences of writing an open access book, show the tools we use, and provide suggestions on how to start to contribute or create FOSS4G materials on your own.

Keynote
Van46 ring
17:00
17:00
30min
Closing ceremony

Concluding the FOSS4GE 2024 conference: some reflections on the past days, thank-yous to the sponsors and to the people who helped organize the event, and an open microphone for announcements of upcoming FOSS4G events (but please contact us beforehand).

Plenary
Van46 ring