FOSS4G 2023

No sessions on Monday, June 26, 2023.
11:00
120min
QGIS - Ask me anything!
Marco Bernasocchi, Matthias Kuhn

QGIS Chairman Marco Bernasocchi and core developer Matthias Kuhn will be available for an hour to answer any QGIS-related questions. With the two of them, interested parties have access to over 20 years of combined expert knowledge in the development, use and organisation of QGIS and QGIS-based products. Questions about specific use cases, upcoming developments or the functioning of the QGIS project with its international contributors will be tackled.

Community & Foundation
KREN 2
15:00
120min
Open Source Geospatial Tools for Humanitarian Response - HOTOSM
Yogesh Girikumar, Petya Kangalova, Synne Marion Olsen, Kshitij Raj Sharma

Open source mapping: Connect and work with the Tech & Innovation team at the Humanitarian OpenStreetMap Team (HOT)

Come and meet the Tech & Innovation team at the Humanitarian OpenStreetMap Team (HOT), hear about the status of the existing and experimental open mapping tools, and explore how you can get involved!

In this side event, you will hear about the latest open source technology developments, ranging from remote mapping with the HOT Tasking Manager and field mapping with the Field Mapping Tasking Manager (FMTM), to uploading imagery with OpenAerialMap, exporting OSM data with the HOT Export Tool, and AI-assisted mapping with fAIr, one of the most recent projects we are working on.

We want to connect with people who are working on anything from imagery to data visualization and provide a space to get hands-on and deep-dive into the various stages of the open mapping workflow. Participants may choose any workflow they would like to focus on, and in groups, deep dive into the topic of interest. We are looking for YOUR contributions. Whatever your skills/interest might be, there will be something for YOU to input and collaborate on with our team!

Registration: Anyone is welcome to drop into the session!
This event will be held on June 27 from 15:00. Please register here - https://docs.google.com/forms/d/e/1FAIpQLSdAYmcWCdZ2X7Oorx1pj-l698w4jBv6gHhV3O-cRtvqT9G1Tw/viewform - and tell us what you are most interested in working on during the session. We will reach out to you before the side event.
We look forward to meeting you in person!

Whatever your technical background is, we will find an area that matches your interest! Take a look at the HOTOSM GitHub repos: https://github.com/hotosm

Community & Foundation
UBT B / N 014 - First Floor
09:00
30min
Opening session

Opening session with institutional greetings.

Outdoor Stage
09:30
30min
The Importance of Seeding - from 3 ECTS to Shaping a better world
Marco Bernasocchi

In this keynote, we will explore the significance of seeding in the context of open-source software. Using QField as an example, we will trace the steps needed to turn a student's project into the leading fieldwork app that helps hundreds of thousands of people with their work and can help address many of the Sustainable Development Goals.

We will discuss the challenges faced during the initial stages of development and what steps played a crucial role in overcoming them. We will also highlight the importance of community and industry involvement and how these helped QField reach global success and over 800K downloads.

Through this keynote, attendees will gain insights into the role of seeding and commitment in developing and growing open-source software, highlighting its impact on innovation, collaboration, and sustainability.

Join us for an insightful discussion on planting seeds and the potential to drive positive change through open-source software.

Outdoor Stage
10:00
30min
coffee-break
Outdoor Stage
10:00
30min
coffee-break
Lumbardhi
10:00
30min
coffee-break
Drini
10:00
30min
coffee-break
Mirusha
10:00
30min
coffee-break
UBT E / N209 - Floor 3
10:00
30min
coffee-break
UBT F / N212 - Floor 3
10:00
30min
coffee-break
UBT C / N109 - Second Floor
10:00
30min
coffee-break
UBT C / N110 - Second Floor
10:00
30min
coffee-break
UBT C / N111 - Second Floor
10:00
30min
coffee-break
UBT D / N112 - Second Floor
10:00
30min
coffee-break
UBT D / N113 - Second Floor
10:00
30min
coffee-break
UBT D / N115 - Second Floor
10:30
30min
Digitizing the French railway network - an open source endeavour
Guilhem Villemin

From detecting vegetation hazards to measuring catenary geometry, it all begins with three trains equipped with LiDAR mapping systems, roaming the French railway network.

Let us see how open source geospatial software enabled us to build a cost-effective and comprehensive solution, from basic raw data processing up to setting up a geographic information system full of relevant data that takes railway maintenance to another level.

Use cases & applications
Drini
10:30
30min
Dynamic Tiling: From Cloud Optimized Raster to Map tiles
Vincent Sarago

Over recent years, cloud-optimized raster formats have gained popularity not only because they ease access but also because they enable fast visualisation of the data. During this talk I'll go over the principles of dynamic tiling and talk about the different cloud-optimized raster formats. I'll also present the latest news about TiTiler.
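
As a rough sketch of the idea (not code from the talk), rio-tiler, the library TiTiler builds on, can render a single XYZ tile from a COG on demand; the URL and tile indices below are placeholders:

    from rio_tiler.io import Reader

    # Dynamic tiling: read only the bytes of a COG needed for one XYZ tile.
    with Reader("https://example.com/data/cog.tif") as src:  # placeholder URL
        img = src.tile(4567, 2876, 13)             # tile_x, tile_y, zoom
        png_bytes = img.render(img_format="PNG")   # encoded tile, ready to serve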

State of software
Mirusha
10:30
30min
GeoServer Orientation and Demo
Ian Turton, Jody Garnett

Welcome to GeoServer, a popular web service for publishing your geospatial data using industry standards for vector, raster and mapping.

If the previous sentence made no sense to you, or if you are new to FOSS4G, or even just new to GeoServer, attend this talk to get pointed in the right direction!

This presentation provides a gentle introduction to FOSS4G and we will do our best to say the quiet part out loud:

  • Demo: We have learned from experience, and will introduce GeoServer using a demo.
  • Usage: Concepts shown with both a demo and diagrams, to connect to your data and publish it as a spatial service.
  • Checklist: Preflight checklists capturing common oversights when deploying GeoServer for the first time.
  • Value: What role GeoServer plays in your organization and what value the application provides.
  • Community: How the project is managed, and a discussion of the upcoming activities.

Attend this presentation to get a running start on using GeoServer in your organization!

Transition to FOSS4G
UBT F / N212 - Floor 3
10:30
30min
Open resources and open standards for multi source marine twinning
Piotr Zaborowski

Spatial data interoperability has been in the spotlight among Open Geospatial Consortium members for almost 30 years, but the current moment is notable for several reasons. The amount of data is growing exponentially as novel sensors bring observations from previously inaccessible areas at unprecedented resolution. We can observe and explore the global ocean with modern computational resources and AI models. Federated Data Spaces initiatives emerge with the paradigm of multi-source data integration harmoniously supporting heterogeneous models.
Speakers will present recent advancements in the data mesh methods based on two environments endorsing open source implementations used for the integrations.
First is the Federated Marine SDI (FMSDI) Pilot, which focuses on advancing the implementation of open data standards, architecture, and prototypes for use with the creation, management, integration, dissemination, and onward use of marine and terrestrial data services for the Arctic. Use cases developed in the recent phase of the FMSDI pilot further demonstrated the capabilities and use of OGC, IHO and other community standards in response to a grounding event and the evacuation of a cruise ship or research vessel in the Arctic.
The approach is collated with Iliad - Digital Twin of the Ocean and its interoperability patterns model. Based on the specific requirements for data transfer, access and computation, it looks to generalise core architectural patterns with standard implementations. These patterns address the core issues of data publishing, aggregation and extensive analyses close to the data. Together, they enable a viable overall digital twin ecosystem. Data mesh of observations with data lakes and assembly are essential building blocks that allow the flow and synchronisation of data between different data owners. An open, common information model, defined on domain-specific and well-known generic ontologies, Analysis Ready Data, and Essential Variables concepts, allows for the traceability of provenance and various expressions. It is a critical prerequisite to achieving data interoperability and explainable AI. Application packaging of processing chains allows for seamless compute-to-data, remote computation, or even mobile control when data is too big to flow. The computation is executed in a controlled environment, and the results are harmonised for further use or made available as decision-ready information.
Presenters will describe these patterns and illustrate them with OGC and partners' open implementations (like OGC-NA, EDR, geoXACML, HubOcean sync API) from the projects.

Open Standard
UBT D / N112 - Second Floor
10:30
30min
QGIS.org - The vision and mission for the next 20 years of QGIS awesomeness
Marco Bernasocchi

QGIS turned 20 years old last year. The first lines of code were written in mid-February 2002 and when the programme was first compiled and run, it could do precisely one thing:
Connect to a PostGIS database and draw a vector layer.

Nowadays, QGIS is the go-to GIS solution for millions of users, and to make sure that QGIS's future is as bright as its past, we did a lot of work on communication, strategy and outreach.
In this talk, I’ll overview all the work done, the current status and the future of QGIS and its community.

State of software
Outdoor Stage
10:30
30min
SMASH and the new survey server - state of the art
Andrea Antonello, Silvia Franceschi

SMASH, the digital field mapping application for Android and iOS that superseded the well-known app Geopaparazzi, has been around for some years now. The last two years were a positive development storm after a rather calm year and brought many fixes as well as enhancements. Examples are better PostGIS and GeoPackage support, but also some hidden gems like geocaching.
The big news is on the server side, though. A new survey server has been developed in tight cooperation with a local government agency to best create effective surveying workflows and tools for survey teams. To attract a wider developer community to contribute to the project, the Django framework was chosen for the server backend.
This presentation will give an overview of everything that has happened lately in the SMASH field mapping world.

State of software
UBT C / N109 - Second Floor
10:30
30min
Site Calibration with PROJ and WKT2
Javier Jimenez Shaw

In many projects in construction, civil engineering, mining, surveying, etc., it is common to work with coordinates referenced to a local coordinate reference system (CRS) that is established ad hoc for the project site. These CRSs are necessary for applications with requirements that cannot be fulfilled by more common and affordable GNSS surveying techniques, for example millimetre accuracy, controlled distortion, etc. In these systems, assigning coordinates to an on-site location with the highest accuracy or solely relying on the control points that define the system is a laborious process that requires specialised and expensive tools and skills (for example, knowing how to perform a point triangulation using a total station, a device commonly used in land surveying).

On the other hand, in the execution of a project not all tasks involving geolocation have such strict requirements. In many cases, geolocation can be performed by less skilled staff by means of a GNSS receiver with real time kinematics (RTK) or post processing kinematics (PPK) reducing costs and work time. Geolocation can be done even without real time or post processing kinematics if the provided accuracy is enough, requiring much cheaper equipment. However, coordinates still need to be referenced to the site local system used in the project. In georeferencing terms, the local system is completely arbitrary and disconnected from any well known CRS. A site calibration (or site localization) is the process of finding a bijection between coordinates in a well known CRS and a site local system with a minimal error in the area of interest. The problem is normally formulated as a least squares optimization of the transformation between two sets of points. This transformation allows the geolocation of new positions with cm accuracy at a fraction of the cost of other high-accuracy surveying methods.
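
For illustration only (this sketch is not necessarily the paper's exact algorithm), the least-squares similarity fit between two corresponding point sets has a closed-form SVD solution, Umeyama's method, which in NumPy looks like:

    import numpy as np

    def fit_similarity(src, dst):
        """Least-squares similarity transform (scale s, rotation R,
        translation t) mapping src onto dst; src and dst are (n, 3)
        arrays of corresponding control points."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))
        D = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            D[-1, -1] = -1.0                  # guard against reflections
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / (A ** 2).sum(axis=1).mean()
        t = mu_d - s * R @ mu_s
        return s, R, t                        # dst ≈ s * R @ src + t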

Many surveying devices provide a site calibration feature, but the algorithms are proprietary and the computed solution can only be exported to and used by software that is compatible with the closed proprietary formats involved. This effectively ties the user to the vendor ecosystem or requires performing a new and potentially different calibration for every incompatible software tool used in the project. In this paper we present a complete and interoperable solution that can be implemented purely in terms of open source software and standards. While the mathematical formulation is a well known and solved problem, to the best of our knowledge, the novelty of our approach resides in its complete openness.

Our main contribution is the precise description of the workflow involved in obtaining the mathematical solution of the site calibration problem and its representation as a self-contained coordinate reference system. The mathematical problem can be solved using any linear algebra toolbox, but we show how it can be implemented using functionality present in the open source library Eigen. As for the representation, our method relies on the OGC 18-010r7 open standard representation format [1], commonly known as WKT version 2. In this context, self-contained means that the final description of a site calibration embeds a well known CRS definition and the transformation method and parameters to transform coordinates from this system to the site local system. We have tested these coordinate transformations using several possible representations in the open source programming library PROJ version 9.2.0 [2]. The combination of WKT2 and PROJ allows for off-the-shelf interoperability for any application using them in an open and standard manner. The usage of WKT2 as a representation format is particularly convenient because it is a text-based representation that is very easy to store, transmit and process and, on top of that, is human readable. Part of the work carried out in this research has been contributed to the PROJ 9.2.0 source code, as previous versions lacked required functionality or suffered from implementation issues.
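
On the consumption side, a minimal pyproj sketch (file name and coordinates are hypothetical) shows how a self-contained WKT2 definition produced by such a workflow can be used off the shelf:

    from pyproj import CRS, Transformer

    # Load the self-contained WKT2 description of the calibrated site CRS.
    site_crs = CRS.from_wkt(open("site_calibration.wkt").read())

    # Transform GNSS coordinates (WGS 84 3D) into the site local system.
    t = Transformer.from_crs("EPSG:4979", site_crs, always_xy=True)
    e, n, h = t.transform(21.1655, 42.6629, 652.0)  # lon, lat, ellipsoidal height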

A site calibration can be solved in different ways ([3], [4]). Another important contribution of this paper is the comparison and accuracy analysis of two mathematical methods that result in two different WKT2 representations. Following the terminology presented in the third version of ISO 19111 standard [5], the first and simpler method produces a derived projected system by solving a 3D problem and relying on a PROJ-specific 3D transformation. The second one splits the problem into its horizontal and vertical components. The output is a compound coordinate reference system made of a derived projected horizontal system and vertical system with a vertical and slope derivation. This other method relies only on well known transformations registered in the EPSG Geodetic Parameter Dataset. We discuss the merits and disadvantages of each approach in terms of self-explainability of the solution and sensitivity to different types of measuring errors, in particular in the vertical axis, where GNSS receivers are known to have less accuracy.

Academic Track
UBT E / N209 - Floor 3
10:30
30min
State of Oskari
Sami Mäkinen

Oskari is used worldwide to provide web-based map applications that are built on top of existing spatial data infrastructures. Oskari offers building blocks for creating and customizing your own geoportals and allows embedding maps into other sites, controlled with a simple API. In addition to showing data from spatial services, Oskari offers hooks for things like using your own search backend and fetching/presenting statistical data.

This presentation will go through the improvements to existing functionalities and new features introduced in Oskari during the last year including:

  • Theme support
  • UI rewrite progress
  • Cloud compatibility improvements

You can try some of the functionalities Oskari offers out-of-the-box on our sample application: https://demo.oskari.org.

Link: https://oskari.org

State of software
UBT C / N111 - Second Floor
10:30
30min
State of STAC
Matthew Hanson, Matthias Mohr

The SpatioTemporal Asset Catalog (STAC) specifications are a flexible language for describing geospatial information across domains and for a variety of use cases. This talk will present the current state of the specifications, which includes the core STAC specification and the API specification built on top of OGC APIs. While the core specification has been stable for roughly two years and doesn't need a lot of updates, the API specification got numerous updates and is finally close to a stable release. This presentation digs into additions to STAC extensions and the latest community developments. We survey the updates to the open-source STAC ecosystem, which includes software written in Python, Node.js, and more. Finally, let's also look into the near future.
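
By way of a small, hypothetical example of that ecosystem, creating and validating a STAC Item with the pystac library looks roughly like this (identifiers and URLs are invented):

    from datetime import datetime, timezone
    import pystac

    item = pystac.Item(
        id="example-scene",
        geometry={"type": "Point", "coordinates": [21.16, 42.66]},
        bbox=[21.16, 42.66, 21.16, 42.66],
        datetime=datetime(2023, 6, 27, tzinfo=timezone.utc),
        properties={},
    )
    item.add_asset(
        "data",
        pystac.Asset(href="https://example.com/scene.tif",
                     media_type=pystac.MediaType.COG),
    )
    item.validate()  # checks the Item against the published JSON schemas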

State of software
Lumbardhi
10:30
30min
Tools for linking Wikidata and OpenStreetMap
Edward Betts

Editors of OpenStreetMap can use my software to search for a place or region, generating a list of candidate matches from Wikidata, which can then be checked and saved to OpenStreetMap.

Linking the two projects isn't without controversy. They use different licenses, which raises questions about what information from one project can be copied to the other.

In the presentation I will give details of a new version of the editing tool.

I will talk about the benefits of linking, the process of finding matches, the community response - including the controversy - and how people can get involved.

Open Data
UBT C / N110 - Second Floor
10:30
30min
Using Nix to build development environments as you always wanted
Ivan Minčík

This talk is going to reveal the secret of building and running development or user environments as you always wanted. Each of your projects can run in an isolated, fully self-contained environment, using the latest, or really old, or heavily customized geospatial packages, regardless of the Linux distro or Mac version you use. You can have as many environments as you want, and the environment will change as you switch between your projects, branches or commits.

No, we are not going to run containers, Flatpaks or Snaps for that. We are going to enjoy the most advanced package manager, Nix, the largest collection of software in the world called Nix packages (nixpkgs), the unique tooling they provide, and the Geonix Devenv projects built on top of that.

State of software
UBT D / N113 - Second Floor
11:00
30min
A Contemporary Nolli Map: Using OpenStreetMap Data to Represent Urban Public Spaces
Ester Scheck

More than 250 years ago, Giovanni Battista Nolli, an Italian architect, engineer and cartographer, was concerned with how and where space is or is not publicly accessible. In his map 'La nuova topografia di Roma Comasco', he mapped publicly accessible interior and exterior spaces of Rome with an impressively high level of detail as a figure-ground map. Since Nolli's time, both the character and diversity of public spaces as well as cartographic technology have changed. In my Master's thesis, I aim to adapt Nolli's underlying idea to today's circumstances on the basis of open data, and seek to develop methods for processing volunteered geographic information from OpenStreetMap (OSM) to identify, categorize, and map public spaces based on thematic and geometric information.

First, it has to be clarified what is considered public space and what is not. Given the data available via OSM, as well as in terms of feasibility, I focus on the aspect of public accessibility and exclude indoor spaces. Data processing is implemented as a Python script based on existing OSM and geospatial Python packages. The code is available as open source on GitHub. The application of the framework and methods is tested in two case studies in Vienna, Austria. The result can be visualized as a 'contemporary Nolli map'.
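
As a hint of what such processing can look like (a hypothetical sketch, not the thesis code, assuming osmnx 1.3+ and an invented tag selection), thematically filtered features can be pulled from OSM in a few lines:

    import osmnx as ox

    # Hypothetical tags approximating publicly accessible open spaces.
    tags = {"leisure": ["park", "playground"], "place": "square",
            "highway": "pedestrian"}
    public_spaces = ox.features_from_place("Vienna, Austria", tags=tags)
    print(public_spaces[["geometry"]].head())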

In my talk, I will give insights into the methodology and framework for data analysis I developed as part of my Master's thesis.

Open Data
UBT C / N110 - Second Floor
11:00
30min
An end-to-end deep learning framework for building boundary regularization and vectorization of building footprints
Simon Šanca

With increasing digitalization and automation, there is a need to develop automatic methods to maintain and update public information stored in spatial databases. The building register stores public, building-related information and is the fundamental record of buildings and other relevant data necessary for taxation, public planning, and emergency services. Up-to-date building footprint maps are essential for many geospatial applications, including disaster management, population estimation, monitoring of urban areas, updating the cadaster, 3D city modelling, and detecting illegal construction (Bakirman et al., 2022). There are many approaches for building extraction from various data sources, including satellite, aerial, or drone images and 3D point clouds. However, there is still a demand for methodologies that can extract, segment, regularize and vectorize building footprints using deep learning in an end-to-end workflow.

Today, automatic and semi-automatic methods have achieved state-of-the-art results in building footprint extraction by combining computer vision and deep learning techniques. Semantic segmentation is a method for classifying each pixel in an image and can be used to extract building footprints from remote sensing data. In the case of building segmentation, the goal is to classify each pixel in an image as belonging to its corresponding class. Recent advances in deep learning for building segmentation have drastically improved the accuracy of the segmented building masks using Convolutional Neural Networks (CNNs).

Recently proposed semantic segmentation architectures include the application of advanced vision transformers for semantic segmentation. GeoSeg is one of the open-source semantic segmentation toolboxes for various image segmentation tasks. The repository has seven different models that can be used for either multi-class or binary semantic segmentation tasks, including four vision transformers (UNetFormer, FT-UNetFormer, DCSwin, BANet) and three regular CNN models (MANet, ABCNet, A2FPN).

Deep learning methods for building segmentation involve training the neural network on a labeled image dataset, referred to as supervised learning. Semantic segmentation aims to distinguish between semantic classes in an image but does not individually label each instance. On the other hand, instance segmentation aims at distinguishing between semantic classes and the individual instances of each class. Many popular instance segmentation architectures exist, such as Mask R-CNN and its predecessors, R-CNN, Fast R-CNN, and Faster R-CNN. While the implementation of instance segmentation can be more challenging, the approach can be more effective in densely populated urban areas, where buildings may be close or overlapping.

A common problem with these methods is the irregular shape of the predicted segmentation mask. Additionally, the data contains various types of noise, such as reflections, shadows, and varying perspectives, making the irregularities more prominent. Further post-processing steps are necessary to use the results in many cartographic and other engineering applications (Zorzi et al., 2021).

The solution for the irregularity of the building footprints is to use regularization. Regularization is a technique in machine learning that applies constraints to the model and the loss function during the training process to achieve a desired behaviour (Tang et al., 2018). Applying regularization constrains the segmentation map to be smoother, with clearly defined and straight edges for buildings. As a result, the building footprint becomes less irregular when occluded and visually more appealing. Most studies apply regularization after image segmentation.

We propose an end-to-end workflow for building segmentation, regularization and vectorization using four different neural network architectures for the binary semantic segmentation task: (1) U-Net, (2) UNetFormer, (3) FT-UNetFormer and (4) DCSwin. We further improve the building footprints by applying the projectRegularization method proposed by Li et al. (2021). The technique uses a boundary regularization network for building footprint extraction in satellite images, combining semantic segmentation and boundary regularization in an end-to-end generative adversarial network (GAN). Our approach will perform semantic segmentation with our trained models and then perform boundary regularization on the segmentation masks. We aim to prove the scalability of projectRegularization on a different segmentation task, including aerial images as the data source. The last step in our approach is to develop a methodology for efficient vectorization of the segmented building mask using open-source software solutions. We aim to make the results practically applicable in any GIS environment. The dataset used for testing our method will be the MapAI dataset from the MapAI: Precision in Building Segmentation competition (Jyhne et al., 2022), arranged with the Norwegian Artificial Intelligence Research Consortium in collaboration with the Centre for Artificial Intelligence Research at the University of Agder (CAIR), the Norwegian Mapping Authority, AI:Hub, Norkart, and the Danish Agency for Data Supply and Infrastructure.
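
A minimal sketch of the segmentation-then-vectorization idea (not the study's code; the model is untrained and the file name, libraries and threshold are placeholder assumptions) using segmentation_models_pytorch and rasterio:

    import torch
    import rasterio
    import segmentation_models_pytorch as smp
    from rasterio.features import shapes

    # Untrained U-Net for illustration; a real workflow loads trained weights.
    model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                     in_channels=3, classes=1)
    model.eval()

    with rasterio.open("tile.tif") as src:   # placeholder aerial tile whose
        image = src.read().astype("float32") / 255.0  # sides are divisible by 32
        transform = src.transform

    with torch.no_grad():
        logits = model(torch.from_numpy(image).unsqueeze(0))
    mask = (logits.sigmoid().squeeze().numpy() > 0.5).astype("uint8")

    # Vectorize the (ideally regularized) mask into GIS-ready polygons.
    polygons = [geom for geom, val in shapes(mask, transform=transform) if val == 1]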

We aim to produce better representations of building footprints with more regular building boundaries. After successful application, our method generates regularized building footprints that are useful in many cartographic and engineering applications. Furthermore, our regularization and vectorization workflow is being developed into a working QGIS plugin that extends the functionality of QGIS. Our end-to-end workflow aims to advance current research in convolutional neural networks and their application to automatic building footprint extraction and, as a result, further enhance the state of open-source GIS software.

Academic Track
UBT E / N209 - Floor 3
11:00
30min
Implementation of Statistical Geoportals in Latin America and the Caribbean
Ariel Anthieni, Walter Shilman

Based on the implementation of the Global Statistical and Geospatial Framework (GSGF) proposed by the UN and implemented in Latin America and the Caribbean by the Economic Commission for Latin America and the Caribbean (ECLAC), a set of specific technological components were developed, such as a geoportal, a statistical manager and an API with the possibility of consuming information from different applications. At the same time, components already existing in the community were implemented such as Kobo Toolbox, GeoNode, Airflow, MapLibre, Nominatim and Metabase for the integration of information from the collection in the territory to the publication of the data. The project was initially carried out with a group of countries: Argentina, Paraguay, Honduras, Guatemala, Dominican Republic, Costa Rica and Ecuador.

Use cases & applications
Drini
11:00
30min
OGC API standards for geospatial data cubes
Jerome St-Louis

A presentation and demonstration of data cube functionality implemented based on OGC API Standards and draft Candidate Standards.

Including:

  • OGC API - Tiles,
  • OGC API - Maps,
  • OGC API - Coverages,
  • OGC API - Discrete Global Grid Systems,
  • OGC API - Processes - Part 1: Core, and Part 3: Workflows and Chaining ("Nested Processes", "Collection Input", "Collection Output"),
  • OGC Common Query Language (CQL2)

with a focus on providing efficient access to analysis-ready Sentinel-2 data and additional processing close to the data, in the context of wildfire risk assessment.

Open Standard
UBT D / N112 - Second Floor
11:00
30min
Openness as a strategic advantage in modern geospatial
Will Cadell

In the last decade, 5 complementary assets have intersected, creating a series of new capabilities for our community. Modern geospatial did not exist even five years ago, and openness, the combination of open standards and open data glued together with open source code, is a key contributing factor.

This talk will present the case for openness being a competitive advantage for a modern, innovative technology company. We will discuss why we have been right all along, and why we will end up being even righter in the future. If you want a solid dose of confirmation bias, this is the talk for you!

State of software
UBT C / N109 - Second Floor
11:00
30min
Packaging Geospatial Software for Debian and Ubuntu Linux
Felix Delattre

This presentation will delve into the intricacies of packaging geospatial software for Debian Linux and its derivatives, including Ubuntu, OSGeoLive, and others.

It will begin by contrasting the differences between packaging for an operating system and application-level package managers. The presentation will then provide an introduction to the Debian GIS Team and their established practices for packaging, including resources for finding information. The focus will then shift to the crucial steps involved in preparing the software for distribution, such as creating metadata and dependencies, building the package, testing its functionality, and ultimately making it available to end-users for easy installation and use.

State of software
UBT D / N113 - Second Floor
11:00
30min
Serverless Planet-scale Geospatial with Protomaps and PMTiles
Brandon Liu

Protomaps is a simple, self-hostable system for tiled vector datasets. In the year since last FOSS4G, we've rolled out a new compressed specification (V3), added support for tile generation tools, and open sourced key integrations with content delivery networks. This talk will give an overview of:

  • Why you might want to, or not want to, deploy Protomaps for your application
  • PMTiles write support in the popular Tippecanoe and Planetiler tools
  • The new open source integrations of Protomaps with AWS Lambda and Cloudflare
  • Overview of real-world deployments for users in web GIS, journalism and the public sector
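
For a flavour of the single-file model (a sketch assuming the pmtiles Python package's reader API; the archive name is a placeholder):

    from pmtiles.reader import Reader, MmapSource

    # One archive, no tile directory tree: tiles are fetched by range reads.
    with open("planet.pmtiles", "rb") as f:
        reader = Reader(MmapSource(f))
        print(reader.header())       # metadata: bounds, zoom range, tile type
        tile = reader.get(0, 0, 0)   # raw bytes of tile z=0, x=0, y=0
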
State of software
Mirusha
11:00
30min
Standardized Data Management with STAC
Batuhan Kavlak, Sam Eglington

STAC is a well-known and acknowledged spatiotemporal metadata standard within the community. There are many applications with open-source data; however, there are few adoptions by premium satellite imagery providers. At UP42, we adopted STAC as the core metadata system within our applications and provide a STAC API for users to manage their data easily. The ongoing adoption challenges with multiple data providers yielded many takeaways that we would like to share with the community.

  • UP42: a short introduction
  • Data management challenges at UP42
  • Solution with STAC & cloud-native asset format
  • STAC implementation: lessons learned
  • Current state and way forward
Use cases & applications
Lumbardhi
11:00
30min
State of GRASS GIS
Anna Petrasova, Vaclav Petras, Veronica Andreo, Markus Neteler, Martin Landa

This talk gives an overview of the current state of the GRASS GIS project for both users and developers. The latest version of GRASS includes even more tools parallelized using OpenMP to speed up massive data processing. The graphical user interface is changing as the single-window layout has matured and is becoming the number one choice and the default setting. This adds up to a quicker startup without the need for a welcome screen, and streamlined data management. The code quality of the C and C++ code improved significantly in the last year; the code compiles with strict compiler settings and we are heading towards pedantic compliance. Last but not least, this summer GRASS GIS celebrates its 40th birthday!

State of software
Outdoor Stage
11:00
30min
Suomi.fi-maps - national service implementation with Oskari platform
Arto Sinkkonen

Suomi.fi-maps offers public administration and government agencies a centralized service for utilizing maps and location data. In the Suomi.fi-maps service, a user may compile their own map views from the map layers available in the service, as well as from their own objects and materials provided by their own organization's service interfaces.
The Oskari platform is used to implement the Suomi.fi-maps system, which enables all Finnish residents to use maps and location data to find out about the services they are interested in.
In addition to other open data, the open materials of the National Land Survey may also be used: various terrain and background maps, property boundaries and aerial photographs. Users may connect their own interfaces to the Suomi.fi-maps service or add their own objects to the map to be published.
This presentation describes with examples how the Oskari platform and its features are used to implement the Suomi.fi-maps service, and lessons learned.

Use cases & applications
UBT C / N111 - Second Floor
11:00
30min
geoserverx - a new CLI and library to interact with GeoServer
Francesco Bartoli, Krishna Lodha

geoserverx is a modern Python package that provides an efficient and scalable way to interact with GeoServer REST APIs. It leverages the asynchronous capabilities of Python to offer a high-performance and reliable solution for managing GeoServer data and services.
With geoserverx, users can easily access and modify data in GeoServer, such as uploading and deleting shapefiles, publishing layers, and creating workspaces, styles, etc. The package supports asynchronous requests alongside synchronous methods for the GeoServer REST API, which enables users to perform multiple tasks simultaneously, improving performance and reducing wait times.
Apart from being used in Python projects, geoserverx also provides CLI support for all of its operations, which makes it useful for people who want to avoid Python altogether.
In this talk we'll discover for the very first time how geoserverx works and its underlying code ideology. We'll also shed some light on upcoming modules to be integrated into geoserverx.
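
To make the underlying mechanics concrete, here is not geoserverx's own API but a minimal httpx sketch of the kind of GeoServer REST calls it wraps (URL and credentials are placeholders):

    import asyncio
    import httpx

    BASE = "http://localhost:8080/geoserver/rest"  # placeholder instance
    AUTH = ("admin", "geoserver")                  # placeholder credentials

    # Synchronous flavour: list workspaces.
    r = httpx.get(f"{BASE}/workspaces.json", auth=AUTH)
    r.raise_for_status()
    print([w["name"] for w in r.json()["workspaces"]["workspace"]])

    # Asynchronous flavour: create a workspace.
    async def create_workspace(name: str) -> None:
        async with httpx.AsyncClient(auth=AUTH) as client:
            resp = await client.post(f"{BASE}/workspaces",
                                     json={"workspace": {"name": name}})
            resp.raise_for_status()

    asyncio.run(create_workspace("demo"))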

Transition to FOSS4G
UBT F / N212 - Floor 3
11:30
30min
Adding Quality Assurance to open source projects: experiences from GeoTools, GeoWebCache and GeoServer
Andrea Aime

Working in large open source projects, with several people contributing to the code, can be challenging, especially trying to keep everyone on the same page, and generating code that has enough similarities to allow shared maintenance.

The advent of platforms like GitHub also made it easier for one-time contributors to donate small and large bits of code to the platform, generating in the process a fair amount of “review stress” for the project maintainers.

The presentation covers how pull request checks, formatting and static analysis tools have been used to streamline basic checks in the code:

  • Testing the code on a variety of operating systems, Java versions and integrations with data sources before the code can be contributed to the project
  • Enforcing common formatting
  • Adding basic checks with CheckStyle
  • Locating obvious errors, leftover code, basic optimization issues using the Java compiler linting, ErrorProne, PMD and SpotBugs
  • Improving readability of the code as well as enforcing best practices and common approaches with the same tools.
  • Effects on the dynamics of code reviews

The presentation will cover all those aspects, with examples from the author’s experience with the GeoTools, GeoWebCache and GeoServer projects.

State of software
UBT D / N113 - Second Floor
11:30
30min
Bulldozer, a free open source scalable software for DTM extraction
Dimitri Lallement

This paper introduces Bulldozer, a scalable software package for extracting Digital Terrain Models (DTM) from Digital Surface Models (DSM). DTMs are useful for many application domains such as remote sensing, topography, hydrography, bathymetry, land cover maps, 3D urban reconstruction (LOD), military needs, etc. Current and upcoming LiDAR and spaceborne Earth Observation missions will provide a massive quantity of 3D data. The CO3D space mission will deliver very high resolution DSMs at large scale over emerged land. The IGN LiDAR HD mission is currently delivering high-density point clouds of the French national territory. This trend motivated the French space agency (CNES) to focus on the development of tools to process 3D data at large scale. In this context, we have developed a free open-source software package, called Bulldozer, to extract a DTM from a DSM at large scale without any exogenous data, while being robust to noisy and no-data values. Bulldozer is a pipeline of modular standalone functions that can be chained together to compute a DTM. A pre-processing step contains specific functions to clean the input DSM and prepare it for the DTM extraction, such as disturbed area detection, hole filling and outer/inner nodata management. The extraction of the DTM is then based on the original drape cloth principle, which consists of an inversion of the Digital Surface Model, followed by a multiscale representation of the inverted DSM on which an iterative drape cloth computation is applied to derive the DTM. Finally, a post-processing step is performed to obtain the final DTM.
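
To fix intuition, here is a single-scale toy version of the drape cloth principle in NumPy (Bulldozer itself works multiscale and tiled; the parameters here are made up):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def drape_cloth(dsm, gravity_step=0.5, n_iter=200):
        inv = -dsm                             # invert the DSM
        cloth = np.full_like(inv, inv.max())   # cloth starts above everything
        for _ in range(n_iter):
            cloth -= gravity_step              # gravity pulls the cloth down
            cloth = np.maximum(cloth, inv)     # collision with the surface
            cloth = uniform_filter(cloth, 3)   # internal tension smooths it
            cloth = np.maximum(cloth, inv)     # re-apply collision
        return -cloth                          # invert back: cloth ~ DTM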

We have addressed a number of limitations that this type of algorithm may encounter. Indeed, when we do 3D stereoscopic satellite reconstruction, we can observe areas of residual noise in the DSM. They mainly come from uniform areas (shadows, water), or occlusions. These outliers disturb the drape cloth: it sticks to the edges of those disturbed areas and no longer fits the relief. This results in an underestimation of the DTM and generates pits in these noisy areas. To solve this problem, we have implemented a series of pre-processing steps to detect and remove these outliers. Once these areas are removed, we use a filling function that is more elegant than a basic interpolation method (e.g. rasterio fill nodata function). In addition, after the DTM extraction, we detect and remove potential residual sinks in the generated DTM. In order to keep track of the areas that we have interpolated or filled in, Bulldozer also provides a pixel-wise quality mask indicating whether it was detected as disturbed (and therefore removed and filled in) or as interpolated following a pit detection.

Current stereo and LiDAR DSMs have a centimetric spatial resolution. However, such a high spatial resolution is not always necessary for numerous downstream DTM applications. The multi-scale approach in Bulldozer allows producing a coarser DTM by simply stopping the process earlier in the pyramid. A final resampling of the DTM is done to fulfill the user-specified resolution. One main advantage of this feature is the potentially short execution time to produce a high-quality DTM, depending on the DTM coarseness.

Another main contribution of our work is the adaptation of the original drape cloth algorithm to process DSMs of arbitrary size and from arbitrary sources. As explained in our previous paper, we introduce the concept of a stability margin in order to use a tiling strategy while ensuring results identical to those obtained if the DSM were processed entirely in memory. This tiling strategy allows a memory-aware extraction of the DTM in a parallel environment. This scalable execution is heavily based on the concept of shared memory introduced in Python 3.8 and the multi-processing paradigm.

Since our previous version we have been working on the accessibility of Bulldozer. Bulldozer can handle any input DSM as long as it is in a raster format. We have set up several interfaces to allow users of different levels to use it. A QGIS plugin was developed in order to allow novice users to use Bulldozer. For more advanced users, there is a Command Line Interface (CLI) to launch the tool from a terminal. Finally, developers can use the Python API to launch the complete pipeline or call the standalone functions from the pipeline.

The efforts to improve the algorithmic performance allow the management of large DSMs while guaranteeing stability in the results, memory usage, and runtime. Currently, we can extract a DTM from a 40000 x 70000 px input DSM in less than 10 minutes on a 16-core / 64 GB RAM node. We believe that its ability to adapt to several kinds of sensors (high and low resolution optical satellites, LiDAR), its simplicity of use, and the quality of the produced DTMs may interest the FOSS4G community. We plan to present the tool during a workshop dedicated to CNES 3D tools, but we think that the method and the algorithmic optimizations could also interest the FOSS4G Academic Track audience through an academic paper.

The project is available on GitHub (https://github.com/CNES/bulldozer), and we are currently trying to provide access to LiDAR and satellite test images in order to allow the community to reproduce the results.

Academic Track
UBT E / N209 - Floor 3
11:30
30min
DigiAgriApp: the app to manage your agricultural field
Andrea Antonello, Luca Delucchi

DigiAgriApp is a client-server application to manage different kinds of data related to farming fields. It is able to store information about crops (species, farming forms/systems, ...), any kind of sensor data (including sensor and device hardware, weather, soils, ...), irrigation information (system type, openings, ...), field operations (pruning, mowing, treatments, ...), remote sensing data (taken from different devices such as mobiles, drones and satellites) and production quantities.

The DigiAgriApp server is composed of a PostgreSQL/PostGIS database and a REST API service to interface with it. The server is developed using Django, with the Django REST framework extension and other minor extensions used to create the REST API. This service acts as the key interface between the database and the client. We chose a nested design for the API, whose main element is the farm; this way, the user sees only the farms related to them, and from there can browse other nested elements: first of all the farm's fields, and later other elements like sensor and remote data, or sub-fields like rows and plants. The REST API uses JavaScript Object Notation as input and output format to simplify and standardize communication with it.
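
A bare-bones sketch of that nested pattern with Django REST framework (not the project's actual code; model and field names are invented, and nested routing as provided by e.g. drf-nested-routers is assumed):

    from rest_framework import serializers, viewsets
    from myapp.models import Farm, Field  # hypothetical models

    class FarmSerializer(serializers.ModelSerializer):
        class Meta:
            model = Farm
            fields = ["id", "name"]

    class FieldSerializer(serializers.ModelSerializer):
        class Meta:
            model = Field
            fields = ["id", "name", "geometry"]

    class FarmViewSet(viewsets.ModelViewSet):
        serializer_class = FarmSerializer
        def get_queryset(self):
            # Users only see the farms related to them.
            return Farm.objects.filter(owner=self.request.user)

    class FieldViewSet(viewsets.ModelViewSet):
        serializer_class = FieldSerializer
        def get_queryset(self):
            # Nested route /farms/<farm_pk>/fields/ scopes fields to a farm.
            return Field.objects.filter(farm__pk=self.kwargs["farm_pk"],
                                        farm__owner=self.request.user)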

To obtain data from the sensors, the server also comprises a growing number of services to work with data providers, of which currently only a few are implemented. The Message Queuing Telemetry Transport (MQTT) provider is a daemon listening continuously to a broker (a backend system coordinating different clients) and several topics to obtain data as soon as they are provided; the second provider that is already implemented is related to remote sensing data and uses the SpatioTemporal Asset Catalogs (STAC) specification to obtain the data. STAC is a common language to describe geospatial information, so it can more easily be worked with, indexed and discovered.

The client side is developed using Flutter, an open-source UI software development kit based on Dart, a programming language designed for client development. Flutter was chosen precisely for its ability to create cross-platform applications.

All the code is released as Free and Open Source software under the GNU General Public License version 3; it is available in the DigiAgriApp repository on GitLab, and the client application will also be published in the main mobile app stores.

State of software
UBT C / N109 - Second Floor
11:30
30min
ESA-NASA-OGC Open Science Persistent Demonstrator
Anca Anghelea, Piotr Zaborowski, Manil Maskey

The Open Science Persistent Demonstrator (OSPD) is a long-term inter-agency initiative aiming to enable and communicate reproducible Earth Science across global communities of users and amplify inter-agency Earth Observation mission data, tools, and infrastructures. This talk will highlight the status and roadmap of the initiative (kicked off in 2023) and will provide an outlook on the first pilot activities of the demonstrator, as well as opportunities for participation for the FOSS4G community.
In the scope of this activity, ESA, NASA and OGC work together on the development of a long-term Open Science framework (e.g., a permanent open science demonstrator) in which participating organisations provide data, tools, and infrastructure in a coordinated approach, building on existing investments where appropriate.
In the frame of this activity, the OGC supports the Open-Source and Open Science Community by developing a persistent demonstrator that makes Open Science more tangible to a bigger audience, helps in exploring new forms of communication of scientific results to stakeholders, and helps develop the necessary standards to ensure the highest levels of interoperability across participating organizations. At the same time, it makes Earth Observation results available to other disciplines and communities, creates attention beyond the Earth Observation community, and directly impacts decision makers and political agendas.
The goal here is to demonstrate interoperable, collaborative research that allows reuse of existing components. These other resources are offered either as part of emerging Open Science Environments, as directly accessible “cloud-native” data/functions, or by means of Web APIs. To reach this goal, it is essential to empower communities of practice to share FAIR (Findable, Accessible, Interoperable, Reusable) descriptions of their resources and capabilities. To allow this system to scale, it is crucial to avoid infinite combinations of community- and application-specific metadata, functions, data and products.
One focus is the facilitation of direct participation of the scientific community as the primary users of this framework, and of the open-source-for-geospatial community as essential contributors to the activity. To handle modelling complexity, OGC, NASA and ESA will define manageable processes and best practices for communities conducting geoscience research in multiple domains using heterogeneous data and tools on a distributed infrastructure. These agreements will include, but not be limited to, standards, vocabularies, and ontologies for data and workflows, and will develop community-wide open source science mechanisms, modelling considerations and design patterns.

Use cases & applications
Drini
11:30
30min
Geo-Spatial meets Linked Data: open source solutions for semantic spatial data exchange
Luís M. de Sousa

The Ontology discipline made its way into the Computer Science domain in the 1990s, filling a gap in the architecture aspect of a still infant engineering domain. Its most visible impact happened around the industry consortium Object Management Group (OMG), leading first to the Unified Modelling Language (UML) and later to the Model Driven Architecture (MDA). MDA became the base infrastructure of data architectures and exchange mechanisms specified by institutions such as the Open Geospatial Consortium (OGC) or the European Commission (through the INSPIRE directive).

However, a parallel path has been trodden by the World Wide Web Consortium (W3C). First with the specification of the Resource Description Framework (RDF), a new paradigm for data encoding leveraging the WWW, and later with the Web Ontology Language (OWL), a pragmatic approach to ontology encoding, building on RDF. This infrastructure developed by the W3C became known as the Semantic Web, and also as Linked Data, for the innovative paradigm through which it connects disparate data sources and data domains.

The OGC would eventually approach the Semantic Web, specifying GeoSPARQL in 2013, an ontology and query language for linked geospatial data. However, technologies supporting this new standard were slow to materialise.

More recently, the specification by the OGC of a new set of data standards based on the OpenAPI technology set out a clear path for the convergence of geospatial data with the Semantic Web. New software is emerging, opening an entirely new world to geospatial data provision, a clear step forward in practicality, usability and semantics.

This address starts by reviewing the core concepts of the Semantic Web and then reviews state-of-the-art software for the management, publication and exploration of linked geospatial data. It is targeted at SDI professionals and data scientists wishing to upgrade the semantics of the data they create and use.

Open Data
UBT D / N112 - Second Floor
11:30
30min
GeoServer Cloud in depth
Gabriel Roldan

A typical GeoServer deployment involves exposing it as a front service to publish a number of layers directly to the internet, where a single instance, or even a couple, with an on-premise deployment model, is enough.

Within larger companies though, more often than not GeoServer is a critical component of a more significant infrastructure, used to host tens of thousands of layers to accommodate organization requirements across various departments and workflows that involve several other systems, and complex cloud deployments.

These scenarios are where GeoServer Cloud shines, enabling DevOps teams to set up clusters of GeoServer pods that are scalable and have improved resiliency, security, and resource utilization, as well as increased observability and integration with telemetry systems for monitoring, debugging, and tracing.

In this talk, we'll explore in depth how GeoServer Cloud achieves these goals, from technology and design choices to detailed overviews of the technical improvements that were required, supported by success stories of Camptocamp customers that have GeoServer Cloud in production.

State of software
UBT F / N212 - Floor 3
11:30
30min
MapStore, a year in review
Stefano Bovio, Matteo Velludini

MapStore is an open source product for creating, saving and sharing maps, dashboards, charts and geostories in a simple and intuitive way, directly online in your browser. MapStore is cross-browser and mobile ready; it allows users to:

  • Search and load geospatial content served using widely used protocols (WMS, WFS, WMTS, TMS, CSW) and formats (GML, Shapefile, GeoJSON, KML/KMZ, etc.)
  • Manage maps (create, modify, share, delete, search), charts, dashboard and stories directly online
  • Manage users, groups and their permissions over the various resources MapStore can manage
  • Edit data online via WFS-T with advanced filtering capabilities
  • Deeply customize the look&feel to follow strict corporate guidelines
  • Manage different application contexts through an advanced wizard to have customized WebGIS MapStore viewers for different use cases (custom plugins set, map and theme)

You can use MapStore as a product to deploy simple geoportals by using the standard functionalities it provides but you can also use MapStore as a framework to develop sophisticated WebGIS portals by reusing and extending its core building blocks.

MapStore is built on top of React and Redux, and its core does not explicitly depend on any mapping engine: it supports OpenLayers, Leaflet and Cesium, and additional mapping engines (for example MapLibre GL) could also be supported to avoid any tight dependency on a single engine.

The presentation will give the audience an extensive overview of the MapStore functionalities for the creation of mapping portals, covering both previous work and work planned for future releases. Finally, a range of MapStore case studies will be presented to demonstrate what our clients (like the City of Genova, the City of Florence, Halliburton, Austrocontrol and more) and partners are achieving with it.

State of software
Mirusha
11:30
30min
Oskari Embedded Maps and integrations with RPC API
Timo Aarnio

Oskari (https://www.oskari.org, https://github.com/oskariorg) provides a super-easy-to-use tool for creating mobile-friendly maps that can be embedded onto websites or used as is. When embedding the maps on existing websites, one can utilise the RPC API to further leverage the capabilities of Oskari. The API allows for integrating with existing services and external data sources so that the end result will be a seamless spatially enabled service running on any modern web browser.

While creating maps with Oskari requires no expertise in programming, utilising the RPC API requires basic knowledge of JavaScript. This talk will present the possibilities of the Oskari RPC API along with some examples of live services created using it.

Use cases & applications
UBT C / N111 - Second Floor
11:30
30min
State of GDAL (versions 3.6 and 3.7)
Even Rouault

We will give a status report on the GDAL software, focusing on recent developments and achievements in the 3.6 and 3.7 GDAL versions released during the last year, but also on the general health of the project.
The topics discussed will be as varied as the scope of GDAL: the new single CMake build system, full open source vector write support for the Esri FileGeodatabase format, an Arrow-based column-oriented read API for vector layers implemented in the Arrow, (Geo)Parquet, GeoPackage and FlatGeoBuf drivers, a new vector layer API for table relationship management, new raster drivers for the JPEG-XL, KTX2, BASISU and NSIDCbin formats, multi-threaded read capabilities in the GeoTIFF driver, multiple performance improvements in the GeoPackage driver, an advanced API to read raster compressed data, a new vector driver for the General Transit Feed Specification (GTFS), support for the new Seek Optimized ZIP (SOZip) specification, etc.
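
For instance, the new columnar read path is exposed in the Python bindings roughly as follows (file name is a placeholder; requires GDAL >= 3.6 built with NumPy support):

    from osgeo import ogr

    ogr.UseExceptions()
    ds = ogr.Open("example.gpkg")   # placeholder GeoPackage
    lyr = ds.GetLayer(0)

    # ArrowStream API: features arrive as column batches, not one by one.
    for batch in lyr.GetArrowStreamAsNumPy():
        print({name: len(col) for name, col in batch.items()})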

State of software
Outdoor Stage
11:30
30min
The STAC JavaScript Ecosystem + CNG Excursion
Matthias Mohr

The SpatioTemporal Asset Catalog (STAC) and Cloud Native Geospatial ecosystem for JavaScript has evolved in the last year. This talk will update you on the current state of the ecosystem and give an outlook on what is missing. For STAC, the talk will cover libraries such as stac-js, stac-layer, stac-browser, stac-node-validator, and more. We'll dive into what the libraries do, how they relate to each other, and give you some hints on how to get started. At the end, a short excursion into the cloud-native geospatial ecosystem in JavaScript for COG, GeoParquet, GeoZarr and other file formats will be provided as well.

State of software
Lumbardhi
12:00
120min
lunch
UBT E / N209 - Floor 3
12:00
90min
lunch
UBT C / N110 - Second Floor
12:00
90min
lunch
UBT C / N111 - Second Floor
12:00
90min
lunch
UBT D / N112 - Second Floor
12:00
90min
lunch
UBT D / N113 - Second Floor
12:00
90min
lunch
UBT D / N115 - Second Floor
12:00
30min
Demystifying Re:Earth: A Technical Examination of Nocode WebGIS Platform
Shinnosuke Komiya

Join us for an in-depth exploration of the technological foundations of Re:Earth, Eukarya Inc.'s open-source WebGIS platform. This 30-minute session will provide a comprehensive analysis of the underlying mechanisms that empower Re:Earth's no-code, user-friendly interface. We'll dissect the core architecture, illustrate its data handling and visualization processes, and elucidate the robust framework that facilitates plug-in development. Aimed at both technology professionals and enthusiasts, this talk offers a rigorous, detailed insight into the groundbreaking engineering that positions Re:Earth at the forefront of geospatial data interaction.

Use cases & applications
Outdoor Stage
12:00
30min
Earth-Search: A STAC API of Open datasets on AWS
Matthew Hanson

Earth-Search is a publicly available SpatioTemporal Asset Catalog (STAC) API providing an index of some of the public datasets available through the AWS Registry of Open Data (RODA), and has proven to be a valuable resource for accessing the Sentinel-2 archive as Cloud-Optimized GeoTIFFs. A new version of Earth-Search updates and enhances the Sentinel-2 metadata and adds new Collections of data available on AWS, including Landsat Collection 2, NAIP, and Sentinel-1.

This talk will include a summary of the STAC catalog, what STAC extensions are used and how the data is best accessed based on file formats. We will also dive into the datasets that are available through the API and will present the architecture for indexing including a discussion of data latency. We will provide resources and tutorials for how to get started with public geospatial datasets on AWS.
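
For instance, a hedged sketch of querying the API with pystac-client (the endpoint is the public v1 URL; bbox and dates are arbitrary examples):

```python
# Search the Earth-Search STAC API for recent Sentinel-2 L2A scenes.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[21.0, 42.5, 21.3, 42.8],       # roughly around Prishtina
    datetime="2023-06-01/2023-06-30",
    max_items=5,
)
for item in search.items():
    print(item.id, item.assets["visual"].href)  # "visual" = true-color COG asset
```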

Open Data
Lumbardhi
12:00
30min
GeoServer Feature Frenzy
Andrea Aime, Jody Garnett

GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping. It powers a number of open source projects like GeoNode and geOrchestra and it is widely used throughout the world by organizations to manage and disseminate data at scale.

What can you do with GeoServer? This visual guide introduces some of the best features of GeoServer, to help you publish geospatial data and make it look great!

GeoServer has grown into an amazing, capable and diverse program - attend this presentation for:

  • A whirl-wind tour of GeoServer and everything it can do today.
  • A visual guide to some of the best features of GeoServer.
  • Our favorite tricks we are proud of!

New to GeoServer - attend this talk and prioritize what you want to look into first. Expert users - attend this talk and see what tricks and optimizations you have been missing out on.

State of software
UBT F / N212 - Floor 3
12:00
30min
Open Data Analytics API in GeoNetwork
Olivia Guyot, Florent Gravin

In the OGC world, you have a catalog to look for metadata/datasets, and the OGC API Features to fetch the data, paginate, filter and so on.
The use cases have evolved since then, and data consumers expect more complete capabilities from their data catalogs. Nowadays we want to analyze, understand and reuse our datasets, and providing such tools is a great way to encourage data owners to share and open their warehouses. A data API could then offer:
- Full-text search on data points
- Data fetching, paging, sorting and filtering
- Data analytics, aggregation, computation
- Data joining
All of these operations should perform in an optimized and scalable manner.
Catalog search is what GeoNetwork has offered for decades now, and GeoNetwork is making the move to open data to address all those use cases.

You might have heard about columnar formats, and columnar vector formats such as Arrow and Parquet. After an introduction to the context and to the expectations of a well-shaped data API, we'll present different approaches and types of flow architectures:
- Warehouse formats
  - Static files (Parquet)
  - Index
  - Databases (PostGIS, Citus)
- API models and implementations
  - OGC API Features limitations
  - DuckDB
  - Pure SQL
and compare the different stacks in terms of efficiency for various use cases.

The final goal is to provide an API which serves search, analytics and dataviz purposes.
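
As a flavour of the static-file approach listed above, a hedged DuckDB sketch (the Parquet file and its columns are hypothetical):

```python
# Filter, aggregate, sort and page over a Parquet file directly with DuckDB.
import duckdb

con = duckdb.connect()
rows = con.execute("""
    SELECT municipality, count(*) AS n, avg(population) AS avg_pop
    FROM read_parquet('dataset.parquet')
    WHERE name ILIKE '%lake%'
    GROUP BY municipality
    ORDER BY n DESC
    LIMIT 20
""").fetchall()
print(rows)
```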

Open Data
Drini
12:00
30min
Tiling big piles of raster data using open source software and MapTiler Engine
Jachym

When publishing (raster and vector) data in the form of a web mapping application, the first step is always to prepare a cache of the data. Currently, tiled images seem to be the industry standard - and the internal format of the tiles is either PBF (for vector data) or PNG/JPEG/WebP or similar raster data formats supported by current web browsers and desktop mapping applications (e.g. QGIS).

Most of the tools out there store raster tiles in a file-system structure, using directories for the Z and X tile coordinates and file names for the Y coordinate. This is limiting for practical purposes, as on some filesystems you can easily exceed the maximum number of files. While for vector data the OpenMapTiles project seems to be well established, along with Tippecanoe and Planetiler, for raster data tiles the field of tiling possibilities is wide open.

The tiling process can be very demanding on hardware resources and time-consuming. Having the possibility to parallel process the data or even use a cluster of machines for faster tiling could be crucial for some applications.

In this talk, we will give an overview of the current possibilities for tiling, focused (but not exclusively) on raster data tiles: gdal2tiles, QGIS tile generating tools, mapproxy-seed, mapcache_seed, and others. Each of these tools has its place in the geospatial data provider ecosystem, and so does MapTiler Engine. With MapTiler Engine, users can process large amounts of geospatial data and store them in various output tile formats. It supports many input data formats and offers modifications such as output color, resolution, and more. It also supports different tile matrix sets. MapTiler Engine has a graphical user interface for easy usage, but also a command line interface, so you can make it part of a larger toolchain.
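
As one point of reference for the file-count problem mentioned earlier, a hedged sketch of writing raster tiles into a single MBTiles container with plain GDAL (file names are hypothetical):

```python
# Convert a raster to MBTiles, then pre-render the lower-zoom tiles as overviews.
from osgeo import gdal

gdal.Translate("tiles.mbtiles", "input.tif", format="MBTILES")

ds = gdal.Open("tiles.mbtiles", gdal.GA_Update)
ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])
```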

Use cases & applications
UBT C / N109 - Second Floor
12:30
12:30
90min
lunch
Outdoor Stage
12:30
90min
lunch
Lumbardhi
12:30
90min
lunch
Drini
12:30
90min
lunch
Mirusha
12:30
90min
lunch
UBT F / N212 - Floor 3
12:30
90min
lunch
UBT C / N109 - Second Floor
13:30
13:30
30min
Algorithm Talk: How to Re-project a Raster at Warp Speed
Daniel J. Dufour

We will discuss the algorithms inside geowarp, a high-performance and very low-level JavaScript library for reprojection, resampling and cropping of data from GeoTIFFs and other rasters. This talk will be at the abstract algorithmic level and is suitable for everyone. Here are some of the various algorithms that we will discuss:
- proj-turbo: fits an unknown reprojection function to a simple affine transformation
- fast-min/fast-max: calculates the range of your raster data by leveraging the theoretical limits of the data types
- near-vectorize: automatically determines the optimal resampling algorithm based on relative pixel size
- dufour-peyton-intersection: calculates the pixels of an arbitrary raster inside an arbitrary polygon
- various resampling techniques, including nearest, bilinear, vectorization, and box-based statistical methods (see the sketch below)
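
To illustrate the simplest of the resampling techniques above, nearest-neighbour, here is a minimal, library-agnostic Python sketch; this is a conceptual illustration, not geowarp's actual implementation:

```python
# Conceptual nearest-neighbour resampling (not geowarp's code): for every output
# pixel, pick the closest input pixel by scaled index.
import numpy as np

def resample_nearest(data: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    in_h, in_w = data.shape
    rows = (np.arange(out_h) * in_h / out_h).astype(int)
    cols = (np.arange(out_w) * in_w / out_w).astype(int)
    return data[rows][:, cols]

grid = np.arange(16).reshape(4, 4)
print(resample_nearest(grid, 2, 2))  # -> [[0 2], [8 10]]
```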

State of software
UBT D / N113 - Second Floor
13:30
30min
Cartographic generalization with Open Source
Mathias Gröbe

Generalization is a crucial topic in the map production process, describing the derivation of a map of a smaller scale from another one. It combines maintaining essential features and removing less important ones to offer a readable map. Often, this complex topic is reduced to a selection of attributes, creating label geometries, and simplifying line and area geometries.

The presentation shares the knowledge of the cartographer's toolkit by introducing the whole set of available generalization operators and showing less-known approaches for creating better maps. The entire collection of operators consists of simplification, smoothing, aggregation, amalgamation, collapse, merging, refinement, exaggeration, enhancement, and displacement, which can be implemented by algorithms.

The goal is to go beyond the standard approach of creating centroids for labelling and using the Douglas-Peucker algorithm for line simplification. Showcases of polygon simplification and label geometry creation demonstrate how to implement the operators using PostGIS with OpenStreetMap data, and several existing, working solutions for simplifying geometries and labels are presented to showcase the possibilities.
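
As a taste, a hedged Shapely sketch of the two baseline operators mentioned above (the talk itself demonstrates them in PostGIS; the polygon here is a toy example):

```python
# Two baseline generalization operators, sketched with Shapely on a toy polygon.
from shapely.geometry import Polygon

poly = Polygon([(0, 0), (4, 0), (4, 1), (2, 1.1), (0, 1)])

# Line/boundary simplification with the Douglas-Peucker algorithm:
simplified = poly.simplify(0.2, preserve_topology=True)

# A label point guaranteed to fall inside the polygon, unlike the centroid
# of some concave shapes:
label_point = poly.representative_point()
print(simplified.wkt)
print(label_point.wkt)
```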

Use cases & applications
UBT C / N111 - Second Floor
13:30
30min
Geo enabling your APIs with the location building blocks
Joana Simoes, Alejandro Villar, Rob Atkinson, Piotr Zaborowski

The need to integrate geospatial data into products and services has resulted in a proliferation of Free and Open Source web APIs which often do not adopt any standards, requiring more development time and leading to a lack of interoperability between solutions. For instance, a bounding box has been written in multiple ways, depending on whether developers use the coordinates of the four corners, only the upper-left and lower-right corners, latitude or longitude first, or some other variation.

The good news is that the Open Geospatial Consortium, a neutral, consensus-based organization, has been developing open standards for geospatial information. These standards are developed as building blocks, which means they could be easily incorporated into existing applications in order to enable a piece of geospatial functionality. The location building blocks are freely available to anyone to download and use.

In this presentation, we describe the conceptual model for the existing building blocks, which uses semantic annotations to define the different components. We also describe a practical example of how a building block could be integrated into an application and provide some resources for developers who want to build applications with the location building blocks.
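
To make the bounding-box ambiguity above concrete, a toy Python sketch (both orderings are real conventions seen in the wild; the normalisation helper is hypothetical):

```python
# The same extent written in two ad-hoc conventions.
bbox_lonlat = [20.9, 42.5, 21.3, 42.8]  # [west, south, east, north] (GeoJSON order)
bbox_latlon = [42.5, 20.9, 42.8, 21.3]  # latitude-first variant

def normalize_latlon(b):
    """Convert a latitude-first box to GeoJSON [west, south, east, north] order."""
    south, west, north, east = b
    return [west, south, east, north]

assert normalize_latlon(bbox_latlon) == bbox_lonlat
```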

Community & Foundation
UBT D / N112 - Second Floor
13:35
13:35
5min
eoAPI - The Earth Observation API
Vincent Sarago

eoAPI is an open source project which aims to create a full Earth Observation API, combining a STAC metadata API (stac-fastapi), a dynamic raster tile service (TiTiler) and a vector tile service (TiPg).

Using the eoAPI AWS CDK template, you are almost two command lines away from setting up your own Earth Observation services.

State of software
UBT C / N110 - Second Floor
13:40
13:40
5min
MIERUNE BASE: The geospatial service for serving and sharing datasets
IGUCHI Kanahiro

MIERUNE is a geospatial tech company in Japan. FOSS4G software is our foundation, and we continuously participate in its communities as users, developers and contributors. These days we have been building our new service - MIERUNE BASE. MIERUNE BASE focuses on easily serving and sharing datasets on a simple architecture based on serverless technology and FOSS4G. In this talk, we will introduce the architecture and techniques behind MIERUNE BASE.

State of software
UBT C / N110 - Second Floor
13:45
13:45
5min
Evaluation of the geometric accuracy of the base map 1 : 100 000 of Madagascar compared to the CASEF ortho image
Marie Anna BAOVOLA

The quality of geospatial data is generally measured by its logical consistency, completeness, positional accuracy, semantic quality, temporal quality and lineage [1]. Regarding the situation of geospatial data in Madagascar, since 1992 the old orthophotos have been attached to the national reference system, which is International 1924 with the Laborde projection. The first old orthophotos were produced during the environmental programme of the 1990s; the remaining ones were produced as part of the national land tenure security mission. However, the geometric accuracy and level of detail of the old orthophotos vary, and they do not cover the whole national territory. Although they cover a large area of about 60,000 km2, some users have noticed discrepancies of a few metres, or even more than a dozen metres, at certain points, even though the field of application is land administration. In December 2019, a ministerial order was issued to define the technical specifications of photogrammetric work in the country. According to Chapter 4, Section 14 of this specification, the accuracy of an orthophoto/orthoimage is estimated by the planimetric root mean square error (emqXY), calculated from the differences between the ground coordinates and the measured orthoimage coordinates of clearly identifiable topographic features. For orthophotos/orthoimages of urban areas, the emqXY must be better than 1 m CE90 (the circular error at the 90th percentile); for the rest of the territory, it must be better than 3 m CE90 [2].

Therefore, it is crucial not only to measure this quality, but also to control, improve and ultimately guarantee it [3]. The base map of Madagascar is the topographic map at the scale of 1:100 000. However, the average age of these maps is 60 years, so the information they contain no longer meets the needs of most users. On the other hand, orthoimages produced later appear to be much more accurate. To evaluate the accuracy of the 1:100 000 topographic map, we first identified an orthoimage that could serve as a reference. We considered the orthobase produced in 2014 from SPOT5 imagery and the control results of the orthoimage of the CASEF (Agricultural Growth and Land Security) project. The 2014 orthobase was produced within the framework of our cooperation with the La Reunion region (France), while the CASEF orthoimage was developed for land tenure security purposes in Madagascar.

To conduct this study, we tried to answer the following questions: 1) which orthoimage is the most accurate and can serve as a reference; 2) what is the average deviation of objects on the 1:100 000 topographic maps, as well as on their derived products (SCAN 100 and BD 100), compared to the reference orthoimage; and 3) is there a single set of parameters that can reposition the SCAN 100 / BD 100 on this orthoimage?

The study involved several steps, including literature review, collection of samples and examination of results from previous work. We also researched the reference data against which the BD 100, the 1:100 000 topographic maps and the SCAN 100 would be evaluated. From this comparison, we could see that the attachment of the CASEF orthoimage to the national reference system is more accurate than that of the orthobase. In addition, coordinates of identifiable geographic objects were measured on both datasets, and the differences were evaluated statistically. As for tools, since our budget for software licences is limited, we used open source geospatial software, in particular QGIS, throughout the evaluation process.

After evaluating four (04) sheets of the 1:100 000 map (Mahajanga, Antalaha, Manjakandriana and Toamasina), we quantified root mean square errors of 109.3 m, 108.6 m, 128.4 m and 51.9 m respectively. The deviations are disparate, so there is no single set of parameters to reposition the 1:100 000 topographic map. We concluded that the BD 100 should be left as it is, and that a new set of geographic databases should be developed at different scales, in particular a new version of the BD 100.

Open Data
UBT C / N110 - Second Floor
13:50
13:50
5min
Catasto-Open: open-source tools for the Italian Cadastre
Francesco Bartoli, Antonio Cerciello, Louis Nantenaina Andrianaivo

Catasto-Open is an open-source set of tools for the Italian Cadastre that manages geospatial data in a user-friendly and efficient manner. The tool is designed to store, retrieve and manipulate cadastral data, including property boundaries, ownership information, and other relevant details. By leveraging GeoServer and MapStore technologies, it allows for integration with existing GIS systems, making it a versatile and valuable resource for managing geospatial data in an OGC-compliant pipeline. The tool is accessible to a wide range of users, including government agencies, private companies, and individual property owners, and it can easily be customized to meet the specific needs of different users.

Open source geospatial ‘Made in Europe’
UBT C / N110 - Second Floor
14:00
14:00
30min
Cartographic design for vector tiles: Best practices and open-source recipes for beautiful maps
Nicolas Bozon, Petra Duriancikova

Vector tiles are changing the way we create maps. Client-side rendering offers endless possibilities to the cartographer and has introduced new map design tools and techniques. Let’s explore an innovative approach to modern cartography based on simplicity and a comprehensive vector tiles schema.

Take a tour of vector tiles cartography basics and learn about the latest trends through a number of examples illustrated with the MapTiler maps. Get an overview of best practices and learn about simple open-source recipes, towards advanced combinations of fills, patterns, fonts, and symbols. Selected layer parameters and style expressions will be discussed in a visual way and explained with basic syntax that you can take away.

Use cases & applications
UBT C / N111 - Second Floor
14:00
30min
Demystifying OGC APIs with GeoServer: introduction and status of implementation
Andrea Aime

The OGC APIs are a fresh take on geospatial APIs, based on Web API concepts and modern formats, including:

  • Small core with basic functionality, extra functionality provided by extensions
  • OpenAPI/RESTful based
  • JSON first, while still allowing to provide data in other formats
  • No mandate to publish schemas for data
  • Improved support for data tiles (e.g., vector tiles)
  • Specialized APIs in addition to general ones (e.g., DAPA vs OGC API - Processes)
  • Full blown services, building blocks, and ease of extensibility

This presentation will provide an introduction to various OGC APIs and extensions, such as Features, Styles, Maps and Tiles, STAC and CQL2 filtering.
Some have reached a final release, while some are still drafts: we will discuss their trajectory towards official status, how well the GeoServer implementation tracks them, and show examples based on the GeoServer HTML representation of the various resources.
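
As a hedged sketch of what consuming OGC API - Features looks like from a client (the base URL is a placeholder, not a specific GeoServer instance):

```python
# Walk an OGC API - Features service: list collections, then fetch some items.
import requests

base = "https://example.com/geoserver/ogc/features/v1"  # hypothetical endpoint

collections = requests.get(f"{base}/collections", params={"f": "application/json"}).json()
first = collections["collections"][0]["id"]

items = requests.get(
    f"{base}/collections/{first}/items",
    params={"f": "application/geo+json", "limit": 10},
).json()
print(first, len(items["features"]))
```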

Open Standard
UBT D / N112 - Second Floor
14:00
30min
Elephant in the room
Dennis Bauszus

There are no Free (as in Beer) and Open Source Cloud Datastores. Let's have an opinionated look at some of the better alternatives to store and modify, private and public data for spatial applications.

Having built FOSS cloud interfaces 4 Geo since forever, I decided to look at the current state of data stores.

We have pretty much figured out how to do serverless in the cloud. Data at rest, though, is a completely different beast. The going gets tough the closer you work to the metal. There is an overwhelming multitude of formats, models and standards to choose from. Should we consider relational, document, and/or column-oriented data files?

With too many to discuss we put the spotlight on some exciting new players such as bit.io and geoparquet.

A recent Panorama (BBC) report asked: is the cloud damaging the planet? Is it?

Is there anything we can do? We want to share some best practices in regards to building data store interfaces as well as running these services at scale, and in production.

Use cases & applications
UBT D / N113 - Second Floor
14:00
30min
Faster, easier, more powerful map tile creation with Tippecanoe 2.0
Erica Fischer

Development of Tippecanoe, a widely-used open-source C++ tool for creating vector map tilesets, has moved to Felt, where it is a key component of the zero-configuration data ingestion pipeline that processes Felt’s public data library layers as well as uploads from external users.

Version 2 of Tippecanoe improves its automatic choice of zoom levels, and makes visual improvements to coordinate rounding, small polygons, and the distribution of points in low zoom levels. It now runs faster and uses less memory and disk space. There are new options to generate label points for polygons, to order features by attributes, and to use Visvalingam line simplification. Tippecanoe now accepts FlatGeobuf input as well as GeoJSON and CSV, and can generate output in PMTiles as well as MBTiles.
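
A hedged sketch of a typical invocation, using long-standing flags that predate version 2 (the input file name is hypothetical):

```python
# Run Tippecanoe from Python; -zg guesses suitable zoom levels automatically.
import subprocess

subprocess.run(
    [
        "tippecanoe",
        "-zg",                       # choose zoom levels automatically
        "-o", "out.mbtiles",         # MBTiles output (v2 can also emit PMTiles)
        "--drop-densest-as-needed",  # thin the densest features at low zooms
        "input.geojson",
    ],
    check=True,
)
```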

State of software
Mirusha
14:00
30min
GeoMapFish Project Status
Yves Bolognini, J. Wolfgang Kaltz

GeoMapFish is an open source WebGIS platform developed in close collaboration with a large user group. It targets a variety of uses in public administrations and private groups, including data publication, geomarketing and facility management. OpenLayers and an OGC architecture allow the use of different cartographic engines: MapServer, QGIS Server, GeoServer. Recently new features have been added such as vector tiles integration, from raw data to visualization. In order to get rid of the AngularJS dependency, a roadmap has been established for a migration to a web components architecture. K8S support is evolving with the implementation of the necessary tools for Azure environments. A highly integrated platform, a large number of features, fine grained security and a mature reporting engine are characteristics of the GeoMapFish solution. In this talk, we will present the key usages, the state of the migration process to web components and latest functional developments, including backend - frontend decoupling allowing to plug in multiple front-end WebGIS clients.

State of software
Lumbardhi
14:00
5min
Get your own OpenStreetMap dataset running in a GeoServer instance in 2 steps
Jose Macchi

Get your preferred OSM dataset (i.e. a country) running in a local GeoServer instance with only 2 commands, avoiding any dependence on an external provider.
A simple, fast, clean solution that lowers the barrier to entry for geospatial technology use and development.

A Docker Compose setup assembles the necessary components to implement a GeoServer instance that publishes the OpenStreetMap (OSM) layers locally on a single host/machine (PostGIS is required to store the OSM layers).
Instructions for this project are based on the OSM-Styles repository, but follow a much simpler execution plan.
The steps and scripts are intended to run on Linux, Mac and Windows environments.

Use cases & applications
UBT C / N110 - Second Floor
14:00
30min
How open source tools enrich OSM with community mapping in Tanga, Tanzania
Antidius Kawamala

Open source tools have played a significant role in enriching OpenStreetMap (OSM) with community mapping in Tanga, Tanzania. These tools have enabled local communities to actively participate in mapping their own areas, which has led to a more accurate and detailed representation of the community on OSM. The use of open source tools in community mapping has also allowed for increased collaboration and sharing of data between community members, as well as with other organizations and researchers.

One such open source tool that has been used in community mapping in Tanga is QGIS. This tool has been used to create detailed maps of the community, including roads, buildings, and other infrastructure. The use of QGIS has also allowed for data analysis, which has helped community members identify areas in need of improvement and target resources more effectively.

Another open source tool that has been used in community mapping in Tanga is OpenDataKit (ODK). ODK has been used to collect data in the field, such as information on the availability of healthcare facilities and services. This data has fed into the detailed maps of the community, again helping to identify areas in need of improvement and to target resources where they are most needed.

The use of open source tools in community mapping in Tanga has also led to increased collaboration and sharing of data between community members, as well as with other organizations and researchers. For example, community members have been able to share their data with organizations working on healthcare and education projects, which has helped these organizations target resources more effectively.

Overall, the use of open source tools in community mapping has been a significant factor in the success of OSM in Tanga. By enabling local communities to actively map their own areas and to share the resulting data with other organizations and researchers, these tools have produced a more accurate and detailed representation of the community on OSM and helped improve the community and target resources more effectively.

Open Data
UBT C / N109 - Second Floor
14:00
30min
Monitoring Landfill sites from space
Saptarshi Pal

Landfill sites store waste in a secure and secluded manner, but they can cause a lot of damage to the environment by generating greenhouse gases and contaminating soils through the release of heavy metals and toxins. Monitoring the area of landfill sites from space is a challenging problem because of the huge amount of unstructured data and the unavailability of standard datasets and procedures. By combining open-source tools with geospatial data, we present a global dataset that monitors changes in landfill area. We have achieved this by developing a deep learning based segmentation model that uses multispectral satellite data and segments the landfill areas from it. In order to develop the model, we have labelled landfill sites from optical imagery from all over the world. Our current segmentation model has 31 million parameters and has achieved an accuracy of 77.6% on the test set. Currently, the dataset contains temporal data from 2021 for the major landfill sites in more than 7 countries, and it is growing daily as new data comes in. In the future, we aim to enhance this dataset with variables other than area, for instance the height of the landfill, and will also explore higher-resolution data to validate our results further.

AI4EO Challenges & Opportunities
UBT F / N212 - Floor 3
14:00
30min
Redmine Geo-Task-Tracker (GTT) Plugin
Taro Matsuzawa

The Redmine Geo-Task-Tracker (GTT) Plugin provides geospatial support for Redmine, a well-known OSS issue management system. The GTT Plugin makes it possible to attach geospatial information (points, polylines and polygons) to each issue. This is effective for managing many issues with a geospatial component (e.g. road and park management). This talk introduces its features and some use cases.

State of software
Drini
14:00
30min
State of GeoNetwork
Florent Gravin, Jeroen Ticheler

The GeoNetwork-opensource project is a catalog application facilitating the discovery of resources within any local, regional, national or global "Spatial Data Infrastructure" (SDI). GeoNetwork is an established technology - recognized as an OSGeo Project and a member of the foss4g community for over a decade.

The GeoNetwork team would love to share what we have been up to in 2023!

The GeoNetwork team is excited to talk about the different projects that have contributed new features to the software during the last twelve months. Our rich ecosystem of schema plugins continues to improve, with national teams pouring fixes, improvements and new features into the core application.

We will also talk about the UI revamp through the geonetwork-ui framework and the new perspectives it could bring to your catalogs, as well as the progress of our main branches (4.2.x) and the release schedule.

Attend this presentation for the latest from the GeoNetwork community and this vibrant technology platform.

State of software
Outdoor Stage
14:00
30min
Traffic speed modelling to improve travel time estimation in openrouteservice
Christina Ludwig

Time dependent traffic speed information on a street level is important for routing services to estimate accurate arrival times and to recommend routes which avoid traffic congestion. Still, most open-source routing machines that use OpenStreetMap (OSM) as the primary data source rely on static driving speeds for different highway types, since comprehensive traffic speed data is not openly available. In this talk, we will present a method to model traffic speed by hour of day for the street network of ten different cities worldwide and its integration in route planning using the open-source routing engine openrouteservice.

Current datasets on traffic speed are either not openly available (e.g. Google traffic layer may be viewed but not downloaded), have very limited spatial coverage or do not follow a consistent data format (e.g. data published by municipalities). In addition, these datasets are often not based on the OSM street network, which means it would require extensive map matching procedures to transfer the traffic speed information to the OSM features. The most promising data set is currently provided by Uber Movement containing hourly traffic speed data along OSM street segments in 51 cities worldwide from 2015 until 2020. Still, this data only covers roads for which enough Uber user data is available.

In recent years, several studies have proposed methods and evaluated different data sources for traffic speed modelling. Most of them model traffic speed using machine learning methods and different indicators such as OSM tags (e.g. highway=*), points-of-interest (Camargo et al., 2020), centrality indicators (Zhao et al., 2017) or social media data (Pandhare & Shah, 2017). All of these indicators proved to be suitable for modelling traffic flow, but none of these studies has evaluated the effect of the modelled traffic speed on route planning and arrival time estimation.

In this study, we modelled traffic speed by hour of day on a street level for 10 cities worldwide based on OSM tags, an adapted betweenness centrality indicator and Twitter data. Uber traffic speed data was used as reference data to train and evaluate a gradient boosting regression model with different combinations of features. The simplest baseline model only used the OSM tags highway= and maxspeed= for prediction. The additional adapted betweenness centrality indicator was calculated to identify highly frequented street segments in each city by simulating several thousands of car trips in each city. In order to consider the geographic context, the original centrality indicator calculation was adapted to consider the spatial configuration of the city by including population distribution and relevant POIs during the calculation. Finally, Twitter data was used to account for the spatio-temporal distribution of human activity within the city. Using only the timestamp and geolocation of the tweets, the number of tweets in the vicinity of a street segment aggregated by the time of day was used as an indicator. The quality of the different models was evaluated with the help of the coefficient of determination (R2), the root mean square error (RMSE) and the mean absolute error (MAE). In all cities, the Twitter indicators improved the model, although this effect was only visible for certain road types. The Twitter indicators improved the accuracy especially for construction sites and motorways. For medium sized roads such as residential streets, the prediction did not improve. The centrality indicator improved the model as well but to a lesser extent. Best results were achieved in Berlin with an RMSE of 6.58 and R2 of 0.82.
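
The model itself was trained in R; purely to illustrate the setup, a rough scikit-learn equivalent might look as follows (feature columns and data are placeholders, not the study's):

```python
# Sketch: gradient boosting regression evaluated with R2, RMSE and MAE.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical features: encoded highway tag, maxspeed, adapted centrality, tweets.
rng = np.random.default_rng(0)
X, y = rng.random((1000, 4)), rng.random(1000) * 50  # placeholder data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
pred = model.predict(X_te)

print(r2_score(y_te, pred),
      mean_squared_error(y_te, pred) ** 0.5,  # RMSE
      mean_absolute_error(y_te, pred))
```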

To use the modelled traffic speed data in route planning, an experimental traffic integration was implemented in openrouteservice, through which traffic speed data can be passed to openrouteservice as a CSV file. Each row contains the traffic speed at a certain hour of the day for a certain OSM street segment, specified by its OSM way id along with a start and end node. The data is structured the same way as the Uber Movement data, making it possible to integrate either the raw Uber data or the modelled traffic speed. The effect of using external traffic speed data on travel time estimation was evaluated by calculating multiple random car trips within different cities and at different times of the day, and comparing the results to the estimated travel times of the Google Routing API as well as the original openrouteservice implementation. In addition, the raw and the modelled traffic data were compared. The comparison between travel times in Google and openrouteservice showed regional differences in the accuracy of estimated travel times. These differences could be partly alleviated by incorporating raw or modelled traffic speed information.
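
A hedged sketch of the kind of per-segment, per-hour table described above, mirroring the Uber Movement layout; the exact column names are illustrative, not the openrouteservice specification:

```python
# Build a traffic CSV: speed per OSM way segment (way id + start/end node) and hour.
import pandas as pd

traffic = pd.DataFrame(
    [
        (4005564, 273948915, 273948916, 8, 21.4),  # rush hour: slow
        (4005564, 273948915, 273948916, 3, 47.0),  # night: near free flow
    ],
    columns=["osm_way_id", "osm_start_node_id", "osm_end_node_id", "hour", "speed"],
)
traffic.to_csv("traffic.csv", index=False)
```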

Future research on traffic speed modelling using open data includes further development of the models and their transferability to other cities for which no Uber data is available. In this regard, the potential of deep learning approaches should be evaluated. Since Twitter has stopped providing their API for free, data from other social media platforms needs to be integrated. The potential for this is high though, since only the timestamp and geolocation of each tweet are used making the general approach easily transferable.

The methods, results and software generated within this study are relevant to the FOSS4G community - in particular to the FOSS4G academic track - since it combines scientific analysis in the form of traffic speed modelling with open-source software development by integrating the results in openrouteservice. The source code for the prototypical traffic integration is available on GitHub (https://github.com/GIScience/openrouteservice). The source code of the traffic speed model will be published along with the paper.

Academic Track
UBT E / N209 - Floor 3
14:05
14:05
5min
Leveraging on OpenStreetMap (OSM) for Improved Census Data Delivery
Kumbirai Matingo

National census programs aid nations in devising better strategies for serving the population's needs and better plans for sustainability. At the same time, several developing nations around the world have not been able to deliver highly accurate census data and results to support these efforts. This leads to the implementation of policies that are not inclusive, among other limitations introduced along the way.
By leveraging open data platforms such as OpenStreetMap, open-source geo applications can be built to aid developing nations in accurate and location-driven data-capturing processes. Making digital location strategies and innovation the major component of census data collection can lead to vast growth in digital economies across developing nations and unleash endless possibilities and potential innovations which are inclusive and fit for purpose. It also provides a platform and a chance for more contributions to OSM at the national level while delivering accurate and much-needed data.

Use cases & applications
UBT C / N110 - Second Floor
14:10
14:10
5min
Offline web map server "UNVT Portable"
ShogoHirasawa

UNVT Portable is a package for RaspberryPi that allows users to access a map hosting server via a web browser within a local network, primarily for offline use during disasters. It is designed to aid disaster response by combining aerial drone imagery with OpenStreetMap and open data tile datasets.

"UNVT Portable" is a map server that allows you to freely use web maps from devices such as smartphones even in an offline environment. It is mainly designed to work in an offline environment in the event of a major disaster, and various open data tiles are prepared in advance, such as drone aerial images taken after a disaster, OpenStreetMap, and satellite images released for free by JAXA(Japan Aerospace Exploration Agency), etc. Combine sets to create the maps you need in times of disaster. We envision a use case for municipalities, etc. to understand the situation after a disaster and to respond to disasters. It is built using open source software such as Apache and MapLibre and Raspberry Pi, and is completely open source. Unlike tools such as Google Maps, which are difficult to use for secondary purposes, it is being developed as open source so that it can be released in a form that can be easily used by anyone, including local governments, international organisations and private companies.

Use cases & applications
UBT C / N110 - Second Floor
14:15
14:15
5min
Why is popularity the biggest enemy of WMS?
Marcin Niemyjski

The Web Map Service (WMS) is the most popular standard for sharing data remotely. It is commonly used for basemaps, for presenting governmental spatial data, and as a data source when creating vector datasets. Creating a WMS requires the original data to be read and then rendered. This process can be slow, especially if the source data is heavy and not optimized. This is the case, for example, with Sentinel-1 global satellite data, a collection of daily revisions with a total volume of 250 GB per day. Here we demonstrate an efficient way to share such a very large dataset as WMS using MapServer scaled with Kubernetes.

MapServer is used as the engine of our WMS because of its speed and ease of automation. In order to optimise the performance of the service, and therefore the user experience, it is recommended to store the data in the right format and file structure, while being aware of the limitations of storage, bucket or disk read speed. GDAL provides a set of options that can be executed in a single command to overwrite the original data with new, cloud-optimized copies. It is usually good practice to store selected zoom levels as a cache; for time-series data that is enriched daily, the cache is not overwritten as new data arrives, but incremented.
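
For instance, a hedged sketch of such a single-command rewrite using GDAL's COG driver (available since GDAL 3.1; file names are hypothetical):

```python
# Rewrite source imagery as a Cloud-Optimized GeoTIFF with internal overviews.
from osgeo import gdal

gdal.Translate(
    "optimized.tif",
    "original.tif",
    format="COG",
    creationOptions=["COMPRESS=DEFLATE", "OVERVIEWS=IGNORE_EXISTING"],
)
```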

Despite its popularity and advantages, WMS as a standard for serving data has its limitations. The potentially large disk read time is multiplied by the number of users sending requests. Tests using JMeter (100 users sending 100 GetMap requests in a loop) have shown that even on a relatively strong processor (32 CPUs), the greatly increased traffic acts like a distributed denial-of-service (DDoS) attack: the server stops responding.
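
A much smaller cousin of that JMeter experiment can be sketched in a few lines of Python (endpoint, layer and bbox are placeholders):

```python
# Fire 100 concurrent WMS 1.3.0 GetMap requests and count successful responses.
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/wms"  # hypothetical MapServer endpoint
PARAMS = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "sentinel1", "CRS": "EPSG:4326",
    "BBOX": "42,20,43,22",  # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": 512, "HEIGHT": 512, "FORMAT": "image/png",
}

def hit(_):
    return requests.get(URL, params=PARAMS, timeout=30).status_code

with ThreadPoolExecutor(max_workers=100) as pool:
    codes = list(pool.map(hit, range(100)))
print(codes.count(200), "of", len(codes), "requests succeeded")
```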

This problem is solved using Kubernetes (K8s), which allows metric-based automatic horizontal scaling of containerised applications, in this case MapServer. Prometheus, as a K8s cluster monitoring tool, allows custom metrics to be defined, e.g. the number of HTTP requests per time interval. Scaling on such metrics makes it possible to distribute the traffic between newly created pods so that all requests can be answered.

The aim of the talk is to stimulate discussion, confront the idea with experts and demonstrate good practice in creating a publicly accessible WMS, with a focus on optimising speed under heavy source data conditions, supported by a working example and statistics.

Use cases & applications
UBT C / N110 - Second Floor
14:20
14:20
5min
Surface Runoff Processes and Design of Erosion Control Measure in Landscape and Artificial Slopes
Martin Landa, Ondřej Pešek, Petr Kavka

Surface runoff is one of the processes with a direct impact on water erosion. It has two basic components: a) sheet runoff and b) rill runoff. Observing these phenomena at various scales and describing the observations with mathematical models plays a key role in soil protection. One of the models developed to compute these phenomena is SMODERP, used for example in the flexible and adaptive approach to land management and landscape planning called the Model of Living Landscape project. An innovative application of the SMODERP model (https://github.com/storm-fsv-cvut/smoderp2d), named SMODERP Line, is presented in this contribution. SMODERP Line is accessible through various interfaces, including the OGC Web Processing Service (WPS), which can easily be integrated into user-defined processing pipelines or web applications. Usage of SMODERP Line will be demonstrated in the QGIS environment through a new OWSLib-based QGIS WPS Client Plugin (https://github.com/OpenGeoLabs/qgis-wps-plugin).
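
A hedged sketch of calling a WPS process with OWSLib, which is what the QGIS WPS client builds on (the service URL, process identifier and input are placeholders):

```python
# Discover and execute a WPS process via OWSLib.
from owslib.wps import WebProcessingService

wps = WebProcessingService("https://example.com/wps")  # hypothetical endpoint
print([p.identifier for p in wps.processes])

# Execute a process with a single (hypothetical) input and check its status.
execution = wps.execute("smoderp-line", inputs=[("input", "line.gpkg")])
print(execution.status)
```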

This contribution was supported by grant RAGO - Living landscape (SFZP 085320/2022) and Using remote sensing to assess negative impacts of rainstorms (TAČR - SS01020366).

Use cases & applications
UBT C / N110 - Second Floor
14:30
14:30
30min
Collaborative mapping without internet connectivity
Volker Mische

This talk is about a prototype that enables collaborative mapping without any internet connectivity; only a local network is required. It runs fully in the browser, hence is cross-platform, and basically runs on any smartphone. The users form a peer-to-peer network in order to exchange their data.

It can be used in situations where there is no internet infrastructure, where it is spotty, or where it has been destroyed. In the disaster response case, only a local network, without any server infrastructure, would be needed.

In the talk you'll learn about content-addressing, WebRTC and peer-to-peer networks and of course experience a live demonstration of the prototype.

The tech stack is Svelte for the application, OpenLayers for displaying the map, IPFS for storage, and libp2p for networking. The project is licensed under the Apache/MIT licenses.

Use cases & applications
UBT D / N113 - Second Floor
14:30
30min
Evaluation and assessment of open source projects
Tero Rönkkö

The National Land Survey of Finland (NLS) is a government agency that maintains the Finnish property register and uses various administrative information systems that handle crucial data. To develop, manage, and maintain these systems, NLS follows the Business Technology Standard model and aims to publish its own production applications as open source software and to use open source applications in development when possible.

During the development of new information systems, NLS follows an agreed and approved management model and uses only components and software that meet development guidelines. Examples of such components are QGIS and PostgreSQL. However, if NLS needs to adopt and evaluate components that are not yet included in the development guidelines, it must evaluate associated open source projects, record and process considerations, and accept them in accordance with the change management process.

To evaluate the maturity of open source projects, NLS has developed a tool that continuously evolves to reflect the needs of the organization. The tool is a checklist of criteria that can be used to assess the maturity of a project and compare it to similar products. The presentation explains the items in the tool and their significance as part of the metrics.

The tool that NLS has developed could be valuable for individuals and companies in similar positions when evaluating open source projects for their needs. The experiences gained by NLS can also help improve weak points that open source software producers may not have considered in their own projects.

Transition to FOSS4G
UBT F / N212 - Floor 3
14:30
30min
IdeaMap Sudan - Building a geodata community in a data-scarce context
Andre da Silva Mano

IDeAMapSudan is a 2.5-year project finishing in March 2023. The project aims to develop a community-led geospatial database for mapping deprived urban areas (e.g., informal settlements) that will support the decision-making process for displacement and socio-economic reconstruction in Khartoum, Sudan. To that end, nine trainers from different governmental and non-governmental organizations were selected to be trained by a team of international experts from the Faculty ITC of the University of Twente, The Netherlands; the Universite Libre de Bruxelles, Belgium; and the African Population and Health Research Center, Kenya. These nine trainers were taught the essential competencies in using Free and Open Source Geospatial Software to produce, compile, curate and distribute spatial data. Once the training of the nine trainers was completed, a series of community workshops was organized so that the trainers could train local community actors in tasks related to spatial data curation in close relation to their communities. The datasets produced in this process were then used to create a deprivation model and additional open datasets that can help local communities and actors take action to mitigate several types of deprivation:
Unplanned urbanization - e.g. small, high-density, disorganized buildings
Social risk - e.g. no social safety net, crime
Environmental risk - e.g. flood zone, slopes
Lack of facilities - e.g. schools, health facilities
Lack of infrastructure - e.g. roads, bus service
Contamination - e.g. open sewer, trash piles
Land use/rights - e.g. non-residential zoning

This talk will describe three significant aspects of the project: the curriculum of competencies and the software tools used to teach them; the phases and challenges of assembling a team and infusing it with a sense of community and participation; and the importance of disseminating results and evaluating the social impact that open source software and open data can have.

Community & Foundation
UBT C / N109 - Second Floor
14:30
30min
MapComponents for your React application
Mathias Gröbe

MapComponents is an open-source framework extending React for mapping applications. It can be used to develop browser-based applications that do not require any backend, as well as web clients that use an arbitrary number of backend services. MapComponents uses MapLibre for rendering raster and vector tiles.

It provides working defaults wherever possible, enabling usage with minimal parameters. At the same time, it exposes the entire MapLibre API, allowing very granular control of the result where it is needed. Solutions for more complex and common requirements such as PDF export, a feature editor, a layer tree, a WMS loader, measure tools, or bookmarks are provided as ready-to-use and highly configurable drop-in components. More exotic requirements are covered by the swipe tool, the magnifying glass that partially shows two synchronized MapLibre instances, or components that render 3D meshes or deck.gl layers.

Layers on the map are covered by several components and code examples in our lab repository. MapComponents can be combined with a backend for managing a more extensive dataset. In addition, it also works offline as a progressive web app, with most functions available. Thanks to the declarative nature of React and its vast ecosystem of existing components, creating dashboards and complex user interfaces that combine maps and diagrams is more straightforward with MapComponents than with traditional approaches.

The presentation will show and explain an actual example and its function. MapComponents framework is available under the MIT license and developed by WhereGroup GmbH.

State of software
Drini
14:30
30min
MapTiler SDK, the MapLibre experience on steroids
Jonathan Lurie

MapTiler SDK is a TypeScript layer that adds new capabilities on top of MapLibre GL, both in terms of UI and core features. It also comes with an interface to MapTiler Cloud REST API.
The features we have added on top of MapLibre are of two kinds: many convenient helpers to make the developers' life easier, and plenty of built-in defaults that are specially made to use MapTiler data without having to specify annoying URLs or {ZXY} patterns, yet keeping it 100% backward compatible with MapLibre. In addition, all our services from MapTiler Cloud API, such as geocoding, IP geolocation, coordinate transforms, or static maps generation, are now easily accessible with well-documented TypeScript functions. All this with an open-source license.

In the talk, we are going to present the library, showing practical examples and outputs. We believe that the SDK will make web mappers' lives easier, not only through the close integration of MapTiler services, but also through the new components and the library itself.

The demo will feature some nice weather visualization we’ve been working on lately!

State of software
Mirusha
14:30
30min
Methods and challenges in time-series analysis of vegetation in the geospatial domain
Agata Elia

The increasing availability and ease of access of global, historical and high-frequency remote sensing data has offered unprecedented possibilities for monitoring and analysis of environmental variables. Recent studies in the field of ecosystem resilience relied on indicators derived from time-series analysis, such as the temporal autocorrelation and the variance of a system signal (Dakos et al., 2015). The aforementioned availability of global, temporally and spatially dense time-series of indicators of biomass and greenness of vegetation, such as the normalized difference vegetation index (NDVI) among others, has boosted ecosystem resilience scientific applications to forests as well. The ecological definition of resilience corresponds with the capacity of a system to absorb and recover from a disturbance. When dealing with ecosystems increasingly affected by natural and anthropogenic pressures such as forests, monitoring their health is particularly relevant.

Forest ecosystems play a crucial part in the global carbon cycle and in any climate change mitigation strategy, despite being increasingly affected by natural and anthropogenic pressures. While anthropogenic action on forests is mainly represented by stand replacement, natural perturbations include wind throws and fires, as well as extended insects and disease outbreaks, such as the recent outbreak affecting Central Europe. These natural disturbances are strictly interconnected with the change in climate. A forest ecosystem with decreased resilience will be more susceptible to external drivers and their change and likely to shift into an alternative system configuration by crossing a tipping point.

However, remote sensing data quantifying vegetation and forests properties inherently carry information related to the climate as well. If not accounted for, these confounding factors, such as short-term climate fluctuations, may hide the actual vegetation anomalies focus of a study and the importance of other drivers in vegetation itself. In addition, the comparison of the same vegetation property between different geographical areas naturally affected by different climates is hindered.

In order to explore the relationships between a set of environmental metrics and an indicator of forest resilience, and their relative predictive importance, a machine learning (ML) model is implemented. In this paper, we aim to present the general workflow and the challenges encountered in processing and analyzing the time-series of vegetation, climate and other environmental variables. Rather than focusing on the scientific outcomes of the implemented model, the focus of this paper is on the workflow implemented to analyze these time-series and on the methods and tools used to account for climate effects on vegetation. Deseasonalization, detrending, growing-season identification and removal of climatic confounding effects will be targeted by the presented tools and methods, keeping in mind the variety and heterogeneity of methodologies existing in the field of time-series analysis.

All data leveraged for this study are open. The long-term kNDVI is retrieved by processing the full time-series of daily MODIS Terra and Aqua Surface Reflectance at 500m from 2003 to 2021. The kNDVI is a nonlinear generalization of the NDVI that shows stronger correlations than NDVI and NIRv with forest key parameters. kNDVI is also more resistant to saturation, bias, and complex phenological cycles, and it is more robust to noise and more stable across spatial and temporal scales (Camps-Valls et al., 2021). Hourly ERA5-Land data with the same timespan at 10km are used to retrieve the set of climatic and environmental predictors including temperature, precipitation, etc. Most data are computed as 8 days averages or sums in order to retrieve resilience metrics from high temporal resolution time-series.
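
For reference, the operational form of the kNDVI proposed by Camps-Valls et al. (2021) is simple to compute; a hedged sketch with toy reflectance values:

```python
# kNDVI = tanh(NDVI^2), the paper's recommended practical formulation.
import numpy as np

def kndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    ndvi = (nir - red) / (nir + red)
    return np.tanh(ndvi ** 2)

print(kndvi(np.array([0.5]), np.array([0.1])))  # toy reflectance values
```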

The data processing takes place mainly within Google Earth Engine (GEE) and the Joint Research Centre (JRC) Big Data Analytics Platform (BDAP). Google Earth Engine is a cloud-based geospatial analysis platform providing a multi-petabyte catalog of satellite imagery and geospatial datasets coupled with large analysis capabilities (Gorelick et al., 2017). The JRC Big Data Analytics Platform is a petabyte-scale storage system coupled with a processing cluster. It includes open-source interactive data analysis tools, a remote data science desktop and distributed computing with specialized hardware for machine learning and deep learning tasks (Soille et al., 2018). GEE is mainly used to pre-process MODIS data. The ERA5 pre-processing and the core time-series analysis are performed within the JEODPP, where the main tools include R, Climate Data Operator (CDO) and netCDF Operators (NCO). The whole machine learning model is instead trained and run in R. The different platforms and tools used in the study also highlight the heterogeneity of the data involved, in terms of availability and formats, ranging from TIFF and netCDF files to R objects.

The final aim of this paper is to present one of the many workflows that can be implemented when dealing with time-series of vegetation-related data in the geospatial domain, where climate plays a crucial role as a confounding effect. The importance of the availability of open data and open source tools and platforms in making this big data analysis possible is also strongly highlighted.

Academic Track
UBT E / N209 - Floor 3
14:30
30min
Standardizing Satellite Tasking for Consumers
Matthew Hanson

One decade ago, we saw the launch of the first earth observation cubesats by Planet Labs. In the years since we have seen hundreds of satellites launched, and dozens of startup companies launching taskable satellites. While this has led to incredible opportunities to leverage multiple sensors and sensor modalities, the massive increase of data has also created challenges in data management, discovery, and usage. The community driven SpatioTemporal Asset Catalog (STAC) specification was an important step forward in exposing data to users in a standard way that enables cloud-native workflows and has been successful across government and industry.

The process of actually tasking satellites, however, is still very much non-standard; each data provider exposes a unique API, if at all. Some data aggregators have created a single tasking API that proxies and translates to multiple data provider APIs, but this is still non-standard, and proprietary.

Element 84 has been leading an effort to create a community standard API around how users order future data and how providers respond to those requests. Working with government groups, commercial satellite operators, and data integrators, we have hosted working sprints to develop a specification and open-source tooling demonstrating the power of a tasking API specification.

This talk will cover the current status of the community tasking API specification, future plans, and a demonstration of how to use the API to order data.

Open Standard
UBT D / N112 - Second Floor
14:30
30min
State of GeoServer
Andrea Aime, Jody Garnett

GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping. Choose additional extensions to process data (either in batch or on the fly) and catalog records.

GeoServer is widely used by organizations throughout the world to manage, disseminate and analyze data at scale. GeoServer web services power a number of open source projects like GeoNode and geOrchestra.

This presentation provides an update on our community as well as reviews of the new and noteworthy features for the latest releases. In particular, we will showcase new features landed in 2.22 and 2.23, as well as a preview of what we have in store for 2.24 (to be released in September 2023).

Attend this talk for a cheerful update on what is happening with this popular OSGeo project, whether you are an expert user, a developer, or simply curious what GeoServer can do for you.

State of software
Outdoor Stage
14:30
30min
Valhalla Routing Engine
Nils Nolde

Valhalla proved, since its inception in 2015, to be a valuable part of the OSM software universe, occupying an important niche in the routing section. It's arguably one of the most feature-rich open-source routing engines, serving many different use cases and integrations/deployments.

However, it is a fairly complex system which is hard to document comprehensively, and new users or developers are often overwhelmed. So, I'd like to introduce its general architecture and capabilities, and showcase "new" features (the last talk was given in 2016 at FOSS4G NA), as well as the accompanying open-source software, like various libraries, clients and Docker image(s).

State of software
Lumbardhi
14:30
5min
Visualization of accidental chemical release simulation
Sanghee Shin, Kim Jinho

Chemical incidents, such as accidents at heavy chemical plants or large-scale toxic gas leaks, are difficult to assess accurately because of the large spatial extent of the damage and the rapidly changing scope, level and targets of the damage over time. These characteristics also make it hard to conduct experiments that recreate or simulate large-scale chemical incidents in the real world. In the case of large-scale chemical accidents or releases, post-incident damage assessment is as important as prevention, but spatial ambiguity makes it difficult to assess the extent of damage to victims, and there is little way to distinguish fake victims from real ones.

In this 5-year study, we aim to combine the results of a chemical diffusion model with the location data of mobile service subscribers at the incident site over time. For this, a FOSS4G-based 3D geospatial web service using GeoServer, PostgreSQL/PostGIS, Cesium, etc. will be developed to assess the level of chemical exposure of each victim and calculate the level of damage based on it.

In 2022, the first year of the study, we developed a prototype that combines the time-dependent output of the chemical diffusion model with the time-dependent location data of individuals and successfully visualized it on a Web 3D globe. In the coming year, we'll further develop this system into an integrated risk assessment platform for chemical accidents by combining a chemical exposure assessment model and a damage calculation model.

Use cases & applications
UBT C / N110 - Second Floor
14:30
30min
“Let’s put it on the map!” A manifesto about cartographic interaction, a two-way dialogue between a user and a map mediated by a computing device.
Niene Boeijen

These days we have an incredible amount of (open-source) geospatial data, remote sensing data and insights, plus the tools to share them with the world! But when building a web map application or dashboard, we often end up with overly cluttered visualizations, confusing jargon and scary technology, or we struggle to communicate with geo-data-illiterate audiences. GIS technology can be hard to understand.

How do we design and build a map application showing a huge amount of geo-data accompanied by the elaborate functionality to discover it?
As GIS experts we think from a technological perspective, adding more and more buttons, layers, panels, pop-ups, legends, draw tools and scale bars. But these GIS terms make an application confusing, scary and technically hard to understand for the user.
On the other hand, UX and IX designers think about usability, smooth experiences and helping users to easily navigate, see, use and interpret an application. But they lack the understanding of specific map-related design requirements and map-related interactivity. Here, the map is taken for granted and is often not well designed.

I often find myself mediating between GIS and cartographic professionals, web developers, UX and IX designers, and data designers. I believe there is still a lot we can improve together!
So let’s bridge the gap and join the conversation about interactive cartography! In this talk I will give some clear, useful examples. What is interactive cartography and what can we learn from it? Be amazed by some simple examples which can quickly improve your web map application!

Use cases & applications
UBT C / N111 - Second Floor
14:35
14:35
5min
How I discovered, tested and fixed a bug in GeoDjango
Stefan Brand

Here comes a developer story about contributing to GeoDjango.

An unfortunate combination of a valid but unconventional spatial reference on the one hand, and "smart" logic for a mixed-geometry dataset on the other: geometries supposed to be located in Austria ended up in the Near East.

Investigation showed that GeoDjango's behaviour for returning the SRID of the dataset did not match its documentation (see Django ticket #34302). While fixing the issue, an incorrect type cast from None to string was additionally discovered.

In this talk you will also learn:
1. How to set up the GeoDjango test suite with a PostGIS docker container
2. What the Django code review process looks like

Use cases & applications
UBT C / N110 - Second Floor
14:40
14:40
5min
A QGIS plugin for local weather sensor data
Emanuele Capizzi

Ground-based weather sensor networks are essential for monitoring local weather patterns and climate. Integration of such data into GIS environments is critical to supporting manifold applications, including urban planning, public health studies, and weather forecasting.
These networks use scattered geolocalized sensors to measure multiple atmospheric variables (e.g. air temperature, wind speed, precipitation). Often, data is distributed online by network managers, which can be local/national authorities, private companies, or volunteers. Due to the diversity of data providers, both the formats and the access patterns of meteorological sensor data are heterogeneous, and preprocessing tasks (e.g. temporal aggregation, spatial filtering) are generally time-consuming.
Given the above, and to increase end users' exploitation of such sensor data, we present the development of an experimental QGIS plugin facilitating access to and preprocessing of openly available data from ground-based sensor networks and enabling their direct use in QGIS. The plugin is designed to use REST API connections and HTTP requests to download data. A user interface allows for selecting time intervals and the types of observation to be downloaded. Once data is retrieved, the plugin provides options for filtering, outlier removal, and time aggregation with summary statistics, as well as observation mapping into a standard GIS layer. These functionalities are only partially available in similar existing QGIS plugins. The plugin leverages FOSS Python libraries for data handling, including Pandas. The Dask parallel computing library is also exploited to speed up I/O operations on raw data.
The current version of the plugin is developed to retrieve and process weather sensor data provided by the Environmental Protection Agency of Lombardy Region (ARPA Lombardia), Northern Italy. The data retrieval is based on the Sodapy Python library, a Python client for the Socrata Open Data API. The plugin's work-in-progress source code is available at https://github.com/gisgeolab/ARPA_Weather_plugin and is released under the MIT license. The plugin is being developed within the LCZ-ODC project (agreement n. 2022-30-HH.0), funded by the Italian Space Agency (ASI), which aims to identify Local Climate Zones within the Metropolitan City of Milan.
Ongoing work includes extending the plugin's functionality to incorporate additional data providers, starting with other Italian regional ARPAs. The goal of this project is to provide a reproducible framework to access and handle weather data in QGIS, thus extending the capability of the software to support a wider range of practitioners and applications.
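As an illustration of the access pattern the plugin builds on, the following sketch pulls ARPA Lombardia sensor records through the Socrata API with Sodapy and aggregates them with Pandas; the dataset identifier and column names are placeholders for illustration:

# Sketch of Socrata access and temporal aggregation; dataset id and columns are placeholders.
import pandas as pd
from sodapy import Socrata

client = Socrata("www.dati.lombardia.it", None)  # anonymous access (throttled)
records = client.get("xxxx-xxxx", limit=5000)    # placeholder dataset id
df = pd.DataFrame.from_records(records)

df["data"] = pd.to_datetime(df["data"])                      # assumed timestamp column
df["valore"] = pd.to_numeric(df["valore"], errors="coerce")  # assumed value column
hourly = df.groupby(["idsensore", pd.Grouper(key="data", freq="1H")])["valore"].mean()
print(hourly.head())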

Use cases & applications
UBT C / N110 - Second Floor
14:45
14:45
5min
Organising Mapathons
Awania Morish

OpenStreetMap is open data that anyone can access for free. The data is contributed voluntarily by local communities and individuals. We organise mapathons to bring these contributors together for mapping. A lot of data is added during these mapathons to help vulnerable people around the globe.

Open Data
UBT C / N110 - Second Floor
15:00
15:00
30min
A Soft Transition to FOSS in a Decentral Administration
Thomas Marti, J. Wolfgang Kaltz

The public administration of the Swiss canton of Aargau chose to use OSS for the publication of all open WMS, using GeoServer-Cloud and PostgreSQL. Meanwhile, the decentral offices, which gather and style geographical data, are used to proprietary software for this purpose. The strategy chosen was to provide a soft transition to OSS by providing automated conversion processes based on a new FOSS project and by improving existing OSS with regard to styling conversions towards SLD.

Transition to FOSS4G
UBT F / N212 - Floor 3
15:00
5min
GeoAI for marine ecosystem monitoring: a complete workflow to generate maps from AI model predictions
Justine Talpaert

The world's oceans are being affected by human activities and strong climate change pressures. Mapping and monitoring marine ecosystems imply several challenges for data collection and processing: water depth, restricted access to locations, instrumentation costs and weather availability for sampling. Nowadays, artificial intelligence (AI) and open source GIS software could be combined in new kinds of workflows to generate, for instance, marine habitat maps from deep learning model predictions. However, one of the major issues for geoAI consists in tailoring the usual AI workflow to better deal with the spatial data formats used to manage both vector annotations and large georeferenced raster images (e.g. drone or satellite images). A critical goal consists in enabling computer vision model training directly with spatial annotations (Touya et al., 2019, Courtial et al., 2022) as well as delivering model predictions in spatial data formats in order to automate the production of marine maps from raster images.
In this paper, we describe and share the code of a generic method to annotate and predict objects within georeferenced images. This has been achieved by setting up a workflow which relies on the following process steps: (i) spatial annotation of raster images by editing vector data directly within a GIS, (ii) training of deep learning models (CNN) by splitting large raster images (orthophotos, satellite images) while keeping raster (image) and vector (annotation) quality unchanged, (iii) model predictions delivered in spatial vector formats. The main technical challenge in the first step is to translate standard spatial vector data formats (e.g. GeoJSON or shapefiles) into standard data formats for AI (e.g. the COCO JSON format, which is a widely used standard for computer vision annotations, especially in object detection and instance segmentation tasks) so that a GIS can be used to annotate raster images with spatial polygons (semantic segmentation). The core process of the workflow is achieved in the second step, since the large size of raster images (e.g. drone orthophotos or satellite images) does not allow their direct use in a deep learning model without preprocessing. Indeed, AI models for computer vision are usually trained with much smaller images (most of the time not georeferenced) and do not provide spatialized predictions (Touya et al., 2019). To train the models with geospatial data, both the wide geospatial raster data and the related vector annotation data thus have to be split into a large number of raster tiles (for instance, 500 x 500 pixels) along with smaller vector files sharing the exact same boundaries as the raster tiles (converted into GeoJSON files). By doing so, we successfully trained AI models using spatial data formats for both raster and vector data. The last step of the workflow consists in translating the predictions of the models into geospatial vector polygons, either on small tiles or on large images. Finally, different state-of-the-art models, already pre-trained on millions of images, have been tuned through a transfer learning strategy to create a new deep learning model trained on tiled raster images and matching vector annotations.
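A minimal sketch of the tiling step described above, using rasterio to split a large georeferenced raster into 500 x 500 pixel tiles while preserving the georeferencing (file names are illustrative and a tiles/ directory is assumed to exist):

# Split a large raster into georeferenced 500 x 500 tiles.
import rasterio
from rasterio.windows import Window

TILE = 500
with rasterio.open("orthomosaic.tif") as src:
    for row in range(0, src.height, TILE):
        for col in range(0, src.width, TILE):
            window = Window(col, row,
                            min(TILE, src.width - col),
                            min(TILE, src.height - row))
            profile = src.profile.copy()
            profile.update(width=window.width, height=window.height,
                           transform=src.window_transform(window))  # keep georeferencing
            with rasterio.open(f"tiles/tile_{row}_{col}.tif", "w", **profile) as dst:
                dst.write(src.read(window=window))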
We will present and discuss the results of this generic framework, which is currently being tested in three different applications related to marine ecosystem monitoring at different geographic scales: orthomosaics made of underwater or aerial drone images (for coral reef habitat mapping) and satellite images (for fishing vessel recognition). However, this method remains valid beyond the marine domain. The first use case was done with underwater orthomosaics of coral reefs made with a photogrammetry model and annotated with masks. This dataset was composed of 3 different orthomosaic acquisition sites. The second use case concerned the recognition of species and habitats from geolocated underwater photos collected in different Indian Ocean lagoons. The last implementation of this method was done using satellite images of fishing harbors in Pakistan, where vessels were labeled with bounding boxes. For the three use cases, model metrics are currently weak compared to similar computer vision tasks in the terrestrial domain, but they will be improved with better training datasets in the coming years. Nevertheless, the technical workflow which manages spatialized predictions has been validated and already provides results which prove that AI-assisted mapping can add value to different types of marine images. Special attention is paid to large objects that can be spread over several tiles when splitting the raster. In this case, the model can indeed make errors by predicting different classes for parts of the same object. Thus, a decision rule must make it possible to choose the most probable class among the different classes predicted by the model to designate the whole object. The spatialization of the model's results can then be decisive for reducing misclassified objects.

The code is implemented with free and open source software for geospatial and AI. The whole framework relies on Python libraries for both geospatial processing and AI (e.g. PyTorch) and will be shared on GitHub and assigned a DOI on Zenodo, along with sample data. Moreover, a QGIS plugin is under development to facilitate the use of pre-trained deep learning models to automate the production of maps, whether from underwater orthomosaics, simple georeferenced photos or satellite images.

Beyond the optimization of model scores, one of the major perspectives of this work is to improve and ease AI-assisted mapping, as well as to include spatial information as input variables into a multi-channel deep learning model to make the most from spatial imagery (Yang et Tang, 2021, Janowicz et al, 2020).

Academic Track
UBT E / N209 - Floor 3
15:00
30min
GeoNetwork Orientation and Demo
Jody Garnett, Jonna Bosch

Welcome to GeoNetwork and FOSS4G! GeoNetwork is a leading open-source web catalog for keeping track of spatial information.

This is an orientation session, so if you are new to FOSS4G we can help explain how everything fits together and how the pieces of the puzzle form a whole. If you are migrating from an ESRI environment, this is a critical talk to attend, as open source technology is often presented in isolation.

Jody is an experienced open source developer who digs into how this technology works. Jonna is part of the QGIS community, looking at how to use GeoNetwork successfully.

This presentation shares our findings and experience with you, and touches on what makes GeoNetwork succeed:

  • We look at what GeoNetwork is for, the business challenge it faces, and the amazing technical approach taken by the technology.
  • We demo the publishing workflow to see what is required, and look at how harvesting can jump-start your catalog contents.
  • We peek under the hood at how the editor works, and discover the central super-power of GeoNetwork.
  • We look at examples of how GeoNetwork has been extended by organizations to see what is possible with this technology.

GeoNetwork is an established technology – recognized as an OSGeo project and a member of the FOSS4G community for over a decade. We would love to welcome you to the conference and share what this project has to offer.

Transition to FOSS4G
Drini
15:00
30min
How to secure pygeoapi and streamline protected OGC APIs
Francesco Bartoli, Antonio Cerciello

Securing a modern API in an effective way is critical to prevent unauthorized access and ensure the privacy and integrity of data. In general, there are three common mechanisms that can be used for API security: API keys, OAuth2/OpenID Connect, and JSON Web Tokens (JWT). Each of these mechanisms provides a different level of security and flexibility, depending on the requirements of the API. Modern OGC APIs are agnostic and rely completely on the adoption of OpenAPI security schemes so the implementers can use the mechanism that perfectly fits with their requirements.
fastgeoapi is a new open-source tool designed to act as an authentication and authorization layer on top of a vanilla pygeoapi, offering out of the box a secured infrastructure that is easily pluggable and configurable through the standard OpenID Connect protocol.
This talk aims to describe the recipe to configure and protect a vanilla pygeoapi with Keycloak and Open Policy Agent in order to publish secured OGC APIs in a standard manner.
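As an illustration of the mechanism involved (not fastgeoapi's internal code), the sketch below validates a Keycloak-issued JWT against the realm's published JWKS with PyJWT; the realm URL and audience are assumptions:

# Validate an OIDC bearer token against a Keycloak realm's JWKS (illustrative).
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://keycloak.example.com/realms/demo/protocol/openid-connect/certs"

def validate(token: str) -> dict:
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience="pygeoapi")

# claims = validate(bearer_token)  # raises if the signature or claims are invalid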

Open Standard
UBT D / N112 - Second Floor
15:00
30min
Investigating war crimes, animal trafficking, and more with open source geospatial data
Logan Williams

At Bellingcat, a non-profit investigative organization in the Netherlands, we research war crimes, find tiger smugglers, monitor environmental degradation and track extremist hate. To do this, we use "open sources", including public databases, social media posts, and a wide range of geospatial data and tools. The use of these new online sources has dramatically changed investigative journalism and humanitarian accountability research in the past five years, and there remains tremendous potential for further development, especially in the geospatial realm.

In this talk, Bellingcat data scientist Logan Williams will present case studies from our research to illustrate how invaluable open source geospatial tools and data are for "open source" investigative research. Some of the most useful tools for investigators are designed for very different purposes, from academic meteorology to outdoor recreation. Additionally, some of Bellingcat's own FOSS geospatial tools, based on OpenStreetMap and Copernicus satellite data, will be showcased. Finally, the talk will discuss opportunities for deepening the connections between the open source geospatial community and the open source investigation community.

Use cases & applications
UBT C / N111 - Second Floor
15:00
5min
Measuring Water Level Changes in Reservoir using Jason-3 Altimetry Mission
Aman Bagrecha

This talk will describe the use of Jason-3 altimeter data, which records the topographic height of the Earth's surface every ~10 days, to help measure changes in the water level of reservoirs across the globe. The use of the NASA Common Metadata Repository (CMR) API to search, subset, and download the data is described, along with navigating the maze of Jason-3 Level-2 products depending on the use case.
This talk introduces this open dataset and various other altimetry missions, allowing for multi-mission monitoring of the world's reservoirs. It further uses Free and Open Source Software (CMR, Xarray) to pre-process the data for use.
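For a flavour of the CMR workflow, the sketch below searches for Jason-3 granules over a time window with plain HTTP; the collection short name is an example, so check CMR for the exact Level-2 product matching your use case:

# Search NASA CMR for Jason-3 granules; the short_name is an example value.
import requests

params = {
    "short_name": "JASON_3_L2_OST_OGDR_GPS",  # example Jason-3 Level-2 product
    "temporal": "2022-01-01T00:00:00Z,2022-01-10T23:59:59Z",
    "page_size": 10,
}
r = requests.get("https://cmr.earthdata.nasa.gov/search/granules.json",
                 params=params, timeout=30)
r.raise_for_status()
for granule in r.json()["feed"]["entry"]:
    print(granule["title"])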

Use cases & applications
UBT D / N113 - Second Floor
15:00
30min
State of GeoNode
Alessio Fabiani, Giovanni Allegri

This presentation will introduce attendees to GeoNode's current capabilities and to some practical use cases of particular interest, in order to also highlight the possibilities for customization and integration. We will provide a summary of the new features added to GeoNode in the last release, together with a glimpse of what we have planned for next year and beyond, straight from the core developers.

State of software
Outdoor Stage
15:00
30min
State of GeoRasterLayer (for Leaflet)
Daniel J. Dufour

We will discuss the state of GeoRasterLayer, a JavaScript library that renders GeoTIFFs directly on a LeafletJS map without a server. This will include an introduction to new features, including the following:
- shifting warping off the main thread to a pool of web workers
- improved support for extent calculations by increasing vertex density of polygon representations of bounding boxes
- high-resolution support by using geowarp

We will also look to the future and discuss the following:
- support for raster types other than GeoTIFF/COG
- geozarr support
- similar integrations into other web mapping libraries

Audience feedback and ideas will be most welcome!

State of software
Lumbardhi
15:00
5min
Synchronising data updates with Kart version control
Robert Coup

Do you get regular data-drops from suppliers and struggle with viewing changes between releases and keeping everything synchronised? In this talk we'll explain how, from both a consumer and a publisher point of view, you can use Kart to make your life easier.

We’re drowning in data, but the geospatial world lags badly behind in versioning tools compared to our software counterparts. Kart (https://kartproject.org) is solving this with a practical open tool for versioning datasets, enabling you to work more efficiently and collaborate better.

Kart allows you to quickly and easily manage history, branches, data schemas, and synchronisation for large & small datasets between different working copy formats, operating systems, and software ecosystems.

Modern version control unlocks efficient collaboration, both within teams and across organisations: everyone stays on the same page, you can review and trace changes easily, and ultimately you use your time more efficiently.

Open Data
UBT C / N110 - Second Floor
15:00
30min
The template for a Semantic SensorThings API with the GloSIS use case
Luís M. de Sousa

Motivation:
The development of Spatial Data Infrastructures (SDI) for the exchange of environmental data has heretofore been greatly shaped by the standards issued by the Open Geospatial Consortium (OGC). Based on the Simple Object Access Protocol (SOAP), services like WMS, WFS, WCS and CSW became digital staples for researchers and administrative bodies alike.

In 2017 the Spatial Data on the Web Working Group (SDWWG) questioned the overall
approach of the OGC, based on the ageing SOAP technology
[@SDWWG2017]. The main issues identified by the SDWWG can be summarised as:

  • Spatial resources are not identified with URIs.
  • Modern API frameworks, e.g. OpenAPI, are not being used.
  • Spatial data are still shared in silos, without links to other resources.
  • Content indexing by search engines is not facilitated.
  • Catalogue services only provide access to metadata, not the data.
  • Data are difficult for non-domain-experts to understand.

To address these issues the SDWWG proposed a five-point strategy inspired by the Five Star Scheme [@BernersLee2006]:

  • Linkable: use stable and discoverable global identifiers.
  • Parseable: use standardised data meta-models such as CSV, XML, RDF, or JSON.
  • Understandable: use well-known, well-documented, vocabularies/schemas.
  • Linked: link to other resources whenever possible.
  • Usable: label data resources with a licence.

The work of the SDWWG triggered a transformational shift at the OGC towards specifications based on OpenAPI. But while convenience of use has been the focus, semantics has been largely unheeded. A Linked Data agenda has not been pursued.

However, OpenAPI opens the door to an informal coupling of OGC services with the Semantic Web, considering the possibility of adopting JSON-LD as the syntax of OGC API responses. The introduction of a semantic layer to digital environmental data shared through state-of-the-art OGC APIs is becoming a reality, with great benefits for researchers using or sharing data.

This communication lays down a simple SDI set up to serve semantic environmental data through a SensorThings API created with the grlc software. A use case is presented with soil data services compliant with the GloSIS web ontology.

SensorThings API:

SensorThings API is an OGC standard specifying a unified framework to interconnect Internet of Things resources over the Web [@liang2016ogc]. SensorThings API aims to address both semantic and syntactic interoperability. It follows ReST principles [@fielding2002principled] and promotes data encoding with JSON, the OASIS OData protocol [@chappell2011introducing] and URL conventions.

The SensorThings API is underpinned by a domain model aligned with the ISO/OGC standard Observations & Measurements (O&M) [@Cox2011], targeted at the interchange of observation data of natural phenomena. O&M puts forth the concept of Observation as an action performed on a Feature of Interest with the goal of measuring a certain Property through a specific Procedure. SensorThings API mirrors these concepts with Observation, Thing, ObservedProperty and Sensor. This makes the SensorThings API a vehicle for the interoperability of heterogeneous sources of environmental data.

grlc:

grlc (pronounced "garlic") is a lightweight server that translates SPARQL queries into Linked Data web APIs [@merono2016grlc] compliant with the OpenAPI specification. Its purpose is to enable universal access to Linked Data sources through modern web-based mechanisms, dispensing with the use of the SPARQL query language. While losing the flexibility and federative capacities of SPARQL, web APIs present developers with an approachable interface that can be used for the automatic generation of source code.

A grlc API is constructed from a SPARQL query to which a metadata section is prepended. This section is declared with a simplified YAML syntax within a SPARQL comment block, so the query remains valid SPARQL. The metadata provide basic information for the API set-up and, most importantly, the SPARQL end-point on which to apply the query. The listing below shows an example.

#+ endpoint: http://dbpedia.org/sparql

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?band_label { 
    ?band rdf:type dbo:Band ;
          dbo:genre dbr:Hard_Rock ;
          rdfs:label ?band_label .
} ORDER BY ?band_label

A special SPARQL variable formulation is used to map variables into API parameters. By adding an underscore (_) between the question mark and the variable name, grlc is instructed to create a new API parameter. A prefix separated again with an underscore informs grlc of the parameter type. The ?band_label variable in the listing above can be expanded to ?_band_label_iri to create a new API parameter of type IRI.
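Once deployed, the generated API can be called like any other web API. The sketch below is a hypothetical request to such an endpoint, assuming a deployment base URL and path layout; only the parameter name follows from the convention above:

# Hypothetical call to a grlc-generated endpoint; base URL and path are assumptions.
import requests

r = requests.get(
    "https://grlc.example.org/api/my-user/my-queries/bands",
    params={"band_label": "http://dbpedia.org/resource/AC/DC"},  # IRI-typed parameter
    headers={"Accept": "application/json"},
    timeout=30,
)
print(r.json())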

Use case: GloSIS:

The Global Soil Partnership (GSP) is a network of stakeholders in the soil
domain established by members of the United Nations Food and Agriculture
Organisation (FAO). Its broad goals are to raise awareness to the importance of
soils and to promote good practices in land management towards a sustainable
agriculture.

Acknowledging difficulties in exchanging harmonised soil data as an important
obstacle to its goals, the GSP launched in 2019 an international consultancy to
assess the state-of-the-art and propose a path towards a Global Soil Information
System (GloSIS) based on a unified exchange. A domain model resulted, based
on the ISO 28258 standard for soil quality [@SchleidtReznik2020], augmented with
code-lists compiled from the FAO Guidelines for Soil Description [@Jahn2006].
This domain model was then transformed into a web ontology, relying on the Sensor, Observation, Sample, and Actuator ontology (SOSA) [@Janowicz2019] and other Semantic Web standards such as GeoSPARQL, QUDT and SKOS. The GloSIS web ontology has been successfully demonstrated as a vehicle to exchange soil information as Linked Data [@GloSIS].

A prototype API for the GloSIS ontology, formulated in compliance with the SensorThings API specification, will be presented in this communication. It demonstrates how the same set of SPARQL queries can be used to query, through a ReST API, any end-point available over the internet sharing linked soil data in accordance with the GloSIS ontology, thus providing a clear step towards the federated and harmonised system envisioned by the GSP.

Use cases & applications
UBT C / N109 - Second Floor
15:00
30min
TiPg: a Simple and Fast OGC Features and Tiles API for PostGIS.
Vincent Sarago

Following the work we did with TiTiler (a Python module designed to create raster services), we decided to develop the same kind of project for vector data. Using Postgres/PostGIS as the datasource and FastAPI/Pydantic as the web framework, TiPg is a lightweight application which users can include in their own FastAPI application and easily customize.

During this session I'll go over the design principles of the TiPg Python module and also show some of its great features.
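A minimal sketch of running TiPg standalone, assuming the application entry point and DATABASE_URL setting described in the project's README (check the docs for current details):

# Run TiPg against a PostGIS database; entry point and setting are assumed from the README.
import os
os.environ["DATABASE_URL"] = "postgresql://user:pass@localhost:5432/gis"

import uvicorn
from tipg.main import app  # assumed entry point

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8081)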

State of software
Mirusha
15:05
15:05
5min
Development of maplibre applications in sveltekit
Jin Igarashi

Recently, SvelteKit has become a more popular framework for developing web applications; version 1.0.0 was released last December. However, there are still not many examples of developing MapLibre applications in SvelteKit compared to other frameworks like React. The author is involved in developing a MapLibre application with SvelteKit at the United Nations Development Programme (geohub), and also develops SvelteKit-based Web-GIS applications for water asset management in Eastern African countries (watergis). Hence, several useful MapLibre boilerplates and components were developed in SvelteKit during these projects. watergis/sveltekit-maplibre-boilerplate is a template for starting to develop a MapLibre application in SvelteKit with minimal source code. Furthermore, watergis/svelte-maplibre-components consists of various useful MapLibre components to easily add more functionality to your web application (all components are documented). For instance, this component library provides features for exporting maps, adding legends, styling maps, sharing maps, measuring distances, and integrating with the Valhalla API. In this talk, these MapLibre boilerplates and components will be briefly introduced.

Use cases & applications
UBT D / N113 - Second Floor
15:05
5min
GIS-based intelligent decision making support system for the disaster response of infectious disease
MIN YOUNG LEE

[Background and Purpose]
There are currently more than 7.5 million workers worldwide working in the field of fire, medical, and various emergency services, with a total budget exceeding 400 billion euros. Additionally, approximately 15 billion euros are spent on equipment and other needs.
Water pollution caused by downpours and climate change has a fatal impact on our health, and the number of waterborne diseases continues to increase both domestically and internationally. Therefore, the significance of technology which properly responds to the various disasters caused by the climate crisis is increasing. While many technologies for natural disasters have been developed, there is no comparable technology development and response platform for biological and medical risks, which are considered social disasters.

Furthermore, we aim to develop rapid and accurate pathogen detection technology encompassing situational awareness, control/response methods, risk assessment, and epidemiological investigation methods. Eventually, by combining all these methods, we want to establish a user-centered GIS platform.

[Methods]
A decision support system that manages pathogen contamination was developed using information obtained from sensors and the field. Moreover, risk assessment and epidemiological investigation technology developed through artificial intelligence and big data were included. The following three technologies were applied to analyze contaminated areas.

First, it provides a preview of data taken by satellites and collects images of aquatic regions to analyze and report the degree of pollution. Moreover, the turbidity of the water is derived from the data of aquatic regions, which are continuously imaged. Lastly, it also builds a water quality monitoring system based on data analyzed from water samples acquired by drones.

These images were taken from regions that humans cannot easily access. The technology provides both the spatial analysis results and the images to users. Data and photos on social media are also analyzed to provide the severity of water pollution along with specific spatial locations. To effectively provide and manage information on the platform, the system consists of seven layers: source management, data collection, interoperability, data harmonization, data application, data processing, and security. All components in the data collection, interoperability, data harmonization, and security layers provide geographic information and statistics for users.

[Results]
Considering the functions of the system, the platform can be applied in three fields: "Detection of pathogens and water pollution/situational response/post-investigation", "Infection management and decision support system", and "Protection and management of the first responder".

Two test locations were selected, and a pilot case study was conducted in each.

Limassol Pilot Case Study
An earthquake near Limassol caused flash floods and landslides, polluting the Kouris Dam, which is a primary reservoir in Limassol.

  • The water pollution over time can be checked by satellite images analyzed through PathoSAT. The turbidity and temperature of water detected by PathoSENSE and the results of satellite image analysis can be checked on the PathoGIS platform.

  • The user can check the areas heavily affected by flooding, and the magnitude of the tide is visualized on the PathoGIS data panel through graphs.

  • A warning alert appears when the pollution level exceeds the threshold, so the user can check it. If victims report the locations of polluted areas on Twitter, these can be checked through PathoTweet.

Korean Pilot Case Demonstration
A person in close contact with ASF-infected wild pigs visited a farm near Soyang Dam, and all pigs on the farm had to be removed due to mass infection. Inevitably, many ASF-positive cases were reported near the Soyang River.

  • Due to the unusually high precipitation in summer, the Soyang River Dam overflowed, causing leachate to leak into the reservoir.

  • PathoSAT satellite images can be used to identify the boundaries of areas that can be potentially damaged by flooding. The time series visualization shows that water pollution is more severe near the location of ASF-positive cases.

  • Since the government needs to respond to the rapidly increasing number of ASF cases, the results of ASF case analysis can be checked using the analysis application. Using such data to prevent African swine fever from spreading south, analysts can determine the optimal distance from the SLL (Southern Limit Line), CLL (Civil Defense Limit Line), and primary fence, or the need for additional fence installation.

  • As the number of ASF-positive cases increases, pollution in tap water can easily be found in Seoul, since a large portion of its water originates from the Soyanggang Dam.

  • The PathoSENSE turbidity sensor reports the current water pollution situation. If the turbidity of tap water increases, Twitter reports on health problems in Seoul increase simultaneously.

[Conclusion]
A platform that contains a database related to the spread of pathogens and provides AI-based information regarding the dangers of the situation will certainly help in responding to infectious diseases.

It will strengthen the ability to respond to infectious diseases and disasters by serving as a tool to improve the capabilities of first responders and to reduce the time required to detect and respond to a situation.

In particular, industrial accidents will be reduced by improving the ability to respond to the unidentified risk situations that field first responders are likely to encounter.

In the near future, when database expansion and the cost of maintenance become stable, in-depth analysis of epidemiological big data will be possible using pattern recognition and deep learning models.

Academic Track
UBT E / N209 - Floor 3
15:05
5min
QGIS Data Versioning with Kart
Robert Coup

Maybe you've heard of Kart, the great new geodata versioning tool from the team at Koordinates? But did you know that Kart also has a QGIS plugin so you can do real data versioning without needing to leave QGIS?

In just 5 minutes we'll demonstrate how to import data into a new Kart repository, make and review some changes, merge a branch, and push everything to a remote server. All from QGIS!

We’re drowning in data, but the geospatial world lags badly behind in versioning tools compared to our software counterparts. Kart (https://kartproject.org) is solving this with a practical open tool for versioning datasets, enabling you to work more efficiently and collaborate better.

Kart allows you to quickly and easily manage history, branches, data schemas, and synchronisation for large & small datasets between different working copy formats, operating systems, and software ecosystems.

Modern version control unlocks efficient collaboration, both within teams and across organisations: everyone stays on the same page, you can review and trace changes easily, and ultimately you use your time more efficiently.

State of software
UBT C / N110 - Second Floor
15:10
15:10
5min
Adding static type hints to fiona
Stefan Brand, Fabian Schindler-Strauss

Static type hints according to PEP 484 (and its extensions) have been part of Python since version 3.5, which came out in 2015. Research from 2021 shows that 3 out of 4 Python developers already use optional type hinting at least sometimes in their projects. The time is ripe for static type hints to enter the FOSS4G Python world!

A GitHub issue on fiona's issue tracker proposing to add static type hints to the library recently gained some traction. Currently, the plan is to create type stubs for fiona 1.9 and possibly move the type hints into core fiona with the future 2.0 version.

This talk will give an overview of the current status of the effort to add type hints to fiona. Furthermore, it will briefly discuss the considerations and reasoning behind the design decisions taken so far. Contributions to the effort are very much welcome – just take part in the discussion on GitHub.
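For illustration, here is a simplified fragment of what a type stub for fiona.open might look like in a fiona/__init__.pyi file; this is a sketch, not the actual stubs under discussion:

# Illustrative stub fragment for fiona.open (simplified signature).
from typing import Optional
from fiona.collection import Collection

def open(
    fp: str,
    mode: str = "r",
    driver: Optional[str] = None,
    crs: Optional[str] = None,
    **kwargs: object,
) -> Collection: ...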

State of software
UBT C / N110 - Second Floor
15:10
5min
Mapping COVID-19 epidemic data using FOSS.
Paolo Zatelli

The recognition of spatial and temporal patterns in pandemic distribution plays a pivotal role in guiding policy approaches to its management, containment and elimination.
To provide information about the spatial and temporal patterns of a phenomenon, four steps are required: the collection of data, the organization and management of data, data representation as tables, charts and maps, and finally their analysis with geo-statistical tools (Trias-Llimós et al. 2020).
The collection of pandemic data poses a challenge: on the one hand, the highest spatial and temporal resolution is required to make the detection of patterns more effective (Carballada et al. 2021), allowing the application of containment tools as local as possible; on the other hand, it presents major privacy problems.
For these reasons, public COVID-19 datasets and maps are usually available at low spatial and temporal resolutions (Franch-Pardo et al. 2020), because averaging over time and space automatically provides a layer of anonymization by data aggregation.
In this research project, a database has been built, and is continuously updated, for the COVID-19 pandemic in the Trentino region, in the eastern Italian Alps, near the border between Italy and Austria.
The Province of Trento, with a population of about 542,000 inhabitants, represents the primary corridor for transporting people and products between Italy, Austria and Germany. The area also has intense tourist development, in particular for winter sports, with ski slopes, ski lifts and hotels.
These two features have played an important role in the diffusion of COVID-19 in the region, because the movement of people, both along the main communication routes and through the movement of tourists in the lateral valleys, has been the main driver of the virus spread. Therefore, the availability of a reliable database collecting COVID-19 cases is fundamental to map the pandemic evolution (Mollalo et al. 2020).
At the same time, the autonomous status of the Provincia Autonoma di Trento allows greater discretion in the organization of health data, their scientific use and their dissemination. In this context, the local government and the University of Trento, in particular its Geo-cartographic Center (GeCo), have signed an agreement for sharing COVID-19 data and their analysis (Gabellieri et al. 2021).
The resulting dataset collects the official numbers of infected, clinically recovered and deceased people, and their age groups. The dataset contains daily data at the municipal level, from the beginning of the COVID-19 epidemic in March 2020 through the end of 2022.
Data anonymization has been carried out by aggregating data on a weekly basis and by hiding data with small numbers, with the threshold set to 5.
The sole use of official data created by the public agencies tasked with managing public health, specifically the local Health Authority (Agenzia Provinciale per i Servizi Sanitari, APSS), ensures the validity of the data production process and strict observance of patient data confidentiality rules.
A database management system and a WebGIS have been created using Free and Open Source Software.
The back end of the system runs a database management system (DBMS), which manages the data, including the spatial components, and a web server, which provides access to the users.
The DBMS runs on MySQL, a relational database management system (RDBMS) available as Free Software under the GNU General Public License. MySQL provides the capability of storing and processing geographic data, following the OpenGIS data model. A custom procedure has been created to update the dataset, with the capability to import data from suitably formatted spreadsheets. A rollback option is provided in case of failure of the import procedure. Database management and update functionalities are available only to authenticated WebGIS administrators and are accessible through a dedicated web page.
The main goals in the design and development of the WebGIS have been ease of use and clarity of data presentation, both on large screens and on mobile devices. This approach maximizes user performance while exploring the data, by splitting the processing tasks and load between server and clients.
The system comprises a back end, running on a server, and a front end, running in the user’s web browser.
Cartographic data include background maps from the OpenStreetMap (OSM) project and a map of the municipality boundaries of the Province of Trento, which serves as the spatial basis for the dataset. Tabular data are linked to the respective geographic components using the unique municipal code field as key. OSM maps are available under the Open Database License, while the municipality boundaries have been provided by the Provincia Autonoma di Trento under a CC0 1.0 Universal (CC0 1.0) Public Domain Dedication license.
A virtual machine that houses both software and data powers the system on the server side.
The client side uses the open source Leaflet JavaScript libraries, available under the BSD 2-Clause License, with custom scripts which create the user interface and render geographic data into maps. This approach ensures flexibility and responsiveness on desktop and mobile devices.
Data exchange between the server and the client is performed using GeoJSON documents created on the fly according to the user’s request. In a similar way, the temporal variation graph is created by the JS library, which automatically reads the dates and times of the analyses, extracts the relevant data from the database and displays the graph.
The system automatically uses all accessible data, as long as they fit within the database structure. To protect the privacy of patients, WebGIS users cannot access the source data, even though maps and graphs can be downloaded as pictures.
The WebGIS is available at http://covid19mappa-trentino.geco.unitn.it/geosmart/index.php
Geo-statistical analysis aimed at the detection of spatial and temporal patterns is underway.

Academic Track
UBT E / N209 - Floor 3
15:10
5min
OpenDataCube Fast Deploy using Docker (Fast Cubing)
J. R. Cedeno Jimenez

Geospatial information from satellites is increasingly being used by decision-makers and scientists alike. However, there are two fundamental issues with this kind of data and the related handling technologies. First, data processing typically requires a long time and a-priori expert knowledge compared to traditional data sources. Second, integrating satellite data into processing pipelines can be expensive in terms of software and application development effort. The OpenDataCube (ODC) was created to help users solve these issues. Although ODC offers an alternative for use as a data management application, its deployment is typically challenging for inexperienced users. Therefore, the primary purpose of this work is to provide potential ODC users with a ready-to-use, portable instance of this software.
The software is produced and published as a Docker container. In comparison to the traditional installation and configuration of the ODC, the tool proposed here provides an environment where the ODC database is already set up, helping avoid the occasional conflicts that are common in SQL and Python installations. Even though other ODC implementations are available as Docker containers, the proposed solution has some advantages. Specifically, Python geospatial libraries are integrated in the container to support data manipulation. While available ODC instances are designed to process satellite images only (mainly Sentinel and Landsat data), this tool contains scripts to automatically adapt and ingest non-satellite data (e.g. raw ground-sensor network data, land cover/soil maps, etc.), also creating metadata files when they are missing. The proposed solution makes available processing pipelines to re-grid, georeference and import datasets into the ODC. Both the scripts and the pipelines can be used through Jupyter notebook interfaces, which also allow users to perform exploratory analyses on the ingested data.
The source code is available at https://github.com/gisgeolab/LCZ-ODC and is released under the MIT license. The software is being developed within the LCZ-ODC project (agreement n. 2022-30-HH.0), funded by the Italian Space Agency (ASI) and aimed at identifying Local Climate Zones within the Metropolitan City of Milan. Given the nature of datacube development, this tool promotes Open Geospatial Consortium (OGC) compliant data sharing. Ongoing work focuses on the development and integration of additional pre-processing scripts, with the aim of supporting the ingestion of additional types of data as well as providing new ready-to-use embedded processing functionalities.
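As a small example of what the ready-to-use instance enables once data are ingested, the sketch below loads a cube with the standard datacube API; the product name and extents are placeholders:

# Load an analysis-ready cube from a running ODC instance; product/extents are placeholders.
import datacube

dc = datacube.Datacube(app="fast-cubing-demo")
ds = dc.load(
    product="s2_l2a",                  # placeholder product name
    x=(9.0, 9.4), y=(45.3, 45.6),      # lon/lat extent around Milan
    time=("2022-06-01", "2022-06-30"),
    output_crs="EPSG:32632",
    resolution=(-10, 10),
)
print(ds)  # an xarray.Dataset with (time, y, x) dimensions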

Use cases & applications
UBT D / N113 - Second Floor
15:15
15:15
5min
3D City Model-based Aid Station Operation Visualization and Management using Cesium.js
Sanghee Shin

How do you run an aid station in case of a disaster? Scenarios are planned for each city, but there are limitations in applying them to actual aid station operations. In our presentation, we will present a case study on the development and simulation of an aid station management tool using digital twin technology and share various visualization techniques in a 3D city model environment.

The study site is Ulju-gun, a county of about 220,000 people in southern South Korea, with two nuclear power plants operating within a few kilometers of each other. Moving people to shelters to protect them in the event of a disaster such as a radioactive leak is an essential and crucial part of disaster management.

The aid station management tool presented here leverages ground-truth 3D modeling data of the shelter buildings that would be operational during a disaster to provide facility placement and editing capabilities. This allows relief tents to be automatically placed or edited based on the scenario. It also provides the ability to monitor, through a dashboard, the overall changes that may occur at the shelter, including real-time victim status; food, beverage, and medical support; supply status; shelter information; and disaster situation information.

The Cesium platform is used to serve the data, and the Three.js library is used to handle the viewing and placement of 3D model data in glTF format. Other open source components include React, Turf.js, Apache ECharts, and GeoServer.

We believe that the findings mentioned in this study provide a good example of how 3D city model-based shelter operations and visualization techniques can be applied to disaster preparedness systems to support effective decision-making and resource allocation.

Use cases & applications
UBT C / N110 - Second Floor
15:15
5min
Application of FOSS4G for improving the environmental impact assessment process - a noise case
Sanghee Shin

Because the environmental impact assessment (EIA) process is a combination of detailed fields that require a lot of expertise (e.g., noise, air pollution, odor, water pollution, ecological environment, living environment, etc.), despite its long history the process is still complex and slow, and it is not easy to break away from the document/drawing-centered work process. Since the environment inherently involves a lot of geographic/spatial context, assisting the process with a spatio-temporal system can be expected to bring very high efficiency compared to the current process.

To verify the feasibility of such a system, we adopted a FOSS4G-based approach and developed a pilot system in this study. Specifically, we used GeoServer and PostgreSQL/PostGIS for handling and serving data spatially, and Cesium for 3D geospatial visualization. We focused on the design and implementation of APIs to assemble the sub-processes of the EIA, as well as on the visualization and UI of the pilot system.

This system demonstrates, in an interactive way, how noise propagates during and after construction. We expect the system will visually increase non-expert stakeholders' understanding of noise propagation.

Through this presentation, we will discuss our findings from implementing an EIA process centered on noise, from the first step of applying for approval by the civil/construction operator to the last step of deriving the final evaluation opinion by the noise expert in charge, and provide clues to the future of digital EIA.

In the future, we believe that expansion to other EIA media and smooth integration with current legal and administrative tasks will make it a system that can be used in the field.

Use cases & applications
UBT D / N113 - Second Floor
15:15
5min
JAXA EARTH OBSERVATION DASHBOARD WITH COG AND WMS/WMTS
Shinichi Sobue

In May 2020, NASA, ESA, and JAXA initiated a collaborative effort aiming at the establishment of the COVID-19 Earth Observation Dashboard, and in March 2021 extended its scope to global environmental change. Noting the increasing use of the joint dashboard and continuous user requests for more information, NASA, ESA, and JAXA will continue their joint work on the global understanding of the changing environment and human activities through June 2024. This decision continues the collaboration on the analysis of the three agencies' datasets and the open sharing of data, indicators, analytical tools, and stories sustained by our scientific knowledge and expertise, to provide a precise, objective, and comprehensive view of our planet as an easy-to-use resource for the public, scientists, decision-makers, and people around the world. The dashboard provides an easy-to-use resource for all kinds of users, from scientists to decision-makers, including people not familiar with satellites. Based on accurate remote sensing observations, it showcases examples of global environmental change across 7 themes: Atmosphere, Oceans, Biomass, Cryosphere, Agriculture, Covid-19, and Economy. The dashboard offers a precise, objective, and factual view of our planet without any artifacts. You can explore countries and regions around the world to see how the indicators in specific locations have changed over time.

ESA, JAXA, and NASA will continue to enhance this dashboard as new data becomes available. This session explores the EO dashboard's architecture and functions, examples of thematic content through storytelling, and its utility for the broader EO and data science community.

To monitor COVID-19's environmental and economic impacts from space, through the provision of related indicators to the general public and decision-makers, JAXA has developed and implemented an earth observation (EO) dashboard jointly with ESA and NASA. In parallel with the jointly developed EO dashboard, to provide climate change and earth science information to worldwide users, JAXA also develops and operates a one-stop portal site named “Earth-graphy”, JAXA's website for all news, articles, and images related to JAXA's Earth observation activities. Recently, to enhance “Earth-graphy” and interconnect it with the EO Dashboard through an API, JAXA has developed the “JAXA Earth API” service, which provides a wide variety of JAXA Earth observation satellite image data in an easy-to-use format and promotes the efficient and effective use of satellite data.

For earth observation satellite data provision, JAXA develops and operates G-Portal, a portal system allowing users to search (by satellite/sensor/physical quantity) and download products acquired by JAXA's Earth observation satellites, including ALOS-2 ScanSAR data. On top of the G-Portal standard product dissemination system, JAXA provides value-added product services including Global Satellite Mapping of Precipitation (GSMaP), Himawari Monitor, JASMES, etc. For example, GSMaP provides a global hourly rain rate with a 0.1 x 0.1 degree resolution. JASMES provides information on the current status and seasonal/interannual variability of climate-forming physical quantities, including solar radiation reaching the earth's surface (photosynthetically available radiation), cloudiness, snow and sea ice cover, dryness of vegetation (water stress trend), soil moisture, wildfire, precipitation, land and sea surface, etc. (https://kuroshio.eorc.jaxa.jp/JASMES/index.html)

To provide easy access to JAXA's earth observation data and information, JAXA developed JAXA Earth-graphy with an API. The JAXA Earth API service consists of three main components: an API (a Python version, popular in fields such as data science, and a JavaScript version, under development, for browser applications), a database, and a web application. The first is “JAXA Earth Data Explorer”, a browser application that allows you to check the various satellite data stored in the database. The second, “JAXA Earth API for Python”, allows users to acquire and use satellite data for any area efficiently, effectively, and freely, without being aware of differences in satellite, sensor type, resolution, etc. The API also has an interface to the free GIS software QGIS, allowing for immediate acquisition and display of data. The final component is the “JAXA Earth Database”, which contains 74 types of data including elevation, surface temperature, vegetation index, precipitation, and land cover classification maps. The database stores data in Cloud Optimized GeoTIFF (COG) format, with metadata in STAC format, following “CEOS Analysis Ready Data for Land (CARD4L)”. In addition, JAXA has implemented a prototype OGC WMS/WMTS system as a frontend to JASMES and Earth-graphy to provide data and information to the trilateral EO dashboard since 2022.
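The cloud-native access pattern COG enables can be illustrated with a short sketch: reading a window of a remote GeoTIFF over HTTP without downloading the whole file. The URL is a placeholder, not an actual JAXA asset:

# Windowed read from a remote COG; only the requested byte ranges are fetched.
import rasterio
from rasterio.windows import Window

with rasterio.open("https://data.example.jaxa.jp/cog/sample.tif") as src:  # placeholder URL
    block = src.read(1, window=Window(0, 0, 512, 512))
    print(src.crs, block.shape)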

Through this WMS/WMTS and the JAXA Earth API, JAXA's EO dashboard is linked with the jointly developed EO dashboard, so worldwide users can access JAXA's data and information through it. Furthermore, JAXA has also started to develop a Japanese-language version of the UI for the jointly developed EO dashboard, with a growing set of products and information. This paper gives an overview of JAXA's EO dashboard system development, in particular the use of Japanese Advanced Land Observing Satellite-2 (ALOS-2) L-band SAR data for forest monitoring.

Academic Track
UBT E / N209 - Floor 3
15:20
15:20
5min
A Comparative Study of Methods for Drive Time Estimation on Big Geospatial Data: A Case Study in the U.S.
Xiaokang Fu, Devika Kakkar

Travel time estimation is used for daily travel planning and in many research fields such as geography, urban planning, transportation engineering, business management, operational research, economics, healthcare, and more (Hu et al., 2020). In public health and medical service accessibility studies, it is often critical to know the travel time between patient locations and health services, clinics, or hospitals (Weiss et al., 2020). In support of a study aiming to characterize the quantity and quality of pediatric hospital capacity in the U.S., we needed to calculate the driving time between U.S. ZIP code population centroids (n=35,352) and pediatric hospitals (n=928), a total of over 32 million calculations. There currently exist numerous methods for calculating travel time, including (1) web service APIs provided by big tech companies such as Google, Microsoft, and Esri; (2) Geographic Information System (GIS) desktop software such as ArcGIS, QGIS, PostGIS, etc.; and (3) open source packages based on programming languages, such as OpenStreetMap NetworkX (OSMnx) (Boeing, 2017) and the Open Source Routing Machine (OSRM) (Huber & Rust, 2016). Each of these methods has its own advantages and disadvantages, and the choice of which method to use depends on the specific requirements of the project. For our project, we needed a low-cost, accurate solution with the ability to efficiently perform millions of calculations. Currently, no comparative analysis study evaluates or quantifies the existing methods for performing travel time calculations at the national level, and there is no benchmark or guidance available for selecting the most appropriate method.

To address this gap in knowledge and choose the best drive time estimator for our project, we created a sample of 10,000 ZIP/hospital pairs covering 49 of the 50 U.S. states, with drive times ranging from a few minutes to over 4 hours. With this sample, we calculated the drive time using the Google Maps API, Bing Maps API, Esri Routing Web Service, ArcGIS Pro Desktop, OSRM, and OSMnx, and performed a comparative analysis of the results.

For the Google, Bing, and Esri web services we used the Python requests package to submit requests and parse the results. Within ArcGIS Pro, we manually used the Route functions to calculate routes on a road network provided by Esri and stored locally. For OSMnx we used Python to perform the street network analysis on input data from OpenStreetMap. OSRM is implemented in C++ and accessed through its web API; it provides a demo server that enables testing the routing without loading the road network data locally, and we used this server to calculate drive times for our 10,000 samples. For visualizations we used NetworkX and igraph to display the shortest path of the drive time routing results, along with graphs of our comparative analysis.
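For readers who want to reproduce the OSRM step, the request pattern looks roughly like the following (a minimal sketch against the public demo server; the coordinates and the helper name are illustrative, not taken from our pipeline):

```python
# Minimal sketch of a drive-time query against the OSRM demo server.
import requests

def osrm_drive_time(lon1, lat1, lon2, lat2,
                    host="http://router.project-osrm.org"):
    """Return the drive time in minutes between two points."""
    url = f"{host}/route/v1/driving/{lon1},{lat1};{lon2},{lat2}"
    resp = requests.get(url, params={"overview": "false"})
    resp.raise_for_status()
    return resp.json()["routes"][0]["duration"] / 60.0  # OSRM returns seconds

# Illustrative pair: a ZIP centroid and a hospital
print(osrm_drive_time(-77.04, 38.91, -76.61, 39.30))
```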

When comparing drive time estimations across these six technologies we found: (1) There is very little difference among Google, Bing, OSRM, the Esri web service, and ArcGIS Pro when the route drive time is less than roughly 50 minutes. (2) For routes greater than 50 minutes, the Google and Esri estimates were extremely close; OSRM produced travel times about 10% longer than the other methods, and Bing’s estimates were about 10% lower than Google’s and Esri’s. (3) Overall, OSMnx estimates travel times lower than any other method because it estimates the shortest distance using the maximum velocity. In general, the different methods employ different strategies for considering traffic conditions. Long-distance travel typically requires highways, and each method employs specific parameters to account for traffic and the resulting travel speed. Because of the complexity of modeling traffic conditions, it is difficult to say which method provides the most accurate and realistic driving times without collecting empirical data. Regarding cost, OSMnx and OSRM are both open source, while the other methods charge for API usage (Google, Esri, Bing) or desktop software (ArcGIS Pro). For processing efficiency, Google, Esri, and Bing were all efficient, each able to process the dataset in roughly one hour. We found that OSMnx was limited in the size of the road network it could handle, so we had to divide the ZIP/hospital pairs into subsets by state and calculate them separately, which was a laborious process. We found OSRM to be the most efficient, able to handle 10,000 requests in less than a minute. We ran OSRM in a high-performance cluster computing environment; setup took about one hour to download the OpenStreetMap data for the entire U.S. onto the cluster, after which we used Python requests to calculate the drive times and parse the results. The total processing time for the 32 million calculations was 12 minutes.

Using OSRM provided us with a low-cost, accurate, and efficient solution for calculating drive times between 32 million origin/destination pairs. Our study provides valuable guidance on calculating drive times in the United States, offering a benchmark comparison of six different methods. We encourage others to use the code produced for this project; all of it is in the process of being published on GitHub as open source. Our analysis covered only the U.S.; performing similar analyses in other countries would provide more insight into how useful the different methods are globally. In summary, this comparative study allowed us to produce drive times in the most efficient manner in order to support the larger objective of characterizing the quantity and quality of pediatric hospital capacity in the U.S.

Academic Track
UBT E / N209 - Floor 3
15:25
15:25
5min
Developing a FOSS4G-based Walkable Living Area Planning Support Module to Assist the Korean 15-minute City
Junyoung CHOI

The concept of 15-minute cities, which aims to provide residents with access to amenities and services within a 15-minute walk, has gained popularity in recent years [1]. In Korea, there have been discussions about supporting the planning of walkable neighborhoods based on Chrono-Urbanism, the idea underlying the 15-minute city concept, whereby residents can receive the services necessary for daily life close to where they live. Planning support based on Chrono-Urbanism measures the walkability of services for various age groups, based on the distance from small living-area units to centers of physical activity such as walking and bicycling, and places the necessary living infrastructure (urban amenities). A bottom-up planning approach that reflects the needs and living conditions of citizens, such as walking routines, can generate planning issues through iterative alternative generation and evaluation, supporting planning decisions by learning the surrounding environmental conditions with AI techniques.
However, implementing this concept requires tools built on free and open source technologies for spatial planning. Previous studies have developed open source tools using OpenStreetMap (OSM) or each city’s open data and used them effectively for urban planning [2,3]. In this study, we aim to develop a tool, built with free and open source software for geospatial (FOSS4G), that supports measuring walkability and distributing urban amenities, taking into account age groups as well as pedestrian, bicycle, and public transportation accessibility.
First, we design walkability measures, including pedestrian walkability, bicycle accessibility, and transit accessibility, based on home- or residence-based trips.
Measuring the walkability of a city requires considering pedestrian-friendly urban infrastructure elements such as sidewalks and crosswalks. We therefore design a walking network that provides information on the physical characteristics of the road network, together with a database containing the distribution of residents by gender and age. By analyzing data on the pedestrian network for different age groups, the level of walkability under different urban conditions can be determined. Similarly, the same data can be used to measure bicycle accessibility, taking into account bike lanes, bike parking facilities, and other factors. Access to public transportation can be measured using data from transportation agencies, including the frequency and routes of public transportation.

Second, we design a tool, based on Python geospatial analysis, to measure accessibility to urban amenities and to distribute the locations of urban amenities according to accessibility.
We develop a tool that integrates walkability, bicycle accessibility, and public transportation accessibility data to determine the best locations for urban amenities, using a network-based method that minimizes travel costs [4]. The tool is developed with QGIS and the Python programming language and is designed around parameters such as the resident and visiting population, distance from existing amenities, and the urban environment of the various living areas. A sketch of the underlying accessibility computation follows.
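This assumes OSMnx and an illustrative study area; the place name, walking speed, and coordinates below are assumptions for the example, not the authors’ configuration:

```python
# Sketch: which nodes of a walking network are reachable within 15 minutes?
import networkx as nx
import osmnx as ox

G = ox.graph_from_place("Suwon, South Korea", network_type="walk")

# Assume a walking speed of 4.5 km/h, i.e. 75 m per minute
for u, v, k, data in G.edges(keys=True, data=True):
    data["walk_time"] = data["length"] / 75.0  # minutes

origin = ox.distance.nearest_nodes(G, X=127.00, Y=37.27)  # a residence
reachable = nx.single_source_dijkstra_path_length(
    G, origin, cutoff=15, weight="walk_time")
print(f"{len(reachable)} network nodes reachable within 15 minutes")
```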
Third, the tool is used to evaluate local 15-minute cities.
The implemented tool is designed to be used and evaluated by officials, planners, and researchers working on 15-minute cities. The tool can be used to identify areas that need more urban amenities and to deploy existing amenities in ways that enhance walkability. The tool can also be used to determine the feasibility of locating new facilities such as parks, community centers, and other public spaces. In addition, the tool is designed to be customizable to meet the environmental needs of different cities.
The development of a FOSS4G-based urban amenity distribution tool based on walkability measures can provide the following benefits. First, it provides an age- and facility-related data-driven approach to the placement of urban amenities, ensuring that amenities are located in areas that are easily accessible to citizens. Second, it provides a spatial structure that can promote the use of sustainable transportation modes such as walking, biking, and public transit. Third, it can encourage more inclusive urban development by ensuring that amenities are distributed in a more equitable manner.
In conclusion, the development of a FOSS4G-based urban amenity distribution tool can play an important role in realizing the walkable-living-area concept, a Korean take on the 15-minute city. The tool can measure accessibility and distribute urban amenities based on walkability, bicycle accessibility, and public transportation accessibility, providing a way to create healthier, more equitable living areas. Using the tool to generate a range of alternatives will allow planners to learn from them what desirable walkable amenity configurations look like. For urban planners and practitioners, open source tools make it easy to take data-driven action and to learn and innovate from what others have done. Transparency in the planning process allows citizens to understand it, engage with planners, and become part of the process.

Academic Track
UBT E / N209 - Floor 3
15:30
15:30
30min
coffee-break
Outdoor Stage
15:30
30min
coffee-break
Lumbardhi
15:30
30min
coffee-break
Drini
15:30
30min
coffee-break
Mirusha
15:30
30min
coffee-break
UBT E / N209 - Floor 3
15:30
30min
coffee-break
UBT F / N212 - Floor 3
15:30
30min
coffee-break
UBT C / N109 - Second Floor
15:30
30min
coffee-break
UBT C / N110 - Second Floor
15:30
30min
coffee-break
UBT C / N111 - Second Floor
15:30
30min
coffee-break
UBT D / N112 - Second Floor
15:30
30min
coffee-break
UBT D / N113 - Second Floor
15:30
30min
coffee-break
UBT D / N115 - Second Floor
16:00
16:00
30min
BikeDNA: A tool for Bicycle Infrastructure Data & Network Assessment
Anastassia Vybornova

Access to high-quality data on existing bicycle infrastructure is a requirement for evidence-based bicycle network planning, which can support a green transition of human mobility. However, this requirement is rarely met: data from governmental agencies or crowdsourced projects like OpenStreetMap are often of unknown, heterogeneous, or low quality. Currently available tools for road network data quality assessment often fail to account for network topology, spatial heterogeneity, and bicycle-specific data characteristics.

To fill these gaps, we introduce BikeDNA, an open-source tool for reproducible quality assessment tailored to bicycle infrastructure data. BikeDNA performs either a standalone analysis of one dataset or a comparative analysis between OpenStreetMap and a reference dataset, including feature matching. Data quality metrics are considered both globally for the entire study area and locally at the grid-cell level, thus exposing spatial variation in data quality, with a focus on network structure and connectivity. Interactive maps and HTML/PDF reports are generated to facilitate the visual exploration and communication of results.
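BikeDNA’s own notebooks implement the metrics; purely as an illustration of the local, grid-based idea, a per-cell infrastructure length could be computed with GeoPandas along these lines (`edges` and `grid` are assumed GeoDataFrames in a projected CRS, and this is not BikeDNA code):

```python
# Sketch: bicycle-infrastructure length per grid cell with GeoPandas.
import geopandas as gpd

def length_per_cell(edges: gpd.GeoDataFrame, grid: gpd.GeoDataFrame):
    # Intersect line features with the grid cells, then sum lengths per cell
    clipped = gpd.overlay(edges, grid.reset_index(), how="intersection")
    clipped["length_m"] = clipped.geometry.length
    return (clipped.groupby("index")["length_m"].sum()
            .reindex(grid.index, fill_value=0.0))

# grid["infra_m"] = length_per_cell(edges, grid)  # then map the variation
```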

BikeDNA is based on open-source Python libraries and Jupyter notebooks, requires minimal programming knowledge, and supports data quality assessments for a wide range of applications - from urban planning to OpenStreetMap data improvement and transportation network research. In this talk we will introduce how to use BikeDNA to evaluate and improve local datasets on bicycle infrastructure, examine what BikeDNA can teach us about the current state of data for active mobility, and discuss the importance of local quality assessments in supporting increased uptake of open and crowd-sourced data.

Open Data
UBT C / N110 - Second Floor
16:00
30min
Creating Global Edge-Matched Subnational Boundaries
Maxym Malynowsky

FieldMaps.io is a personal initiative originally created to develop offline interactive reference maps for humanitarian actors. In a short time, however, it transitioned to helping develop the common operational datasets that form the foundation for humanitarian response planning. Over the past two years, enormous effort has gone into releasing a high-resolution composite dataset that can be updated daily from multiple sources. This talk will cover three aspects of the project.

Algorithm

Edge-matching resolves gaps and overlaps between hundreds of separate national data sources, requiring an algorithm that performs at global scale. The resulting methodology uses something akin to a Euclidean allocation raster applied to vector space, free of the compromises made by other approaches such as generalization and snapping. If you've ever been challenged by topology or data cleaning, you might find insights here for solving your own problems.

Pipeline

The edge-matching algorithm involves multiple complex and computationally intensive steps. Although GeoPandas and GDAL usually come to mind when building multi-step geoprocessing scripts, PostGIS ended up being the fastest and best-scaling tool for transforming gigabytes of vector data. I'll challenge your assumptions of how it can be used to create pipelines both on desktops and in the cloud, and make a case for why you should include it in your next project.
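The pattern is simple: each geoprocessing step becomes one set-based SQL statement run in sequence from a thin driver script. A hedged sketch of that pattern (table, column, and database names are placeholders, not the actual FieldMaps schema):

```python
# Sketch: PostGIS as the pipeline engine, driven from Python.
import psycopg2

STEPS = [
    # Repair invalid polygons before any overlay work
    "UPDATE adm_raw SET geom = ST_MakeValid(geom) WHERE NOT ST_IsValid(geom);",
    # Dissolve per admin unit into one clean multipolygon
    """CREATE TABLE adm_clean AS
       SELECT adm_id, ST_Multi(ST_Union(geom)) AS geom
       FROM adm_raw GROUP BY adm_id;""",
]

with psycopg2.connect("dbname=boundaries") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        for sql in STEPS:
            cur.execute(sql)
```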

Sources

A composite dataset is only as good as the foundations it builds upon, and great care was taken in selecting which sources were used in this project. For international boundaries, I'll go into detail about how I used only public domain sources to create an ISO 3166 compliant dataset. At the subnational level, I'll highlight two projects that each curate updated administrative boundaries: one by the United Nations, another by an academic institution.

Whether you're a remote sensing specialist in search of the best topologically valid boundaries to run zonal statistics with, a Python developer frustrated by your pipelines constantly running into memory limits, or just want to run this tool on your own boundaries, I hope you come away from this talk with a valuable concept you can apply to your own work.

Data: https://fieldmaps.io/data

Tool: https://github.com/fieldmaps/edge-extender

Use cases & applications
UBT C / N111 - Second Floor
16:00
30min
Digitizing and improving GIS for Global Health: from data collection to geospatial data management for a measles vaccination campaign in Cameroon
Céline Bassine

In Cameroon, the planning and monitoring of a measles vaccination campaign is implemented in Iaso, an open source software built on a Python-based backend combining Django and PostgreSQL/PostGIS; the frontend is React-based. Iaso provides a number of core functionalities to support ongoing geospatial data management: a mobile application, a web dashboard, a mapping function to merge various data sources, a user-friendly API for data science and scripting, and seamless bi-directional integration with DHIS2 (the standard health information system in low- and middle-income countries).

Iaso is articulated around three essential components: a central georegistry interface, a mobile data collection tool, and a micro-planning interface. These tools integrate seamlessly with each other to provide a powerful platform for managing, updating, merging, and validating multiple data sources and the structured information collected. Geospatial data, from GPS collection to the management of multiple reference lists of organization units (health, administrative, or school pyramids), are Iaso's foundation. These features allow interconnecting collected data with existing hierarchical features, coupled with the planning and collection of survey campaigns in the field through the mobile application and the web platform.

Iaso exposes a full API with various endpoints, allowing data scientists to integrate data analysis pipelines through external analytics platforms. As a geospatial data management platform, it versions every dataset and is designed to keep a full history of all changes to the data of interest, from the forms to the geometry or metadata of the organization units. It also features seamless integration with QGIS and other desktop applications through a templated GeoPackage format.

In this presentation, the tool is explained and described from the planning of the vaccination campaign in Cameroon to the near-real-time monitoring of the campaign (e.g. stock and team planning management).

Source : https://github.com/BLSQ/iaso

Use cases & applications
UBT C / N109 - Second Floor
16:00
30min
Enabling Knowledge Sharing By Managing Dependencies and Interoperability Between Interlinked Spatial Knowledge Graphs
Nathan McEachen

Knowledge sharing is increasingly being recognized as necessary to address societal, economic, environmental, and public health challenges. This often requires collaboration between federal, local, and tribal governments, along with the private sector, nonprofit organizations, and institutions of higher education. Achieving it requires a move away from data-centric architectures towards knowledge-sharing architectures, such as a Geographic Knowledge Infrastructure (GKI), to support spatial knowledge-based systems and artificial intelligence efforts. Location and time are the dimensions that bind information together. Data from multiple organizations need to be properly contextualized in both space and time to support geographically based planning, decision making, cooperation, and coordination.

The explosive uptake of ChatGPT seems to indicate that people will increasingly get information and generate content using chatbots. Examples of AI-driven chatbot technology providing misleading, harmful, biased, or inaccurate information due to a lack of access to information highlight the importance of making authoritative knowledge accessible, interoperable, and usable through machine-to-machine readable interfaces via GKIs to support AI efforts.

Spatial knowledge graphs (SKG) are a useful paradigm for facilitating knowledge sharing and collaboration in a machine-readable way. Collaboration involves building graphs with nodes and relationships from different entities that represent a source of truth, trusted geospatial information, and analytical resources to derive new and meaningful insights through knowledge inferencing by location or a network of related locations.

However, due to a lack of standardization for representing the same location and for managing dependencies between graphs, interoperability between independently developed SKGs that reference the same geographies is not automated. This results in a duplication of effort across a geospatial ecosystem to build custom transformations and pipelines to ensure references to geographic data from different sources are harmonized within a graph for the correct version and time period and that these references are properly maintained over time.

What is needed is a way to manage graph dependencies, or linking, between organizations in a more automated manner. References to geographic features (i.e., geo-objects) from graphs that are curated by external (and ideally authoritative) entities should come from formally published versions with the time period for which they are valid (i.e., the period of validity). As newer versions of SKGs are published for different periods of validity, updating dependencies between graphs should be controlled and automated.

It turns out that an approach for a similar kind of dependency management has been in mainstream use for decades in a related field. Software developers long ago abandoned the practice of manually managing code artifacts on filesystems and manually merging changes to code. Rather, they use a combination of namespacing for identity and reference management along with formally managing versioned releases in a code repository. Although there are nuanced differences between software code versioning and dependency management between SKGs, there are enough similarities to indicate distinct advantages to treating geospatial data as code for the purpose of managing graph dependencies to automate knowledge sharing.

We have been developing such an approach since 2018, with the core principles implemented in an open-source application called GeoPrism Registry (GPR), which uses spatial knowledge graphs to provide a single source of truth for managing geographic data over time across multiple organizations and information systems. It is used to host, manage, and regularly update hierarchies and geospatial data through time for geographic objects. GPR is being used by the Ministry of Health in Laos to manage interlinked dependencies between healthcare-related geo-objects and geopolitical entities. More recently it has been installed in Mozambique for use by the national statistics division (ADE) to meet their National Spatial Data Infrastructure (NSDI) objectives of facilitating cross-sectoral information collaboration using common geographies for the correct periods of time.

Currently, GPR is being considered by the US Federal Geographic Data Committee (FGDC) to help build a GKI for GeoPlatform.gov, which is mandated by the United States Geospatial Data Act of 2018 (GDA) to improve data sharing and cooperation between public and private entities to promote the public good in a number of sectors. US federal agencies are developing spatial knowledge graphs, but they are not interoperable through machine-to-machine readable interfaces with those from other agencies. We led a requirements, design, and scoping effort which revealed that a GKI architecture for GeoPlatform will, at a minimum, require the following machine-readable characteristics to enable knowledge interoperability using SKGs at scale (a schematic sketch follows the list).

Authoritative:
Copies of data always remain authoritative by preserving the identity of their source.

Temporal:
The period of validity should be specified in metadata as a moment in time (such as a date), a frequency (e.g., annually or quarterly), or an interval (year 2000 to 2005) in which data have not changed relative to when they were published.

Distributed:
Utilize the Data Mesh architecture pattern by giving organizations the ability to publish locally hosted graph assets. Other organizations can build fit-for-purpose graphs by pulling and merging only what is needed from authoritative sources.

Transitive:
Changes made to graphs should automatically propagate to the graphs that reference them, even if the dependency occurs via multiple layers of indirection (i.e., a dependency of a dependency).

Versioned:
Metadata should capture the published version.

Interoperable:
The semantic identity of data types, attributes, and relationships should be defined such that equivalency and identity can be established. This would include the use of namespaces, controlled vocabularies, taxonomies, ontologies, geo-object types, and graph edge types.
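Purely as a schematic illustration of these characteristics (this is not the GeoPrism Registry schema; every field name here is invented for the example), a versioned, namespaced reference to an externally curated geo-object might carry:

```python
# Hypothetical sketch of a machine-readable geo-object reference.
geo_object_ref = {
    "namespace": "gov.moh.laos.health",  # authoritative: identity of the source
    "type": "HealthFacility",
    "code": "HF-000123",
    "version": "2023-R2",                # versioned: a formally published release
    "validity": {"from": "2023-01-01",   # temporal: period of validity
                 "to": "2023-12-31"},
}
```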

In this paper we will present the approach for implementing these GKI requirements and the GeoPlatform.gov interoperability use cases using open-source software. This includes the Common Geo-Registry concept for managing the authoritative and interoperable requirements, the Data Mesh framework for making the solution distributed and transitive, and the spatial knowledge graph repository for managing temporal and versioned dependencies. We will also present the metamodel architecture used by GeoPrism Registry for managing graph dependencies, facilitating interoperability, and publishing, and how it is currently being used as a graph repository.

Academic Track
UBT E / N209 - Floor 3
16:00
30min
Enhancing Researchers' Data FAIR Experience for producing Policy-Relevant Insights through STAC Open Source Software and Specifications
Chiara Chiarelli

The Joint Research Centre (JRC) of the European Commission is committed to providing independent, evidence-based science and knowledge that supports EU policies. To facilitate this, the JRC has developed the Big Data Analytics Platform (BDAP), a data platform that allows data scientists to easily access, analyze, view, and reuse scientific data to generate and communicate evidence-based insights and foresight.

BDAP hosts spatiotemporal data at petabyte scale from various domains, including elevation, meteorological, administrative, and satellite Earth Observation data. Its architecture relies almost entirely on free and open source software and tools. The platform offers a cluster environment with both CPU and GPU machines, allowing for large-scale data processing. Additionally, users can visualize and interactively analyze their data through Jupyter Notebooks and Voilà dashboards.

Recently, BDAP implemented the SpatioTemporal Asset Catalog (STAC) specification to describe its data. The catalog hosts different types of data, which share the basic STAC fields; thanks to STAC's modularity, each data type can be described with its own STAC extensions.

BDAP reuses and benefits from various STAC free and open source software and tools. In particular, from the STAC ecosystem it deploys STAC Browser for displaying and searching data, provides STAC-compliant APIs through STAC FastAPI backed by an Elasticsearch instance, and uses PySTAC as a Python library for working with STAC metadata. This implementation helps BDAP in its FAIRification process, improving users' search, access, and reuse of data.
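As an illustration of what a STAC-compliant API buys users, a search with pystac-client looks roughly like this (the endpoint URL and collection name are placeholders, not the actual BDAP catalog):

```python
# Sketch: searching a STAC API with pystac-client.
from pystac_client import Client

catalog = Client.open("https://example.europa.eu/stac")  # placeholder URL
search = catalog.search(
    collections=["sentinel-2-l2a"],            # placeholder collection id
    bbox=[12.3, 41.8, 12.6, 42.0],
    datetime="2023-06-01/2023-06-30",
)
for item in search.items():
    print(item.id, list(item.assets))
```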

In this presentation, the design and implementation of this STAC-compliant set of software tools will be described. Some real use cases will be presented, including an example of the creation of analysis-ready data cubes from Sentinel-2 Earth Observation satellite imagery.

Use cases & applications
Drini
16:00
30min
Get the most out of STAC Browser
Matthias Mohr

STAC Browser is a full-fledged web interface for browsing and searching static STAC catalogs and STAC APIs. It has been rewritten from scratch with a lot of new functionality. This talk will introduce STAC Browser, showcase new functionality, and uncover some unexpected gems such as the broad range of customization possibilities. Lastly, the presentation will guide you through a set of best practices for your static STAC catalog or STAC API so that you get the most out of STAC Browser with regard to functionality and user experience.

State of software
Lumbardhi
16:00
30min
Land of 60000 zoning plans - QGIS to the rescue!
Ville Hamunen

This project was a pilot of a larger upcoming project, where the aim is to produce a national interoperable data model for every valid zoning and city plan in Finland. The project is part of the development of the Finnish Environment Institute’s Built Environment Information System and the harmonization of national land use planning information.

The aim of this presentation is to present the overall workflow of the project and the transition from proprietary data towards an open source national database with common spatial and descriptive information. Currently, the data used in municipal decision-making processes in Finland consists of proprietary data that lacks spatial information or is outdated.

The transformation of the zoning and city plans from two different data providers created a lot of topological errors and unmatched geometries. QGIS was a key tool for fixing these errors: its digitizing and geometry repair tools were used to solve these issues.

This pilot project was implemented in Southern Savonia, Finland, where zoning covers approximately 80% of the total land area. The focus of the project was to investigate the compatibility of the base data and how to automate the processes of merging, fixing, updating, and comparing the data. The data was in vector format and was provided by the National Land Survey of Finland and the municipalities of Southern Savonia.

The automation processes were built with a Python script, and quality control was done by manual digitization. The official documentation of the zoning and city plans was included in the borderline vector data. The final product was uploaded to a GitHub repository. The project also produced a timeline for the upcoming nationwide project and an estimate of the split between automated and manual workload in similar projects. A sketch of the kind of automated geometry repair involved is shown below.
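This assumes GeoPandas and Shapely; the file names are placeholders, and it is not the project's actual script:

```python
# Sketch: batch-repair invalid zoning-plan geometries.
import geopandas as gpd
from shapely.validation import make_valid

plans = gpd.read_file("zoning_plans_raw.gpkg")           # placeholder input
invalid = ~plans.geometry.is_valid
plans.loc[invalid, "geometry"] = plans.loc[invalid, "geometry"].apply(make_valid)
plans.to_file("zoning_plans_clean.gpkg", driver="GPKG")  # placeholder output
print(f"repaired {invalid.sum()} invalid geometries")
```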

The methods and the results of the project could be duplicated in other countries or lead the way towards more open national or regional land use planning.

Transition to FOSS4G
UBT F / N212 - Floor 3
16:00
30min
Mapillary: The path to 2 billion images
Christopher Beddow, Edoardo Neerhut

Mapillary is an open platform for street-level imagery and map data that began in 2013. Since then, around 1.8 billion images have been contributed from around the world: from horseback in Kyrgyzstan, boats in the canals of Amsterdam, and bicycles on the streets and trails of Sydney. As Mapillary approaches 2 billion images, we’d like to summarize the latest features, acknowledge some of the amazing contributions, and hint at some of the updates that are coming.

Some of the things that we have been working on include:

Desktop Uploader improvements including support for videos and popular cameras.
Improvements to Mapillary Tools, command line scripts for working with and uploading geotagged imagery and video.
Mobile app updates including multi-tasking, redesigns, multi-language support, and upload improvements.
Camera Grant programs in the US and Europe, providing 360º cameras for people interested in mapping pedestrian infrastructure.
Integrations with the Rapid Editor, an AI-powered OpenStreetMap editor, which we will demo in more detail at a workshop.
Updated Help Pages to make capturing, uploading, and using street-level imagery far easier.

After walking through the latest Mapillary improvements, we will look at case studies of organizations contributing and using imagery. We’ll zoom in on an NGO, a government agency, and a commercial entity, each of which is using Mapillary in different ways.

We’ll finish our talk with an exploration of upcoming Mapillary features and projects. We encourage questions and suggestions in the Q&A and hope for a productive conversation at the end as we walk together towards 2 billion images.

State of software
Mirusha
16:00
30min
Overview of draft OGC Styles & Symbology "SymCore" 2.0 models & encodings
Jerome St-Louis

An overview of the Core Models and Encodings for Styling and Symbology - Part 1: Core ("SymCore") 2.0 draft candidate Standard.

In comparison to the current OGC Symbology Conceptual Model: Core Part ("SymCore") version 1.0, the new draft candidate Standard aims to better reflect its classification as an OGC Implementation Standard by including the requirements classes needed to enable the implementation of interoperable encodings, renderers (e.g., OGC API - Maps / OGC API - Tiles) and systems parsing and/or generating style definitions (e.g., OGC API - Styles, visual style editors, style transcoders).

It does so by featuring:

  • a modular logical and conceptual model for styling capabilities,
  • a minimal Core requirements class including clear extension mechanisms, through the definition of abstract Selectors, Symbolizers, and Expressions,
  • a basic Vector Styling requirements class,
  • a basic Coverage Styling requirements class,
  • requirements classes providing additional styling functionality,
  • a JSON encoding of the conceptual and logical model facilitating machine readability,
  • a CSS-inspired encoding of the conceptual and logical model facilitating hand-editing.

The latest version of the draft is available in HTML (https://opengeospatial.github.io/ogcna-auto-review/18-067r4.html) or PDF (https://opengeospatial.github.io/ogcna-auto-review/18-067r4.pdf).

The official GitHub repository is at: https://github.com/opengeospatial/styles-and-symbology

Open Standard
UBT D / N112 - Second Floor
16:00
30min
State of pgRouting
Vicky Vergara

pgRouting is evolving rapidly and many changes have been taking place. Let's catch up.

The focus of this talk will be on the topology functions that were created in 2013. It's been 10 years, and it's their time to go:
* Why "I" don't want to use them any more
* The new specialized functionality that has been created to substitute for the work the topology functions were doing in a very rustic way
* A quick guide on how not to use the soon-to-be-deprecated topology functions

State of software
Outdoor Stage
16:30
16:30
30min
An Investigation into Updating the Building Stock Data for Municipalities in Baden-Württemberg, Germany
Franz-Josef Behr

The submitted paper examines how up to date the building stock data of municipalities in Baden-Württemberg, part of the Federal Republic of Germany, are. Three municipalities were selected and included in the study according to the spatial type concept of the Federal Office for Building and Regional Planning (BBSR 2023): a rural town (2,000-5,000 inhabitants), a small town (5,000-20,000 inhabitants), and a medium-sized town (20,000-100,000 inhabitants). The analysis concept is explained, and the quantitative and qualitative results of the project, which is currently in its final phase, are presented. The aim is to use these results to derive and communicate recommendations for action for the municipalities, but also for the public surveying administration, in order to contribute to timely and effective action by municipal decision-makers and citizens through faster provision of geospatial data.

Open Data
UBT C / N111 - Second Floor
16:30
30min
An overview of Cloud-Native Geospatial
Matthew Hanson

“Cloud-Native Geospatial” is a new paradigm for performing efficient data access and compute in the cloud in an interoperable way, in order to achieve scalable and repeatable analysis of geospatial data. The last few years have seen major developments in open standards and open software that make this possible, supporting fully end-to-end interoperable workflows on remote sensing data, from data discovery to the publishing of derived products.

This talk will provide an overview of what Cloud-Native Geospatial is and why it is important for building scalable architectures. It will cover the current state of the SpatioTemporal Asset Catalog (STAC) specifications and the landscape of cloud-optimized file formats for raster, vector, and point-cloud data (COG, GeoZarr, GeoParquet, COPC).

State of software
Lumbardhi
16:30
30min
Community Activation for the Kahramanmaraş Earthquake Response via OpenStreetMap
Can Unen

On February 6, 2023, a sequence of major earthquakes with magnitudes 7.8 and 7.5 struck southern Türkiye and northern Syria, causing massive damage and a very high number of casualties in both countries. They were followed by hundreds of aftershocks within a month, and triggered other major earthquakes, such as the magnitude 6.4 earthquake that struck Antakya on February 20. The Humanitarian OpenStreetMap Team (HOT), with Yer Çizenler (YÇ), HOT’s local partner within the Turkish OSM community, activated to map the missing road and building base data with the help of regional and global OpenStreetMap communities.

Together, more than 7,000 contributors from these communities added more than 1.4 million buildings and 70,000 km of roads to OpenStreetMap for the use of field volunteers and organizations worldwide.

In this talk, the audience will learn about the coordinated efforts within this mapping activation and the impact of the data created, with example use cases from the response activities. The audience will also learn about the various open data sources that were used to enhance the existing OSM data, and the licensing and compatibility considerations during the mapping process. The presenters will also describe the validation, data quality assurance, and monitoring methods, approaches, and tools used to ensure the OSM data is reliable, current, and able to meet community standards in both the short and long term.

Use cases & applications
UBT C / N109 - Second Floor
16:30
30min
Integrated modeling with k.LAB... and QGIS
Andrea Antonello

The Knowledge Laboratory, in short k.LAB, is a software stack that embraces the FAIR principles: findable, accessible, interoperable, and reusable. Its objective is to support linked knowledge across the borders of the domains of single modelers and scientists. k.LAB’s fascinating novelty is the use of semantics to create a natural language for describing the models and the qualities to be observed.

Modelers can develop their models and publish them to the network. Publishing makes them findable and accessible within the network. Since everything in the network is observable, when running a model, k.LAB looks for the best knowledge unit able to resolve the particular request. Interoperability is built in, and reusability is a natural consequence.

The k.LAB software stack is free and open source and relies on various projects of the OSGeo community, such as GeoServer, OpenLayers, and the HortonMachine. It has been in development for almost two decades and got a particular visibility boost in 2021, when the Statistics Division of the UN Department of Economic and Social Affairs and the UN Environment Programme, in collaboration with the Artificial Intelligence for Environment & Sustainability group at the Basque Centre for Climate Change, launched the artificial-intelligence-powered application for rapid natural capital accounting: the ARIES for SEEA Explorer.

Lately, a Python client that allows interaction with k.LAB has been released. This opens up new ways to observe the world from within common GIS tools such as QGIS.

An overview of the state of the art of the project will be given.

State of software
UBT F / N212 - Floor 3
16:30
30min
Pedaling towards Progress: An Analysis of Capital Bikeshare Trips in Washington D.C. using Open-Source Geospatial Tools
Max Lindsay

In this presentation, we showcase a unique approach to analyzing Capital Bikeshare trips in Washington D.C. using Open-Source Geospatial (FOSS4G) tools and technologies. Our project involved loading trip data into a PostGIS database, using the Valhalla routing engine and OpenStreetMap data to find the optimal route between each pair of stations (a request sketch follows), and then constructing a topogeometry table to represent these routes. Using this topogeometry table, we are able to estimate the number of Capital Bikeshare trips that occur on each road in Washington D.C.
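The sketch below is hedged: the Valhalla host and the station coordinates are placeholders, not our production setup.

```python
# Sketch: request a bicycle route between two stations from a Valhalla server.
import requests

payload = {
    "locations": [
        {"lat": 38.8977, "lon": -77.0365},  # origin station (illustrative)
        {"lat": 38.8893, "lon": -77.0502},  # destination station (illustrative)
    ],
    "costing": "bicycle",
}
resp = requests.post("http://localhost:8002/route", json=payload)
resp.raise_for_status()
summary = resp.json()["trip"]["summary"]
print(summary["length"], "km in", summary["time"], "seconds")
```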

The use of FOSS4G tools and technologies allowed us to perform this analysis in a cost-effective and efficient manner, while also providing high-quality results. The results of our analysis have important implications for urban planning and mobility research, as they can be used to understand the patterns and impacts of bike-share usage in cities.

Our presentation will provide an overview of the methodology used in our project, as well as a discussion of the results and their implications. We will also share our experiences using FOSS4G tools and technologies and provide insights on how these tools can be used in similar projects. This presentation is of interest to geospatial professionals, urban planners, and anyone interested in using FOSS4G tools for data analysis and mobility research.

Use cases & applications
UBT C / N110 - Second Floor
16:30
30min
Processing and publishing Maritime AIS data with GeoServer and Databricks in Azure
Andrea Aime

The amount of data we have to process and publish keeps growing every day; fortunately, the infrastructure, technologies, and methodologies to handle such streams of data keep improving and maturing. GeoServer is an open source web service for publishing your geospatial data using industry standards for vector, raster, and mapping. It powers a number of open source projects like GeoNode and geOrchestra and is widely used throughout the world by organizations to manage and disseminate data at scale. We integrated GeoServer with well-known big data technologies like Kafka and Databricks, and deployed the systems in the Azure cloud, to handle use cases that required near-real-time display of the latest received AIS data on a map as well as background batch processing of historical maritime AIS data.

This presentation will describe the architecture put in place and the challenges that GeoSolutions had to overcome to publish big data through GeoServer OGC services (WMS, WFS, and WPS), finding the correct balance that maximized both ingestion and visualization performance. We had to integrate with a streaming processing platform that took care of most of the processing and stored the data in an Azure data lake, allowing GeoServer to efficiently query the latest available features while respecting all the authorization policies put in place. A few custom GeoServer extensions were implemented to handle the authorization complexity, the advanced styling needs, and the big data integration needs.

Use cases & applications
Drini
16:30
30min
SOZip: using large compressed (geospatial) files directly from a ZIP archive!
Even Rouault

SOZip (Seek-Optimized ZIP) is a new open specification on top of the ZIP archive format to compress one or several files, organized and annotated such that a SOZip-aware reader can perform very fast random access (seek) within a compressed file.
SOZip makes it possible to access large compressed files directly from a .zip file without prior decompression. It is not a new file format, but a profile of the existing ZIP format, done in a fully backward compatible way. ZIP readers that are not SOZip-aware can read a SOZip-enabled file normally and ignore the extended features that support efficient seek capability.
We will present how SOZip works under the hood and discuss SOZip implementations, in particular in GDAL, which makes it possible for its downstream users, notably QGIS, to read large compressed files in GeoPackage, FlatGeobuf, or shapefile formats seamlessly and efficiently.
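On the reading side, GDAL's /vsizip/ virtual file system already lets Python users open an archive member directly; with a SOZip-enabled archive, random access inside the compressed member becomes fast as well. A small sketch (file names are placeholders):

```python
# Sketch: open a dataset inside a ZIP archive without unpacking it.
from osgeo import ogr

ds = ogr.Open("/vsizip/big_data.zip/parcels.shp")  # placeholder names
layer = ds.GetLayer()
print(layer.GetFeatureCount())
```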

Open Standard
UBT D / N112 - Second Floor
16:30
30min
Upgrade your Postgres and PostGIS will thank you
Felix Kunde

Every year there's a new Postgres major release that improves performance in certain areas and can provide new hooks for extensions like PostGIS to take advantage of. If not planned well, upgrading your production databases can become a pain. Sooner than you think you'll be running on EOL (end-of-life) versions because the upgrade has been postponed too many times. Don't!

Did you know Postgres upgrades can be largely automated these days, with downtimes of only a few seconds? This talk will show you how, and will also present some essential features from recent Postgres and PostGIS versions to get you excited for the next upgrade.

State of software
Outdoor Stage
17:00
17:00
60min
Open-Source Solutions: Expanding our Humanity with Data Stories
Bonny P McClain

Geospatial analysis invites an audience to interact with complex interactions and dynamic shifts in ecosystem balance. Location intelligence collected as data layers mirrors a symphony or chapters in a book. We will explore the potential risks of vulnerable cities by exploring the environment, economics, built infrastructure, and how they intersect. We build the story or music over time while exploring the tensions we create. Let’s examine the edges of eco-geomorphic frameworks and listen for a narrative.

Open Data
Outdoor Stage
09:00
09:00
30min
Creating a Peaceful and Profitable Society: FOSS4G and New Employment Opportunities
Rei Kasai

We will explore how Re:Earth, as a digital public good, could support a "Peaceful Profitable Society" and create new employment opportunities.
Re:Earth is an open source platform built around a geographic information system that digitally represents geospace and enables analysis and visualization of cities and regions. The use of such digital public goods offers opportunities, especially for the socially vulnerable, to develop new ways of working and improve their own lives.

In particular, we will explore the potential for vulnerable populations, such as refugees and single mothers, to use Re:Earth to pave the way for self-empowerment. We will also delve into how digital public goods such as Re:Earth can impact society as a whole, and especially how they can be a tool for the vulnerable to improve their own lives and contribute to the realization of a "society where peace is profitable".

This speech will provide insight into how such digital public goods can impact individual lives and society as a whole, and how they can help shape a "society where peace is profitable".

Community & Foundation
Outdoor Stage
09:30
09:30
30min
Geochicas: From SOTM to FOSS4G, a Geospatial journey
Miriam Gonzalez

Geochicas is an initiative born at State of the Map São Paulo and adopted by FOSS4G communities over the past years. We would like to share with you what has happened in the last couple of years and what we foresee in the future of the initiative, and how Geochicas is part of a larger ecosystem of sibling organizations working towards a more balanced presence of women and minority groups in the geospatial communities.

Community & Foundation
Outdoor Stage
10:00
10:00
30min
coffee-break
Outdoor Stage
10:00
30min
coffee-break
Lumbardhi
10:00
30min
coffee-break
Drini
10:00
30min
coffee-break
Mirusha
10:00
30min
coffee-break
UBT E / N209 - Floor 3
10:00
30min
coffee-break
UBT F / N212 - Floor 3
10:00
30min
coffee-break
UBT C / N109 - Second Floor
10:00
30min
coffee-break
UBT C / N110 - Second Floor
10:00
30min
coffee-break
UBT C / N111 - Second Floor
10:00
30min
coffee-break
UBT D / N112 - Second Floor
10:00
30min
coffee-break
UBT D / N113 - Second Floor
10:00
30min
coffee-break
UBT D / N115 - Second Floor
10:30
10:30
30min
Geospatial Big Data Analytics for Sustainable Smart Cities
Muhammed Oguzhan Mete

Growing urbanization causes environmental problems all over the world, such as vast amounts of carbon emissions and pollution.
Smart Infrastructure and Smart Environment are two significant components of the smart city paradigm that can create opportunities for ensuring energy conservation, preventing ecological degradation, and using renewable energy sources. United Nations Sustainable Development Goals (SDGs) such as “Sustainable Cities and Communities”, “Affordable and Clean Energy”, “Industry, Innovation and Infrastructure”, and “Climate Action” can be achieved by implementing the smart city concept efficiently. Since a great portion of data contains location information, geospatial intelligence is a key technology for sustainable smart cities. We need a holistic framework for the smart governance of cities that utilizes key technological drivers such as big data, Geographic Information Systems (GIS), cloud computing, and the Internet of Things (IoT). Geospatial big data applications offer predictive data science tools, such as grid and parallel computing, for the efficient and fast processing needed to build a sustainable smart city ecosystem.

Handling geospatial big data for sustainable smart cities is crucial, since smart city services rely heavily on location-based data. Effective management of big data in the storage, visualization, analytics, and analysis stages can foster the green building, green energy, and net zero targets of countries. The geospatial data science ecosystem has many powerful open source software tools. According to the vision of PANGEO, a community of scientists and software developers working on big data software tools and customized environments, parallel computing systems have the ability to scale up analysis on geospatial big data platforms, which is key for ocean, atmosphere, land, and climate applications. Those systems allow users to deploy clusters of compute nodes for big data processing. In the application phase of this study, the Pandas, GeoPandas, Dask, Dask-GeoPandas, and Apache Sedona libraries are used in a Python Jupyter Notebook environment. In this context, we carried out a performance comparison of two cluster computing systems, Dask-GeoPandas and Apache Sedona, and also investigated the performance of the novel GeoParquet geospatial data format against other well-known formats.

There is a common vision, policy recommendations, and industry-wide actions to achieve the 2050 net zero carbon emission scenario in the United Kingdom. The energy efficiency of the English housing stock has continued to increase over the last decade; however, systematic action plans at parcel scale are needed to deliver on targets. In this study, open data sources are used, such as the Energy Performance Certificates (EPC) data of England and Wales, Ordnance Survey (OS) Open Unique Property Reference Numbers (UPRN), and OS buildings (OS Open Map), to analyse the energy efficiency levels of domestic buildings. First, EPC data is downloaded from the Department for Levelling Up, Housing & Communities data service in Comma Separated Value (CSV) format, UPRN data from the OS Open Hub in GeoPackage (GPKG) format, and building data from OS in GPKG format. After saving each file in GeoParquet format, the EPC data and UPRN point vector data are joined based on the unique UPRN id. Then each UPRN record's attributes are appended to the corresponding building polygon through a spatial join operation. Read, write, and spatial join operations are conducted on both Dask-GeoPandas and Apache Sedona in order to compare the performance of the two big spatial data frameworks; the Dask-GeoPandas variant is sketched below.
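A hedged sketch of that join step with Dask-GeoPandas (the file names are placeholders for the inputs described above, not the study's actual paths):

```python
# Sketch: GeoParquet read, spatial join, GeoParquet write with Dask-GeoPandas.
import dask_geopandas

uprn = dask_geopandas.read_parquet("uprn_epc.parquet")   # points + EPC attributes
bldg = dask_geopandas.read_parquet("buildings.parquet")  # building polygons

# Attach each UPRN point to the building polygon that contains it
joined = dask_geopandas.sjoin(bldg, uprn, predicate="contains")
joined.to_parquet("buildings_epc.parquet")
```

The equivalent steps run on Apache Sedona as spatial SQL over Spark DataFrames, which is what the benchmark compares.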

Cluster computing systems enable much faster data handling compared with traditional approaches. To compare the performance of the frameworks, local computing hardware (11th Gen Intel Core i7-11800H 2.30 GHz CPU, 64 GB 3200 MHz DDR4 RAM) was used. According to the results, Dask-GeoPandas and Apache Sedona outperformed GeoPandas in read, write, and spatial join operations, with Apache Sedona performing best in the tests. On the other hand, the GeoParquet file format was much faster and smaller in size than the GPKG format. After the spatial join operation, energy performance attributes are included in the building data. In order to reveal regional energy efficiency patterns, SQL statements are used to filter the data according to energy rates. The query result is visualized using Datashader, which provides highly optimized rendering with distributed systems.

This study answers the question “Can geospatial big data analytics tools foster sustainable smart cities?”. The volume, value, variety, velocity, and veracity of big data require different approaches from traditional data handling procedures in order to reveal patterns, trends, and relationships. Using spatial cluster computing systems for large-scale data enables effective urban management in the context of smart cities. On the other hand, energy policies and action plans such as decarbonization and net zero targets can be achieved by sustainable smart cities supported by geospatial big data instruments. The study aims to reveal the potential of big data analytics in the establishment of smart infrastructure and smart buildings using large-scale geospatial datasets on state-of-the-art cluster computing systems. In future studies, larger spatial datasets like Planet OSM can be used on cloud-native platforms to test the capabilities of geospatial big data tools.

Academic Track
UBT E / N209 - Floor 3
10:30
30min
Gleo Feature Frenzy
ivansanchez

Gleo is a nascent JavaScript WebGL mapping library. It aims to find a niche alongside Leaflet, OpenLayers, MapLibre, and Deck.gl.

This library was presented at FOSS4G 2022, with an emphasis on its architectural foundations: geometry/reprojection/antimeridian handling, and object-oriented abstractions for WebGL data structures.

This session provides a tour of the features developed during the last year. These include, among others:
- Work done as part of the OSGeo-OGC codesprints (OGC API clients, experimental symbols)
- Animated symbols (render loop)
- Symbol class decorators (ability to add more functionality to a cartographic symbol class during runtime)
- Flexibility of scalar field manipulation (symbols that render as a magnitude instead of a colour; the field then renders as e.g. a heatmap)

These functionalities take a fresh approach to cartographic rendering and will provide a glimpse of the potential of object-oriented WebGL manipulation.

State of software
UBT D / N112 - Second Floor
10:30
30min
Introducing Terra Draw: A JavaScript Library To Draw On Any Web Map
James Milner

If you have ever had the experience of having to write code to draw on web maps, you'll know how painful the process can be - especially when situations get more complex.

Terra Draw is an open source JavaScript library that provides a new way to add drawing functionality to a host of web mapping libraries, including Leaflet, OpenLayers, Google Maps, MapboxGL JS and MapLibreGL JS.

The library provides a selection of built-in modes that 'just work' across different mapping libraries. These include elementary drawing tools like point, line, and polygon, as well as support for more advanced concepts like snapping, rotation, and scaling.

Terra Draw is also designed to be extendable, so you can write your own custom modes and adapters (thin wrappers for each mapping library). The architecture of the library means that any mode can work with any adapter and vice versa, creating a strong multiplier effect as new modes and adapters are written. This decoupling has the added benefit that drawing libraries can be swapped out without breaking your app!

The talk will examine the history of the library and how to get started, and will also offer an opportunity to hear more about the future of Terra Draw.

State of software
Drini
10:30
30min
Making of a community - beyond the recipe
Vasile Crăciunescu, Codrina Ilie

In our allocated 15 minutes, we would like to take you on a trip following the winding roads of building a community, the Romanian geospatial community: geo-spatial.org. We want to share our story, beyond our geodata and knowledge portal, down to the very core of the values and principles that have guided us through difficult times and made the challenges we overcame shine even brighter.
In our more than a decade of existence, we've organised over 25 national FOSS workshops, a regional FOSS4G in 2013, and a global FOSS4G in 2019; we've initiated collaborative geo-related projects and managed to infuse the geospatial component into various non-spatial organisations, such as those in education or investigative journalism.

Community & Foundation
Lumbardhi
10:30
30min
Many Data Sources, One Web Map: Data cleaning and optimization with FOSS
Will Field, Valerie Bauer

The Long Island Zoning Atlas is an interactive web map that displays zoning data, public services, and demographic data for municipalities all across Long Island, excluding New York City. The app focuses on statistics that help affordable housing advocates plan housing projects. This year we rebuilt the Long Island Zoning Atlas using our new FOSS stack. The project presented a problem very common to GIS projects: transforming data from many different sources, in this case towns. We were given the data in many different formats and needed to transform it all into clean, usable data, organized to our needs, that renders quickly and efficiently on the web.

Use cases & applications
UBT D / N115 - Second Floor
10:30
30min
Open source geospatial software in support of the common European Green Deal data space
Marco Minghini

Published in 2020, the European strategy for data sets the vision for Europe to become a leader in a data-driven society by establishing so-called common European data spaces in all strategic societal sectors. Data spaces are envisioned as sovereign, trustworthy and interoperable data sharing environments where data can fairly flow within and across actors, in full respect of European Union (EU) values to the benefit of European economy and society. The development of data spaces is accompanied by a set of horizontal legislative measures, including, among others, an Implementing Act on high-value datasets under the Open Data Directive that lays down a list of datasets (many of which being geospatial) that EU Member States public sector organisations are required to make available for free, under open access licenses, in machine-readable formats and via Application Programming Interfaces (APIs).
The talk will describe the activities around open source geospatial software and open geospatial data that the European Commission's Joint Research Centre (JRC) has performed to support the development of the common European Green Deal data space, which is focused on environmental data sharing and instrumental in addressing climate change and environmental challenges, in line with the top priority of Von der Leyen's Commission 2019-2024.
A key enabler to bring public data into this data space is the infrastructure set up for the EU INSPIRE Directive, which is technically coordinated, maintained and operated by the JRC. The INSPIRE Directive itself, together with the Directive on public access to environmental information, is currently the subject of an impact assessment that might lead to a revision of the legal framework (the GreenData4All initiative). This is accompanied by an overall modernisation of the technical infrastructure, increasingly based on open source software both on the Commission side (GeoNetwork for the INSPIRE Geoportal, ETF for the INSPIRE Reference Validator and Re3gistry for the INSPIRE Registry) and on the Member State side, where FOSS4G tools are the primary choice for both serving and consuming data. Thanks to a number of INSPIRE Good Practices promoted by the community, new standards and approaches for data encoding and sharing (e.g. based on OGC APIs) are bringing additional value to the INSPIRE stack. The same set of approaches ensures full alignment and complementarity between INSPIRE and the Implementing Act on high-value datasets, thus positioning open source geospatial software as a true enabler for the Green Deal data space.

Open source geospatial ‘Made in Europe’
UBT F / N212 - Floor 3
10:30
30min
QGIS Feature Frenzy - both for the Long-term release (3.28) and the Latest release (3.32)
Kurt Menke

QGIS releases three new versions per year and each spring a new long-term release (LTR) is designated. Each version comes with a long list of new features. This rapid development pace can be difficult to keep up with, and many new features go unnoticed. This presentation will give a visual overview of some of the most important new features released over the last calendar year.

In March of 2023 a new long-term release was published (3.28), and shortly before FOSS4G the latest stable version of QGIS (3.32) will be released. I will start by comparing the new LTR (3.28) to the previous one (3.22). Here I will also summarize by category the new features found in the latest LTR (GUI, processing, symbology, data providers, etc.).

I will then turn my attention to the important new features found in the latest releases (3.30 & 3.32). Each highlighted feature will not simply be described but will be demonstrated with real data. The version number for each feature will also be provided. If you want to learn about the current capabilities of QGIS, this talk is for you!

Potential topics include: annotation layers, GUI enhancements, new expressions, point cloud support, print layout enhancements, new renderers and symbology improvements, mesh support, 3D, and editing.

State of software
Outdoor Stage
10:30
30min
Smart Maps for the UN and All - keeping web maps open
Hidenori Fujimura, Yui Matsumura

Do you want to broaden your horizons by learning about geospatial support for the United Nations operations? Or are you interested in developing highly efficient and portable geospatial apps which make use of PMTiles, COPC, COG, Raspberry Pi, and a cool Web3 technology named IPFS (InterPlanetary File System)? We are doing both in the Domain Working Group 7 (DWG 7) on Smart Maps of the UN Open GIS Initiative.

In this participatory and voluntary DWG, established in Firenze in August 2022, participants bring in their objectives and combine efforts within the Partnership for Technology in Peacekeeping to bring greater involvement to peacekeeping through innovative approaches and technologies that have the potential to empower UN global operations. In addition to our core objective of supporting the use of the UN Vector Tile Toolkit in the UN Global Service Centre, DWG 7 is supporting domestic and campus-level service operations, and supporting 3D geospatial data such as point clouds and 3D city models. We are combining efforts to define and implement the concept of Smart Maps.

We are happy to share with you our new effort named Model UN Development and Operations (MUNDO), which simulates geospatial support for United Nations operations by making use of existing open geospatial data and our Smart Maps technologies. The MUNDO project is useful not only for demonstrating the technology to UN staff, but also for learning about the situation on the ground and the UN's efforts. We are also happy to share our new concept of WebMaps3, which introduces Web3 technology to web maps. By combining IPFS and cloud optimized formats like PMTiles, COPC, and COG, we succeeded in hosting a vector tile service for a newly released nation-wide cadastre dataset on a Raspberry Pi within 10 days of the release, by producing a 14GB PMTiles file.

Community & Foundation
UBT C / N109 - Second Floor
10:30
30min
Supercharging deck.gl layers with extensions
Felix Palmer

deck.gl is a popular open source data visualization library that uses the power of WebGL to render huge amounts of data performantly in the browser. A collection of versatile layers allows the user to create many different types of visualizations, with excellent support for geospatial data in particular.

The core layers can be extended by means of deck.gl extensions to create interactive experiences that are not possible in other data visualization frameworks.

This talk will give an overview of deck.gl, including some of the core layers, and will then focus on three of the latest extensions (a short sketch of how an extension attaches to a layer follows the list):

  • The CollisionFilterExtension avoids collisions between features on screen. This can be used to selectively show large cities in preference to small ones when they would otherwise overlap, or to lay out labels.
  • The MaskExtension implements realtime masking of data by an arbitrary spatial boundary. An example use case is clipping a set of roads and places of interest to the boundary of a city.
  • The TerrainExtension offsets the 3D component of features by referencing a separate 3D layer. For example, a set of pins on a map can be placed at the correct height relative to a 3D terrain layer.
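A minimal sketch of the extension mechanism, using the CollisionFilterExtension (the city data and priority function are illustrative assumptions):

```js
import { Deck } from "@deck.gl/core";
import { TextLayer } from "@deck.gl/layers";
import { CollisionFilterExtension } from "@deck.gl/extensions";

// Hypothetical input data; population figures are illustrative only
const cities = [
  { name: "Prishtina", position: [21.17, 42.66], population: 200000 },
  { name: "Prizren", position: [20.74, 42.21], population: 178000 },
];

const cityLabels = new TextLayer({
  id: "city-labels",
  data: cities,
  getPosition: (d) => d.position,
  getText: (d) => d.name,
  getSize: 16,
  // The extension hides lower-priority labels that would overlap on screen
  extensions: [new CollisionFilterExtension()],
  getCollisionPriority: (d) => Math.log10(d.population),
});

new Deck({
  initialViewState: { longitude: 21.0, latitude: 42.6, zoom: 6 },
  controller: true,
  layers: [cityLabels],
});
```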
Use cases & applications
UBT D / N113 - Second Floor
10:30
30min
What's new and coming up in OpenLayers
Olivia Guyot

OpenLayers is a powerful web-mapping library, and it has been around for quite a while. Far from resting on a feature set that already covers most needs, the community of contributors and maintainers continuously pushes it forward, rethinking orientations and taking in new trends. Be it cloud-native formats, emerging standards or drastic performance improvements, more and more innovations are becoming part of the OpenLayers feature set.

This talk will give you an overview of the past few years of development and show the many incredibly useful ways OpenLayers can be used nowadays. We will also discover the exciting developments that are shaping up for the future, and how all this is being made possible.

State of software
Mirusha
11:00
11:00
30min
#30DayMapChallenge with Open tools
Raymond Lay

30DayMapChallenge is a daily mapmaking challenge that has been held on social networks every November since 2019. The challenge has grown more popular in the mapmaking community year after year, and more than 8000 maps were posted during the 2022 edition.

Last year was my first participation. It was a great opportunity to try making unusual maps, to complete dormant projects, and to stay up to date with geospatial technologies.
This talk will present how the challenge was completed and, especially, which open tools were used to make the 30 maps.

Open Data
UBT C / N111 - Second Floor
11:00
30min
EGMS: Validating 10.000 million open geospatial ground motion timeseries at EU scale
Joan Sala Calero

The European Ground Motion Service (EGMS) is part of the Copernicus Land Monitoring Service (CLMS) led by the EEA (European Environment Agency). EGMS is based on full-resolution InSAR processing (20x5m) of European Space Agency (ESA) Sentinel-1 (S1) data. This massive geospatial timeseries dataset is composed of ~10.000 million timeseries distributed over 31 European countries. The baseline covers 2015-2020 and updates are published on a yearly basis. It is publicly accessible at https://egms.land.copernicus.eu/ with a 3D viewer and download service.

This open dataset consists of three product levels (Basic, Calibrated and Ortho). The Basic and Calibrated products are offered at full resolution 20x5m (Line of Sight), whereas the Ortho product offers horizontal (East-West) and vertical (Up-Down) components anchored to the reference geodetic model, resampled at 100x100m.

Sixense is coordinating a consortium responsible for the independent validation of this continental-scale geospatial dataset. The validation goal is to verify that the EGMS products are consistent with user requirements and product specifications, covering the expected range of applications. To evaluate the fitness of the EGMS ground motion data service, seven reproducible validation activities (VAs) have been developed, gathering validation data from different sources across 12 European countries:

• VA1 – Point density check performed by Sixense.

• VA2 – Comparison with other ground motion services carried out by NGI (Norwegian Geotechnical Institute).

• VA3 – Comparison with inventories of phenomena/events performed by BRGM (French Geological Survey).

• VA4 – Consistency check with ancillary geo-information carried out by NGI.

• VA5 – Comparison with GNSS data performed by TNO (Dutch Geological Survey).

• VA6 – Comparison with insitu monitoring data performed by GBA (Austrian Geological Survey).

• VA7 – Evaluation of XYZ positions and displacements with corner reflectors performed by TNO.

The validation environment developed and maintained by Terrasigna includes all the necessary elements to perform all the validation tasks from data collection and description to execution of the different methodologies. The objective of this portable Kubernetes/Terraform cloud-based system is to guarantee reproducibility of all the validation activities:

• A MinIO web-based validation data upload tool where scientists can upload their validation data and EGMS subsets.

• A validation data catalogue based on GeoNode (built on OGC CSW) where all validation site data are properly described and georeferenced to ensure reproducibility.

• JupyterHub notebook environment where scientists can develop their validation scripts (Python/R). These notebooks produce graphs and figures to be included in the yearly validation reports.

Open Data
UBT F / N212 - Floor 3
11:00
30min
Google Earth Engine and the Use of Open Big Data for Environmental and Climate-change Assessments: A Kosovo Case Study
Dustin Sanchez

Kosovo is one of the most environmentally degraded countries in Europe. It is also one of the poorest. The country lacks the capacity to conduct environmental assessments to gauge the scale of its environmental problems. It has even less capacity to understand its vulnerability to climate change and its prognosis for sustainable development. This paper describes the use of available (open) resources by the technically trained to understand environmental changes and provide a framework for developmental research that provides practical understandings of climate impacts. There tends to be a lack of awareness of the tools and scant knowledge of their use towards sustainable development.
An environmental assessment of Kosovo using large and open remote-sensing data from Google Earth Engine is explained through an embedded multi-case design. Our approach used publicly available models and code walkthroughs from the book Cloud-based Remote Sensing with Google Earth Engine. The models were coded for Kosovo and the greater western Balkans region in JavaScript using Google Earth Engine open datasets to analyze environmental conditions in this region. This work demonstrates the value of free and open tool development and analysis for advancing environmental sustainability. The use of open data requires careful analytical designs and the application of correct tools for specific regions and particular uses. Complex environmental conditions can muddle the data and analyses generated from open datasets. The “un-muddled” analysis performed here adds to the knowledge base of the environmental conditions within Kosovo and provides insight into regional assessment of changing climates.
Models for air pollution and population exposure, groundwater monitoring with GRACE, urban environments, and deforestation viewed from multiple sensors were compiled into an environmental assessment of the scopes and scales of several environmental issues that plague Kosovo. The air pollution and population exposure model assesses the human toll of air pollution in Kosovo. Groundwater monitoring with the Gravity Recovery and Climate Experiment (GRACE) appraises the health of aquifers and the security of water resources. Urban-environment analysis evaluates the changes that are occurring in urban locations in Kosovo. And the deforestation model is used to determine and evaluate the changes to several environments in Kosovo. The project will also include discussions of scalability to understand how the interconnected environmental conditions of the Balkans region can be further studied. The models, analytical frameworks, and overarching goals provide a robust strategy for practically leveraging remotely sensed data to deliver value in developing countries.
The methods are interchangeable and replicable for climate-change analysis, sustainability decision making, and monitoring of environmental change. The urban expansion in Kosovo from 2010 to 2020 is studied with Landsat and MODIS mission data to understand the consequences of land use change. The air pollution and population exposure model employs Sentinel-5P TROPOMI and population density data to help discern air pollution levels and the human toll of environmental degradation. The groundwater monitoring application uses GRACE data to clarify water storage capacities and trends within Kosovo’s aquifers. The forest degradation and deforestation model uses Landsat mission data to understand the changes occurring within the forests of Kosovo. The combination of these models creates a comprehensive case study of the environmental conditions within Kosovo and provides a baseline for understanding the effects of changing climates in the region. This information is crucial in developing effective strategies to address the challenges posed by climate change and to ensure a sustainable future for the region.
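To give a flavour of the workflow, a minimal Earth Engine Code Editor sketch in the spirit of the NO2 exposure model (the dataset IDs are real catalogue entries, but the region lookup, date range and visualisation parameters are illustrative assumptions, not the study’s actual code):

```js
// Sketch: mean tropospheric NO2 over Kosovo from Sentinel-5P TROPOMI
var kosovo = ee.FeatureCollection('FAO/GAUL/2015/level0')
    .filter(ee.Filter.eq('ADM0_NAME', 'Kosovo'));  // boundary lookup assumed

var no2 = ee.ImageCollection('COPERNICUS/S5P/NRTI/L3_NO2')
    .select('NO2_column_number_density')
    .filterDate('2020-01-01', '2021-01-01')
    .mean()
    .clip(kosovo);

Map.centerObject(kosovo, 8);
Map.addLayer(no2, {min: 0, max: 2e-4, palette: ['blue', 'yellow', 'red']},
    'Mean NO2, 2020');  // stretch values are illustrative
```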
This paper clarifies the methods used for modeling big data sets in Google Earth Engine to generate products that can be used to assess both climate change and environmental change. We explore frameworks for cloud computing of open-data environmental analyses by evaluating data selection and analytical techniques, providing an analytical framework for future development. We further build a cross-sectional understanding of how Google Earth Engine can be leveraged with analytical frameworks: for academic resilience-building research, for government institutional knowledge building, for closing private-sector sustainable-development gaps, and for public-sector environmental and climate strategies.
The emergence of new technologies has provided opportunities for new approaches to broadly understand the impacts of global climate change, and free-to-use frameworks make that understanding attainable for developing countries. The use of this technology enables the development of a regional understanding of climate change, its impacts, and the approaches for enhancing resilience through analysis of petabytes of open satellite data. This paper delivers a framework with which remotely sensed data can be assessed to understand how human-environment interactions in developing nations will be influenced by changing climates. These functionally different models share environmental links and, taken further, point to a future in which open big data builds climate-change resilience through a top-to-bottom understanding of what the data mean and how they can be applied.

Academic Track
UBT E / N209 - Floor 3
11:00
30min
Graph-based geo-intelligence
Francesca Drăguț

We developed a free graph-based geo-intelligence engine that delivers fast, scalable, and reliable data analysis. The engine's value lies in its flexibility and applicability to any relational dataset, as well as its integration of open-source technologies and libraries. We chose to build our geo-intelligence engine on a graph infrastructure to enable faster, index-free queries and better support for interconnected data.

To showcase the capabilities of our engine, we have developed a geo-financial software that provides users with a powerful tool for analyzing financial scores of companies based on geo-location. Businesses can quickly and easily analyze data to gain valuable insights into competitors, potential partnerships, and market trends. Our software presents the results of the analysis in a user-friendly and visually appealing format, making it accessible even to non-technical users.

Our geo-financial analysis software is based on a user-specified location and range. The user interacts with an Angular frontend, which incorporates the Leaflet library for map interaction and an OpenStreetMap basemap. The backend is based on Golang, which handles authentication and message-queue interaction with a Python analysis tool. The data for Python processing is retrieved from a Neo4j graph database, accessed through Cypher queries and networking algorithms. All of the software components run in separate containers, enabling flexible and independent scalability with Docker Compose and orchestration by Kubernetes.
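To make the graph access pattern concrete, here is a minimal sketch using the Neo4j JavaScript driver (the label, properties, credentials and scoring are assumptions for illustration, not the project's actual schema; point.distance is Neo4j 5 syntax - earlier versions use distance()):

```js
import neo4j from "neo4j-driver";

const driver = neo4j.driver(
  "bolt://localhost:7687",
  neo4j.auth.basic("neo4j", "secret") // credentials are placeholders
);
const session = driver.session();

// Companies within `range` metres of a user-specified location,
// ranked by a hypothetical financial score property
const result = await session.run(
  `MATCH (c:Company)
   WHERE point.distance(c.location,
         point({latitude: $lat, longitude: $lon})) < $range
   RETURN c.name AS name, c.score AS score
   ORDER BY score DESC LIMIT 25`,
  { lat: 42.66, lon: 21.17, range: 5000 }
);

result.records.forEach((r) => console.log(r.get("name"), r.get("score")));
await session.close();
await driver.close();
```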

In this presentation, we will discuss our graph-based geo-intelligence engine, which is the backbone of our application. We will showcase the geo-financial analysis application itself, providing a demo and demonstrating how it can be used for business geo-intelligence analysis. Throughout the presentation, we will continuously discuss the open-source technologies that are at the core of our work and focus on the value that each of them has brought to our achievements.

Use cases & applications
UBT D / N113 - Second Floor
11:00
30min
How to get points of interest from OSM
Ilya Zverev

This talk is exactly what it says on the tin: I want to extract restaurants or shops or train stations from OpenStreetMap. Or every POI there is. How do I do that, and why is extraction so damn hard? This talk is not exactly a one-two-click instruction: we will see how data gets into OSM and why it is not easy to get it out.
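For context, one common self-service route is querying the Overpass API - a minimal sketch follows (the public endpoint and bounding box are illustrative, and this is exactly the kind of approach whose limits the talk explores):

```js
// Fetch restaurant POIs in a bounding box (south, west, north, east)
const query = `
  [out:json][timeout:25];
  (
    node["amenity"="restaurant"](42.62,21.10,42.70,21.22);
    way["amenity"="restaurant"](42.62,21.10,42.70,21.22);
  );
  out center;
`;

const response = await fetch("https://overpass-api.de/api/interpreter", {
  method: "POST",
  body: query,
});
const pois = (await response.json()).elements;
console.log(`${pois.length} restaurants found`);
```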

Open Data
UBT D / N112 - Second Floor
11:00
30min
Look, how we build geospatial CMS without using GeoServer and EAV!
Edgars Košovojs

Using a generic or standard content management system (CMS) like Wordpress or Strapi for managing geospatial data isn't an optimal solution. Since object geometry isn't just one of many data fields, it requires special handling for setting the data (e.g., on the map), storing it, transforming it for various needs (geometry output format, CRS, etc.) and using it for spatial analysis.

When talking about a geospatial CMS, one would think that using GeoServer is a must. How else would you visualize a non-trivial amount of data on the map, right? Although GeoServer might be a good answer, it is not the only one. We, at our company, have developed a custom geospatial CMS using the OpenLayers mapping library on the frontend and PostgreSQL (with PostGIS, of course) on the backend, with PHP (Laravel) and GeoJSON as the middleman between the data store and the frontend.

CMS platforms frequently share one specific requirement: different objects may have different attributes. The EAV (entity-attribute-value) model is one frequently used approach, but it usually brings a number of issues, such as awkward querying and storage of the data. In our CMS, we took the opportunity to swap the EAV model for a straightforward JSON field.
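A minimal sketch of what that swap buys, using node-postgres (the table, columns and filter values are assumptions for illustration): one indexed JSONB column replaces the join chain that EAV would require.

```js
// Query features by a dynamic attribute plus a bounding box, assuming a
// hypothetical table: features(id, geom geometry(Point, 4326), attrs jsonb)
import pg from "pg";

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

const { rows } = await pool.query(
  `SELECT id,
          attrs->>'name' AS name,
          ST_AsGeoJSON(geom)::json AS geometry
     FROM features
    WHERE attrs @> $1::jsonb                      -- GIN-indexable containment
      AND ST_Intersects(geom, ST_MakeEnvelope($2, $3, $4, $5, 4326))`,
  [JSON.stringify({ category: "park" }), 21.10, 42.62, 21.22, 42.70]
);

console.log(rows.length, "parks in view");
```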

This talk will present the choices we made to build the solution this way, and some of the challenges we faced along the way.

Use cases & applications
UBT D / N115 - Second Floor
11:00
30min
Offline web map server "UNVT Portable"
ShogoHirasawa, Hidenori Fujimura, Taichi Furuhashi

UNVT Portable is a package for the Raspberry Pi that allows users to access a map hosting server via a web browser within a local network, primarily for offline use during disasters. It is designed to aid disaster response by combining aerial drone imagery with OpenStreetMap and open data tile datasets.

Use cases & applications
UBT C / N109 - Second Floor
11:00
30min
QGIS 3D, point cloud and elevation data
Saber Razmjooei

Since we introduced QGIS 3D in 2017, it has gone through major improvements. In addition to new features, several new data formats have also been integrated into QGIS.

This presentation will cover the latest improvements made as a result of the recent crowdfunding efforts to introduce point cloud processing and enhance 3D maps and elevation data handling.

State of software
Outdoor Stage
11:00
30min
State of the OL-Cesium library
Guillaume Beraudo

OL-Cesium is a popular open source JavaScript library that you can leverage to add 3D to a new or existing OpenLayers application. You code the logic in a single place and it gets applied to both the OpenLayers 2D map and the Cesium 3D globe. The library handles the synchronization of the view, layers and styling for you, and this behaviour is customizable.
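A minimal sketch of that single-place coding model (the layer and view choices are illustrative; a Cesium build is expected to be available as described in the project's documentation):

```js
import Map from "ol/Map.js";
import View from "ol/View.js";
import TileLayer from "ol/layer/Tile.js";
import OSM from "ol/source/OSM.js";
import OLCesium from "olcs/OLCesium.js";

// A plain OpenLayers 2D map, written once
const ol2d = new Map({
  target: "map",
  layers: [new TileLayer({ source: new OSM() })],
  view: new View({ center: [0, 0], zoom: 2 }),
});

// Wrap it: view, layers and styling stay synchronized with the globe
const ol3d = new OLCesium({ map: ol2d });
ol3d.setEnabled(true); // toggle between the 2D map and the Cesium 3D globe
```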

Since its creation 9 years ago, the library has attracted a large community of users. It has evolved to follow OpenLayers, Cesium and the global JavaScript ecosystem.
This talk is about the strengths of the library, its current state and the plans for the future.

State of software
Mirusha
11:00
30min
vis.gl, the powerful framework suite behind deck.gl and kepler.gl
Alberto Asuero Arroyo, Ib Green

vis.gl is a suite of composable, interoperable open source geospatial visualization frameworks (GPU powered) centered around deck.gl. During the last 4 years, vis.gl has played an essential role in the development of geospatial applications.

With close to 100K daily downloads from npm, it's widely used today in many areas and industries: from academic teams to enterprise companies like Uber, Foursquare, CARTO, Google or Amazon.

The open governance of vis.gl has guaranteed the evolution and maintenance of the frameworks; the project joined the OpenJS Foundation in 2022 with the main goal of reinforcing the open evolution of the project.

During this talk we'll give a quick, high-level introduction to the most important frameworks in the suite (deck.gl, kepler.gl, loaders.gl, etc.), review the most important features and milestones achieved in the last year, and share the strategy and direction for the next year.

State of software
Drini
11:30
11:30
30min
A review of Mapillary Traffic Sign Data Quality and OpenStreetMap Coverage
Yunzhi Lin, Said Turksever

Traffic signs are a key feature for navigating and managing traffic safely, affecting all of us on a daily basis. However, traffic sign datasets are lacking on open government data portals as well as in OpenStreetMap (OSM).

Mapillary's computer vision capabilities can extract more than 1,500 classes of traffic signs globally from street-level imagery. The generated traffic signs are available in iD Editor, Rapid and the JOSM Mapillary plugin to enrich OpenStreetMap data.

Our team wanted to know how the accuracy of traffic signs detected by Mapillary compares with the reality on the ground (the ground truth). To answer this question, we collected thousands of ground truth records in San Francisco and used this information to produce the recall, precision, and positional accuracy of our machine-generated traffic sign data. This provided some interesting insights into OpenStreetMap and the level of completeness and gaps of that dataset.

In this talk, we will cover Mapillary's traffic sign extraction capabilities, compare Mapillary-generated traffic sign data against ground truth data, and examine OSM's traffic sign coverage in San Francisco's downtown. We will also address how data quality can be improved using various data collection techniques, and the role of post-processing with Structure from Motion and control point annotations.

Open Data
UBT C / N111 - Second Floor
11:30
30min
Gisquick: Let’s share (Q)GIS much quicker
Martin Landa, Jáchym Čepický

Gisquick (https://gisquick.org/) is an open-source platform for publishing GIS projects on the web. A GIS project is defined by a QGIS project file including data sources (files, databases, even virtual layers) and symbology defined in the QGIS desktop application using the styling tool.

With the help of the Gisquick plugin for QGIS, it is possible to upload the data to the Gisquick server and host the map.

Gisquick is a fully featured hosting platform, where the project administrator can fine-tune web publishing attributes and set predefined scales, bounds, or visibility. Group permissions may also be defined at the project level as well as the layer level (query, edit, export). Vector data - geometry and attributes - can be edited directly on the web.

The interface between the frontend and backend is based on open standards (OGC WMS and WFS). The mapping application has the standard components from a GIS point of view: a decent layer switcher, attribute table, zoomable map, printing tool (based on QGIS templates), and a customizable feature-detail form.

All this can be tested on our demo platform https://demo.gisquick.org/ - but you can also make your own deployment via Docker images. Gisquick is open-source software published under the GNU GPL.

In the presentation, we are going to present various features of Gisquick, show practical examples, and discuss the technologies used for its development.

State of software
Drini
11:30
30min
Increasing the uptake of Earth Observation services and products through European efforts
Codrina Ilie

In this talk we introduce a European initiative with global effects that aims to support the uptake of Earth Observation (EO) data products and services by increasing the European capability to generate timely, accurate, disaggregated, people-centred, accessible and user-friendly environmental information based on EO data. The initiative - Open Earth Monitor Cyberinfrastructure - follows a well-defined workflow:
(1) Identify gaps and analyse needs: finding out, together with stakeholders, what the bottlenecks of data platforms are;
(2) Use an open source EO computing engine: integrating EO with in-situ data to obtain improved geospatial data services and products;
(3) Build better data portals: harmonise, bridge and improve existing open source platforms;
(4) Make data platforms FAIR: improve accessibility of data with open source licences and capacity building;
(5) Serve concrete goals: all Open Earth Monitor activities are centred around pre-defined use cases with various stakeholders.

We do not plan to reinvent the wheel, therefore all our efforts will focus on improving existing open source solutions and other initiatives, such as: OpenEO.org, Geopedia.world, GlobalEarthMonitor.eu, EarthSystemDataLab.net, OpenLandMap.org, EcoDataCube.eu, LifeWatch.eu, XCUB and EuroDataCube.com. Our developments will materialise in a series of monitoring tools at the European as well as the global level in various fields: forestry, natural hazards, biodiversity, crop monitoring etc.

In the context of Open Earth Monitor, Cyberinfrastructure is defined as the coordinated aggregate of software, hardware, human expertise and other technologies required to support current and future discoveries in science and engineering, enabling relevant integration of often disparate resources to provide a useful and usable framework for research, discovery and decision-making characterised by broad access and "end-to-end" coordination.

Open Earth Monitor Cyberinfrastructure has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No. 101059548. (HORIZON-CL6-2021-GOVERNANCE-01).

Open source geospatial ‘Made in Europe’
UBT F / N212 - Floor 3
11:30
30min
MapServer Features by Example
Seth Girvin

MapServer, a founding OSGeo project, has been powering mapping systems since the mid-1990s. This talk gives an overview of the many features of MapServer that have been developed over the past 25 years, with a focus on advanced functionality that is not as well known as it deserves to be.

Features will be shown using sample Mapfiles - the configuration files used by MapServer. Examples will include advanced symbology, special layer types such as graticules, charts, and contours, displaying data from S3 buckets, and more!
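For readers who have not met Mapfiles before, a minimal hand-written sketch of the format (the paths, names and colours are illustrative, not taken from the talk):

```
MAP
  NAME "demo"
  EXTENT -180 -90 180 90
  SIZE 800 600
  IMAGETYPE png

  LAYER
    NAME "countries"
    TYPE POLYGON
    STATUS ON
    DATA "/data/countries.shp"   # path is an assumption
    CLASS
      STYLE
        COLOR 200 220 240
        OUTLINECOLOR 60 60 60
      END
    END
  END
END
```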

State of software
Mirusha
11:30
30min
Mergin Maps: A Year of Progress
Tomas Mizera

Mergin Maps has become a popular open-source GIS platform for collecting, managing and sharing geospatial data. In the past year, we have introduced several new features and improvements to the platform. Our goal is to provide a flexible and powerful GIS solution that is accessible to users of all levels, from seasoned professionals to those just getting started. In this talk, we will highlight the latest developments and demonstrate how they can benefit users in various fields.

One of the significant updates is the introduction of workspaces, which allows users to organize their projects, data, and users in a hierarchical structure. This new feature streamlines the management of multiple projects and simplifies the process of adding and removing users.

Another update is the implementation of tracking, which enables users to collect and visualize location data. This feature is particularly useful for tracking vehicles, equipment, and personnel in the field, and can be customized to include various attributes.

Finally, we will discuss the Mergin Maps roadmap for the future, including plans for new features, enhanced integrations and community-driven development. We believe these changes will make Mergin Maps more accessible and user-friendly for everyone, regardless of their level of experience.

Whether you are a seasoned GIS professional or new to the world of geospatial data, this talk will provide valuable insights into the latest developments in Mergin Maps and its potential for your work.

State of software
Outdoor Stage
11:30
30min
Motivating environmental citizen scientists and open data acquisition on openSenseMap with Open Badges
Frederick Bruch, Mario Pesch

The Christmas Bird Count, started by ornithologist Frank Chapman in 1900, is one of the earliest and longest-running citizen science projects in the world. Today, it involves thousands of birdwatchers who count birds over a 24-hour period in mid-December. The data collected during the Christmas Bird Count provides scientists with valuable information about bird populations, migration patterns, and other important ecological trends. This project set the stage for the growth of citizen science initiatives, where people participate in scientific research.
Recently, there has been an increase in the number of citizen (cyber-)science projects, which leverage the power of the internet and digital technology to involve people in scientific research. These projects have had a significant impact on society, contributing to advancements in fields such as astronomy, ecology, and health. While these projects can be a lot of fun, sometimes the tasks for participants can be monotonous, and they can lose motivation to continue being a part of the project. Therefore, project organizers need to keep participants engaged. This is where gamification comes into play: applying game elements to anything that isn't a game is known as gamification. Adding elements of competition and rewards can help people stay engaged in a project and continue making contributions (Haklay, 2012). This can be especially helpful for long-term projects that require continual effort from participants. The openSenseMap(1) is an open-source(2) citizen cyber-science platform that facilitates environmental monitoring by allowing individuals to measure and publish sensor data. The platform is designed to create a community-driven network of sensors to monitor various environmental factors, such as air and water quality and much more. A significant advantage of the platform is that it operates on open data principles, whereby all sensor data is accessible to the public(3). This openness encourages collaboration and facilitates innovation, which has led to numerous applications in environmental monitoring. Despite its success, the platform still faces challenges regarding user engagement and motivation, necessitating the incorporation of gamification strategies to enhance participation.
Digital badges can be earned in a variety of settings and are a recognized symbol of skill or accomplishment. Although badges are a common gamification component, they are typically only usable in closed environments. The possibility of awarding badges for voluntarily participating in scientific research can increase participant motivation. The ability to display, share, and verify badges alongside skills and credentials from other environments has changed the game of digital credentials. This technology is called Open Badges.
This paper focuses on the motivational impact Open Badges can have on citizen science in the context of the openSenseMap platform. Users of the openSenseMap platform were surveyed for this study. Based on the results, a prototype was implemented, combining an open badge platform with the existing openSenseMap platform. The prototype added an open badge component to the platform, allowing users to earn badges for various achievements, such as contributing a certain number of measurements or completing a specific task.
The badges were designed to be displayed on the users' profiles and could be shared on social media or other online platforms. This feature enabled participants to showcase their contributions and achievements, increasing their motivation to continue participating in the project. The survey results indicated that participants found that the open badge component made the citizen science platform more interesting, which may suggest that open badges have the potential to increase motivation and engagement in citizen science projects.
Furthermore, it's important to note that the open badge platform (called myBadges(4)) used in this project is open source(5), aligning with the spirit of collaboration and transparency in citizen science. By leveraging the power of Open Badges and open-source technology, this project has the potential to drive significant positive change in the field of cyber-science and promote reproducibility in scientific research.
In addition to its potential impact on citizen cyber-science, Open Badges can also be adapted to the open (geo)education context. Open Badges can provide learners with an opportunity to showcase their knowledge and skills in a tangible and transferable way (Halavais, 2012, "A genealogy of badges: inherited meaning and monstrous moral hybrids"). By earning badges for completing educational tasks, learners can build a portfolio of evidence that can be used to demonstrate their achievements and credentials. This can be particularly valuable in fields such as geospatial science, where there is a growing demand for individuals with specific technical skills and knowledge. The use of Open Badges in open (geo)education can enhance the learning experience and increase learner motivation, leading to improved educational outcomes and better-equipped professionals in the field.
This paper explores the use of Open Badges, a gamification component, to enhance engagement and motivation in citizen cyber-science projects. The proposed approach uses an open-source citizen cyber-science platform, the openSenseMap, to collect and publish sensor data, making it accessible to the public. The incorporation of Open Badges can incentivize participants to contribute to the project continually. The results of our survey indicated that participants found the open badge component to be an engaging and motivating feature, which suggests that Open Badges have the potential to increase engagement in citizen science projects. This paper's contribution aligns with the FOSS4G academic track audience's interest in exploring innovative approaches to using open-source technology to address environmental and social challenges. Therefore, this paper's findings and implementation approach could be of significant interest to the FOSS4G academic community.

1 - https://opensensemap.org
2 - https://github.com/sensebox/openSenseMap-API
3 - https://docs.opensensemap.org
4 - https://mybadges.org/public/start
5 - https://github.com/myBadges-org/badgr-server

Academic Track
UBT E / N209 - Floor 3
11:30
30min
Road condition assessment and inspection using deep learning
Bogdan Negrea

Road Surface Inspector is a system developed by IT34 with the purpose of speeding up road damage registration by using deep learning. The time-consuming process of inspecting and registering road damage is reduced significantly by using our Road Scanner Inspector app, which can be placed in the windshield of any vehicle. The app records a video and GPS coordinates, which are later processed with deep learning to find different types of damage: potholes, cracks and damaged markings.

The system can also detect other types of assets, such as traffic signs, traffic lights and manholes, that can be used in, for example, digitalization tasks.

The results of the image analysis are presented on a webgis portal as heatmaps showing the condition of the road in the areas that were inspected using the app. The heatmaps are then used by decision makers to prioritize road maintenance work.

While using the app, GPS logs are built in real time from the positions sent by the phone while driving. These are later used to document street inspections.

Open source components:
  • Postgres + PostGIS for storing the data and for geometry-based analysis
  • PyTorch and YOLOv7 for deep learning
  • OpenLayers for visualizing the images/detection results as rasters in the webgis
  • GeoServer for publishing data as WMS/WFS
  • QGIS as an external visualization tool for the data

Use cases & applications
UBT D / N113 - Second Floor
11:30
30min
Self-hosted CMS maps for everyone
Pirmin Kalberer

Privacy-aware Content Management System (CMS) operators don't want to make their visitors accept cookies from an external map provider. But creating a map used to require specialized GIS knowledge, and hosting a map server is not everyone's cup of tea.

This talk explains how non-experts can serve a map based on OpenStreetMap vector tiles from a CMS. A MapLibre GL JS based Wordpress plugin displaying a self-hosted PMTiles dataset is shown as an example.
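A minimal sketch of the approach (the archive URL, source and layer names are assumptions): registering the pmtiles:// protocol lets MapLibre GL JS read vector tiles straight out of one static, self-hosted file via HTTP range requests, so no third-party tile server - and no third-party cookies - are involved.

```js
import maplibregl from "maplibre-gl";
import { Protocol } from "pmtiles";

// Teach MapLibre to read tiles from a single static PMTiles archive
const protocol = new Protocol();
maplibregl.addProtocol("pmtiles", protocol.tile);

const map = new maplibregl.Map({
  container: "map",
  style: {
    version: 8,
    sources: {
      osm: {
        type: "vector",
        url: "pmtiles://https://example.org/tiles/osm-extract.pmtiles",
      },
    },
    layers: [
      {
        id: "water",
        source: "osm",
        "source-layer": "water", // layer name depends on the tileset schema
        type: "fill",
        paint: { "fill-color": "#9cc0f9" },
      },
    ],
  },
});
```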

Use cases & applications
UBT D / N115 - Second Floor
11:30
30min
The power of collective intelligence: HOT’s approach to open tech and innovation
Petya Kangalova, Synne Marion Olsen

Are you interested in open geospatial tech for humanitarian purposes? Have you ever wondered who the people behind the geospatial technologies are? The collective brains? In this talk, we will tap into the power of the tech collective at Humanitarian OpenStreetMap Team, share our experience, excite you about joining the collective and get some hands-on input from YOU!

Meet two members of the Humanitarian OpenStreetMap Team (HOT) - Petya & Synne. We are a global team that operates with four regional Open Mapping Hubs: https://www.hotosm.org/hubs/. In developing and improving open geospatial tech for humanitarian purposes, our vision is to creatively meet the needs of the communities through collective, community-centered efforts. Our mission? To amplify community-led innovation for impact through diversity, creativity & passion!

Some of the stories we will share will be about our experiences and lessons learnt on collective projects and products (https://github.com/hotosm/), ranging from the HOT Tasking Manager collective, collaborating with Kathmandu Living Labs (KLL) in Nepal, to the development of the Field Mapping Tasking Manager (FMTM). We will also share some of the boldest regional activities, including the OpenStreetMap (OSM) Hackfest in Asia Pacific and the Ideas Lab in Eastern and Southern Africa.

You will also find out how YOU can get involved by contributing to open geospatial tech. Expect a short participatory exercise [the collective brains/ power of collective intelligence] during this session!

Community & Foundation
Lumbardhi
11:30
30min
UNDP's one stop shop for cloud based geospatial data visualisation and analytical tool
Jin Igarashi, Joseph Thuha, Samara Dilakshani Polwatta Polwatta Lekamlage, Ioan Ferencik

United Nations Development Programme (UNDP) is a United Nations agency tasked with helping countries eliminate poverty and achieve sustainable economic growth and human development.

Recent advances in technology and information management have resulted in large quantities of data being available to support improved data-driven decision making across the organization. In this context, UNDP has developed a corporate data strategy to accelerate its transformation into a data-driven organisation. Geospatial data is included in this strategy and plays an important role in the organization. However, the large-scale adoption and integration of geospatial data was obstructed in the past by issues related to data accessibility (silos located in various country offices) and interoperability, as well as sub-optimal hard and soft infrastructure and know-how.
All these issues have been addressed recently, when UNDP SDG Integration started developing a geospatial hub - GeoHub - to provide geospatial data visualisation and analytical tools to UNDP staff and policymakers.

UNDP GeoHub is a repository of a wide array of datasets covering the most recent time spans available - at your fingertips! It is a centralized ecosystem of geospatial data and services to support development policymakers. It allows users to search and visualise datasets, compute dynamic statistics and download the data. In addition, GeoHub makes it easy to share maps with the community, and you can also upload your own valuable data to share. It connects geospatial knowledge and know-how across the organization to enhance evidence-based decision-making with relevant data-led insights.

The GeoHub ecosystem consists of SvelteKit- and MapLibre-based frontend web applications and various FOSS4G software on the backend. PostgreSQL/PostGIS, titiler, pg_tileserv and martin are deployed on Azure Kubernetes Service (AKS) to provide advanced visualisation and analysis for users. All source code is published on GitHub under an open-source license.

Use cases & applications
UBT C / N109 - Second Floor
12:00
12:00
90min
lunch
UBT C / N110 - Second Floor
12:00
90min
lunch
UBT C / N111 - Second Floor
12:00
90min
lunch
UBT D / N112 - Second Floor
12:00
90min
lunch
UBT D / N113 - Second Floor
12:00
90min
lunch
UBT D / N115 - Second Floor
12:00
30min
Disaster Mapping Prioritization in OSM
Harry Mahardhika Machmud, Honey Fombuena

One of the primary motivations for the Open Mapping Hub Asia-Pacific to increase the quantity and quality of OpenStreetMap (OSM) data in the region is the region's high exposure to multiple types of hazards.

Apart from assisting response efforts following a disaster event by providing access to critical geospatial information, the hub aims to ensure that OSM data is already available in high-risk areas, even before a disaster occurs, to be used in critical anticipatory action such as developing early warning systems and mitigation plans. It is critical to have a systematic method for determining the OSM mapping requirements in these disaster hotspots.

Although some tools separately assess the completeness of OSM data and the disaster risk level of a location, a new tool that combines these assessments is required to highlight the areas that should be prioritized for mapping in OSM.

The Open Mapping Hub Asia-Pacific created a data-driven method for determining which areas should be prioritized for disaster mapping in OSM. The resulting method is deployed as a QGIS plug-in and distributed to OSM communities for offline assessments to identify disaster-prone areas that have not yet been mapped in OSM.

Use cases & applications
UBT C / N109 - Second Floor
12:00
30min
EuroGEOSS Prototype Development
Albana KONA

Europe is a world leader in Earth Observation (EO) and climate change studies. An outstanding example is Copernicus, the most ambitious EO programme worldwide, which in addition to being an independent system is also a strong component of the Group on Earth Observation (GEO), an intergovernmental partnership aiming to improve the availability, access and use of open EO to support policy and decision making in a wide range of sectors.
Since 2005, the Global Earth Observation System of Systems (GEOSS) has been a key initiative by GEO to integrate platforms and connect existing infrastructures using common standards for sharing and using digital resources. Europe is delivering a regional contribution to GEO, named EuroGEO, by covering the last mile of the EO value chain. However, this regional node lacks the effective interoperability needed to implement a European ecosystem that fully supports the policy cycle.
To fill this gap, the development of a sustainable EuroGEOSS ecosystem connecting many European assets that support European objectives - including data, sensor networks, analytical methods and models, computing infrastructures, products and services - is of vital importance in the evolution of the initiative.
The purpose of this talk is to present the rationale and the development status of a EuroGEOSS prototype, that the European Commission’s Joint Research Centre is conceptualizing.
Starting with the analysis of use cases with the highest European policy priority, five were identified as the most prominent ones to be replicated. Along with the replication of use cases, a monitoring framework of the issues and gaps identified along the life cycle will be populated.
The EuroGEOSS prototype architecture will implement the following patterns: a) portal and Single Sign-On; b) meta-catalogue of services (data, models, infrastructures, etc.); c) a high level of flexibility and modularity; d) adoption of the Machine Learning Operations (MLOps) methodology.
The EuroGEOSS ecosystem is not conceived as yet another platform. It will rather be a virtual platform leveraging: a) open source and open interoperability standards (normative and de facto); b) the interconnection of novel technologies; c) the inclusion of relevant European communities such as those around EuroGEO and INSPIRE; d) scalable interoperable infrastructures: CREODIAS, OpenEO, etc.
The development of the EuroGEOSS prototype will last until the end of 2024, documenting the status of gaps and challenges in the available data and infrastructure, as well as informing a future scenario, a business model and a possible operationalization.

Open source geospatial ‘Made in Europe’
UBT F / N212 - Floor 3
12:00
30min
G3W-SUITE and QGIS integration: state of the art, latest developments and future prospects
Walter Lorenzetti

G3W-SUITE is a modular client-server application (based on QGIS Server) for managing and publishing interactive QGIS cartographic projects of various kinds in a totally independent, simple and fast way.

Access to administration, consultation of projects, editing functions and use of the different modules are based on a hierarchical system of user profiling, open to editing and modulation.

The suite is made up of two main components: G3W-ADMIN (based on Django and Python) as the web administration interface and G3W-CLIENT (based on OpenLayers and Vue) as the cartographic client, which communicate through a series of REST APIs.

The application, released on GitHub with Mozilla Public Licence 2.0, is compatible with QGIS LTR versions and it is based on strong integration with the QGIS API.

This presentation will provide a brief history of the application and insights into key project developments over the past year, including:
* new editing functions and greater integration with QGIS tools and widgets in order to simplify the preparation of web cartographic management systems
* QGIS embedded project management
* WMS-T and MESH data management and integration of TimeSeries functions
* on/off management for the individual symbology categories as in QGIS
* integration of the QGIS Processing API to expose QGIS analysis modules and perform online geographic analysis
* structured management for log consultation on three levels: G3W-SUITE, QGIS-SERVER and DJANGO

The talk, accompanied by examples of the features in action, is dedicated to both developers and users of various levels who want to manage a cartographic infrastructure based on QGIS.

State of software
Drini
12:00
30min
GeoStyler - One Tool for all Styles
Daniel Koch, Jan Suleiman

When it comes to styling geodata, many tools have their own solution: SLD, QGIS styles, OpenLayers styles, Leaflet, …

But what do you do if you need to share the same style across different formats?
GeoStyler brings the solution. With its standalone parsers, nearly any (layer-based) style can be converted from one format to another - from SLD to OpenLayers, QGIS, Mapfile, and vice versa.
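Conceptually, every parser reads its format into GeoStyler's library-agnostic style object and writes back out of it. A minimal sketch (the package names follow the project's parser packages; the result shape with an output property reflects recent versions and should be treated as indicative):

```js
import SLDParser from "geostyler-sld-parser";
import OpenLayersParser from "geostyler-openlayers-parser";

const sldParser = new SLDParser();
const olParser = new OpenLayersParser();

// `sldString` is assumed to hold an SLD document loaded from somewhere
// 1. Parse SLD into the library-agnostic GeoStyler style object
const { output: geostylerStyle } = await sldParser.readStyle(sldString);

// 2. Write that object back out as an OpenLayers style
const { output: olStyle } = await olParser.writeStyle(geostylerStyle);

vectorLayer.setStyle(olStyle); // hypothetical OpenLayers vector layer
```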

On top of this, GeoStyler offers a library of React UI elements to easily create styles in your own WebGIS.

This talk will give an overview of possible use cases for GeoStyler, its latest developments such as the new layout and the support for expressions, as well as past and upcoming community events.

State of software
Mirusha
12:00
30min
How to join OSGeo (for projects)
Tom Kralidis, Jody Garnett

Welcome to the Open Source Geospatial Foundation, proud host of FOSS4G and advocate for free and open source geospatial software everywhere. This is a call-out to open source software developers: please join OSGeo and help us help you!

Join OSGeo today:

  • Even just listing your project on the osgeo.org website is a great first step. Help us promote your technology so users can discover and enjoy your software.
  • The OSGeo “community program” gives project teams a chance to join the foundation, with an emphasis on supporting innovation and new projects. The foundation provides some direct support and assistance, along with endorsement and recognition from our board.
  • For established projects please join our “incubation program” to be recognized for excellence and as a full OSGeo committee.

Unlike other foundations, OSGeo does not require that you give up or transfer any intellectual property; we simply ask that you be spatial, open source, and open to participation.

This presentation gives clear instructions on how to join OSGeo, and representatives from recent successful projects will be on hand to answer your questions.

Community & Foundation
Lumbardhi
12:00
30min
QField news - stakeout, measurements, printing and many more
Marco Bernasocchi

The mobile application QField is based on QGIS and allows fieldwork to be carried out efficiently from QGIS projects, offline or online. Developments in recent months have added further functions that are useful for fieldwork, and the most important new features are presented through examples. Discover the most recent features like 3D layer handling, printing of reports and atlases, elevation profiling of terrain and layers, multi-column support in feature forms, azimuth values in the measuring tool, a locked-screen mode, the QR code reader, stakeout functionality, the official release of the iOS version and many more.

State of software
Outdoor Stage
12:00
30min
SafoMeter - Assessing Safety in Public Spaces: The urban area of Prishtina
Gresa Neziri

As cities expose people to increasing threats, urban planning perspectives on safety remain on the periphery of urban design and policy. Public spaces evoke different emotions in individuals, and the feeling of safety is the primary emotion affecting their well-being and behavior (Pánek, Pászto, & Marek, 2017). For this reason, an urban planning strategy should pay special attention to providing a safe environment, especially in public spaces.

Negotiating with the use of public spaces poses a more significant challenge for marginalized groups, especially for women in every social group for whom sexual harassment and other forms of gender-based violence in public areas are a daily occurrence in every city worldwide (UN Women, 2021). Nonetheless, there is a very limited amount of data that showcases the level of safety of site-specific public spaces, especially for cities in developing countries like Kosovo.

In this regard, aiming to contribute to the effort of developing a methodology for assessing site-specific safety in public spaces, we have developed SafoMeter, a methodological framework for assessing safety in public spaces and its spatial distribution. SafoMeter follows a human-centered approach that analyzes public spaces by looking closely at people's everyday experiences. Its framework is built on mediating indicators that assess both objective safety and subjective perceptions of safety.

The objective indicators for measuring safety fall into two broad categories: urban fabric and accessibility. Research on the relationship between the built environment and perceived safety highlights several physical components attributed to feelings of safety (UN-Habitat, 2020). In addition, spatial criteria/features used in previous research include urban structure and accessibility as two broad categories of spatial elements that positively or negatively affect people's sense of safety (Wojnarowska, 2016).

The subjective indicators for measuring emotional safety fall into the categories of threats and comfort. Contrary to conventional methods, the framework highlights the necessity for collecting data from the individual evaluation of perceived safety. Subjective evaluations of the users of public spaces are considered very important due to the low correlation between objective safety and subjective assessment of one's well-being, as shown in previous research (Von Wirth, Grêt-Regamey & Stauffacher, 2014).

The pilot location used for applying SafoMeter’s methodology to measure safety in public spaces was the urban area of Prishtina. The official population of the Municipality of Prishtina is about 200,000 inhabitants, of which almost 150,000 live in the city area. Being the capital city of Kosovo, the Municipality of Prishtina is the central city of significant political, economic, and social developments in the country.

The data for each indicator of the SafoMeter methodology were collected over a period of three months (July, August, and September 2022) at different hours of the day. The Mergin Maps mobile application was used to collect the field data, recording both objective and subjective indicators. The data collection project was developed in QGIS, version 3.22.12 LTR, and included 8 layers, one for each indicator. A hexagonal grid of 0.86 ha was used to aggregate data into a Safety Index. The results of the Safety Index were then calculated and visualized via QGIS. A particular focus was placed on visualizing unsafe hotspots in the city and showcasing their spatial distribution to inform citizens and decision-makers about the spaces that most urgently need intervention.

On the Safety Index scale from 0 (least safe) to 10 (most safe), all spaces evaluated in the study area score below the midpoint, with a maximum value of 5.57. It can therefore be concluded that the indicators measured in Prishtina point to an urgent need for intervention, both in physical infrastructure and in addressing safety threats posed by the human factor. Additionally, besides being very few, the areas considered safer within the city are not connected to each other, preventing users from moving safely from one place to another. Parks and green spaces, which are scarce in Prishtina, turn out to be among the main hotspots with the lowest scores.

Applying the SafoMeter methodology generated valuable insights for assessing safety in the public spaces of Prishtina. The results of the pilot study reveal an urgent need for intervention. These findings suggest that policymakers and urban planners should prioritize the creation of safer public spaces in Prishtina and other cities facing similar challenges.

At the same time, a systematic safety assessment requires systematic year-round data collection to design effective area-based interventions and policies. A more detailed, continuous data collection process should therefore be established, and it should aim to increase the number of citizens participating in evaluating the safety indicators. All the data collected via the SafoMeter framework will be published on a web-based platform where different user groups can use them. Finally, with SafoMeter we aim to provide a tool that other users can replicate in further studies, shared according to the principles of open-source knowledge.

Academic Track
UBT E / N209 - Floor 3
12:30
12:30
90min
lunch
Outdoor Stage
12:30
90min
lunch
Lumbardhi
12:30
90min
lunch
Drini
12:30
90min
lunch
Mirusha
12:30
90min
lunch
UBT E / N209 - Floor 3
12:30
90min
lunch
UBT F / N212 - Floor 3
12:30
90min
lunch
UBT C / N109 - Second Floor
13:30
13:30
30min
A Synesthete's Atlas: Performing Cartography in Real Time
Eric Theise

Since April 2022 I've been manipulating projected digital maps in collaboration with improvising musicians, dancers, and spoken word artists across Europe and North America. Constraining my project to use only web mapping technologies, "A Synesthete's Atlas" is a curious mutation of expanded cinema, applying strategies from experimental film & animation, color theory, the Light and Space movement, and concrete poetry to geography.

I'll present Carto-OSC, an assemblage of open source libraries, data, and protocols, plus 1000+ lines of JavaScript that integrate it all into a touch-surface interface. I'll discuss my motivations and use of the OSC protocol to control the manipulations, offer aesthetic observations, and present video excerpts of previous performances.

Use cases & applications
UBT D / N113 - Second Floor
13:30
30min
Aircraft trajectory analysis using PostGIS
Benjamin Trigona-Harany

PostGIS supports geometries with a Z dimension and geometries with M (measure) values, but there are not many examples of the two being used together. One use case that requires both is the analysis of airplane tracks: every vertex has an altitude and a timestamp.

This talk will show how live positional data transmitted from aircraft can be accessed in a PostGIS database. I will then show how a sequence of these positions can be represented effectively as LINESTRINGZM geometries which can be analyzed as trajectories using native PostGIS functions.

With spatial SQL, we can do things such as determine anomalous changes in an aircraft's velocity or altitude and find the exact point in time at which two aircraft came closest to one another. The focus of the talk will be on showing how future work on large datasets of ADS-B data can be done using PostGIS and other open-source geospatial tools.

I will cover how to use Python and PostgreSQL's PL/Python language extension to import the data and QGIS to render it, but the analysis will be done in SQL.
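
As a rough sketch of the approach (the table, column and database names are hypothetical), point observations can be folded into LINESTRINGZM trajectories and compared with PostGIS's closest-point-of-approach functions:

```python
import psycopg2

# Hypothetical table: positions(icao TEXT, lon FLOAT8, lat FLOAT8,
#                               alt FLOAT8, ts TIMESTAMPTZ)
BUILD = """
CREATE TABLE trajectories AS
SELECT icao,
       ST_SetSRID(
         ST_MakeLine(ST_MakePoint(lon, lat, alt, extract(epoch FROM ts))
                     ORDER BY ts),
         4326) AS traj
FROM positions
GROUP BY icao;
"""

-- = comment continues in SQL below --
# When (as an epoch stored in M) and how close two aircraft came to each other.
# Note: ST_ClosestPointOfApproach returns NULL if the tracks never overlap in
# time, and ST_DistanceCPA reports distance in coordinate units (degrees here).
CPA = """
SELECT a.icao, b.icao,
       to_timestamp(ST_ClosestPointOfApproach(a.traj, b.traj)) AS closest_at,
       ST_DistanceCPA(a.traj, b.traj) AS min_distance
FROM trajectories a
JOIN trajectories b ON a.icao < b.icao;
"""

with psycopg2.connect("dbname=adsb") as conn, conn.cursor() as cur:
    cur.execute(BUILD)
    cur.execute(CPA)
    for row in cur.fetchall():
        print(row)
```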

Use cases & applications
UBT C / N110 - Second Floor
13:30
30min
Introduction to decentralized geospatial digital twins: merging all LiDAR datasets in the world
Charlie Durand, Bertrand Juglas

How do you create a near-real-time source of 3D geospatial data from around the world?

The French Institute of Cartography and start-up Extra are collaborating to develop a decentralized protocol for this purpose. The Circum protocol will merge LiDAR datasets from various providers, sell this data source to consumers, and redistribute the value back to the original providers.

Circum uses blockchain technology and 3D surface reconstruction algorithms to carry out its mission. Learn about the protocol’s key mechanisms with the team at this conference.

AI4EO Challenges & Opportunities
UBT C / N111 - Second Floor
13:30
30min
Locality-Sensitive Hashing with the Hilbert Curve for fast reverse geocoding
Ervin Ruci

3geonames.org is a free API for fast reverse geocoding, using a new technique of locality-preserving hashing of 2D/3D spatial points to 1D integers via a combination of the Hilbert curve and bit interlacing. This talk expands on the use case and the performance/accuracy advantages of this technique. (The talk slides will be available at: https://3geonames.org/prizren.html )
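
A minimal sketch of the underlying idea, using the textbook Hilbert-curve mapping rather than 3geonames' exact implementation: quantize latitude/longitude to a grid and map each cell to a 1-D integer, so nearby points usually receive nearby keys.

```python
def xy_to_hilbert(x: int, y: int, order: int) -> int:
    """Map grid cell (x, y) on a 2^order x 2^order grid to its Hilbert index."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the curve stays continuous.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

def hilbert_key(lat: float, lon: float, order: int = 16) -> int:
    """Quantize a lat/lon pair to the grid and return its 1-D Hilbert key."""
    n = (1 << order) - 1
    x = round((lon + 180.0) / 360.0 * n)
    y = round((lat + 90.0) / 180.0 * n)
    return xy_to_hilbert(x, y, order)

# Reverse geocoding then reduces to a binary search over a sorted list of
# (hilbert_key, place) pairs, with a check of a few neighbouring keys to
# handle the discontinuities any space-filling curve has at cell borders.
```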

Use cases & applications
UBT D / N112 - Second Floor
13:30
30min
River Runner: navigating and indexing hydrologic data with open standards and data
Benjamin Webb

The Hydro Network-Linked Data Index (NLDI) is a system that can index data to a hydrographic network and offers a RESTful web service to discover indexed information upstream and downstream of arbitrary points along the stream network. This allows users to search for and retrieve geospatial representations of stream flowlines, catchments, and relevant water monitoring locations contributed by the water data community - without downloading the national dataset or establishing links themselves.

This is done by data providers publishing open information about the locations of their data within the context of the U.S. stream network. Data linked to the NLDI includes various federal, state and local water infrastructure features and water quantity and quality monitoring locations. The NLDI is being developed as an open source project and welcomes contributions to both its code and indexed data, with the main implementation currently being maintained by the U.S. Geological Survey.

The community of practice surrounding the NLDI extends to R and python developers working on clients that allow scientists to quickly retrieve data relevant for specific hydrologic analyses. As the NLDI community grows, a similar concept could be applied at a global scale, facilitating the development of downstream tools and applications.

While the NLDI is limited to the US, global work would be possible by leveraging global stream network datasets such as MERIT-Hydro. A proof-of-concept global River Runner allowing discovery of the flowpath downstream of arbitrary points anywhere on Earth has already been implemented using MERIT-Hydro and OGC-API Processes in pygeoapi. This session includes demonstrations of the NLDI and the global River Runner.
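
For a flavour of the service (the root URL and endpoint paths reflect the public NLDI documentation at the time of writing and should be treated as illustrative), a point can be resolved to the stream network and navigated upstream with plain HTTP calls:

```python
import requests

NLDI = "https://labs.waterdata.usgs.gov/api/nldi"  # public NLDI root

# Resolve an arbitrary point to the nearest NHDPlus flowline (its COMID).
hit = requests.get(f"{NLDI}/linked-data/comid/position",
                   params={"coords": "POINT(-89.35 43.09)"}).json()
comid = hit["features"][0]["properties"]["comid"]

# Navigate 50 km up the main stem and fetch the flowlines as GeoJSON.
flowlines = requests.get(
    f"{NLDI}/linked-data/comid/{comid}/navigation/UM/flowlines",
    params={"distance": 50}).json()
print(len(flowlines["features"]), "upstream flowline segments")
```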

Use cases & applications
UBT D / N115 - Second Floor
14:00
14:00
30min
CesiumJS and OpenLayers for a metropolitan cooperation web platform based on the digital twin of Rennes Métropole.
Frederic Jacon, Ben Kuster

In a context of digital transition and the increasing availability of urban data, Rennes Métropole wishes to better equip its decisions and public policies on the basis of data and cooperation.

Ultimately, the goal is to:
- Promote cooperation and the contribution of the actors of the territory, in particular the citizens
- "Enlighten" public decisions and policies, in particular the democratic, ecological and energy transition projects carried out by Rennes Métropole.

Transparency, public service efficiency and cost control are also among the objectives.

The metropolitan cooperation platform currently being developed will consist of one or more tools based on the digital twin, intended to support public decisions and policies on the basis of data and cooperation.

The platform is developed partly on VC Map, an open-source JavaScript framework and API for building dynamic and interactive maps on the web. It can display 2D data, oblique imagery and massive 3D data including terrain data, vector data, mesh models, and point clouds, making it easy for users to explore and interact with the data in an integrated and high-performance map application. VC Map is built upon open, proven, and reliable GIS and web technologies such as OpenLayers and Cesium for the visualization of 2D and 3D geodata.

A particular effort was made on the design in order to offer users, mainly citizens of Rennes Metropole, a pleasant user experience that allows an exploration of the development projects of the metropole in 2D and 3D.

We will present the cooperation platform through three use cases of interest for Rennes Métropole:

Solar cadastre: simulation of the photovoltaic production potential of roofs and comparison with the energy consumption of residents, the costs and the capacity of the network.

Linear transport systems: mediation (including visualization) and consultation with citizens and communities for the implementation of a linear transport infrastructure.

Exposure to electromagnetic waves: visualization of exposure levels to electromagnetic waves (simulated and real measured values) as well as of objects (radioelectric relays and sensors) on the territory of the City of Rennes.

Use cases & applications
UBT C / N109 - Second Floor
14:00
30min
Connecting SMODERP with Living Landscape - QGIS Plugin
Martin Landa, Ondřej Pešek, Petr Kavka

The Model of Living Landscape (MLL) is a set of empirically based tools for land management and landscape planning. It recognizes the complexity of the interactions between humans and the natural environment, and it aims to create a sustainable and resilient landscape that supports the well-being of both people and nature. One of the core MLL components is a process-based model for rainfall-runoff and erosion computation called SMODERP. The model operates on the principle of cell-by-cell mass balance, calculated at each time step. SMODERP (https://github.com/storm-fsv-cvut/smoderp2d) is open-source software implemented in Python to ensure compatibility with most GIS software solutions. The current implementation supports Esri ArcGIS, GRASS GIS and QGIS. In this contribution, a new QGIS SMODERP plugin linking the hydrologic model outputs to MLL will be presented. The plugin performs the input data preparation in the background using the GRASS GIS data provider, the computation is done by the SMODERP Python package, and the results are visualised with predefined map symbology in the QGIS map canvas.

This contribution was supported by grant RAGO - Living landscape (SFZP 085320/2022) and Using remote sensing to assess negative impacts of rainstorms (TAČR - SS01020366).

Use cases & applications
Mirusha
14:00
30min
Implementing Digital Twin City in MapLibre with the integration of different information sources
Ariel Anthieni, Sebastian Lopez

A use case for the implementation of a platform that supports the publication and management of Digital Twins, based on MapLibre as a web viewer while consuming information from different geospatial sources, including mesh, raster and DEM data, as well as near-real-time sources such as OneBusAway or OpenTripPlanner based on GTFS formats, for the comparison and analysis of information.

Use cases & applications
UBT C / N111 - Second Floor
14:00
30min
Introducing GeoAI Technology to Undergraduates in Public Two-Year Community Colleges
Phillip Davis

GIS instructors at an American technical college have created a five-course certificate in GeoAI. Two years later, the first cohort of undergraduate students has completed the degree requirements. This presentation will discuss the formation of the degree, the courses, and the resulting graduates, as well as the learning outcomes for the degree and the individual AI and machine learning GIS courses.

Education
UBT D / N113 - Second Floor
14:00
30min
Lessons from Successful Enterprise GIS Implementations with QGIS and PostGIS
Santtu Pyykkönen

In this talk, I'll share some practical tips and tricks for managing an enterprise GIS workflow with QGIS and PostGIS. I'll showcase some real-world examples to highlight the benefits of using a centralized spatial database to manage GIS data, and I'll walk through the steps to set up a QGIS project for creating, updating, and deleting data directly from QGIS.

My goal is to help organizations that are planning to set up a PostGIS-powered QGIS workflow and are looking for innovative ways to maximise the benefits of the joint powerhouse of QGIS and PostGIS.

As we dive deeper, I'll explore some of the key technical aspects of using QGIS and PostGIS for enterprise GIS. I'll share some tips for configuring and integrating the tools, and showcase how to set up an easily accessible end-user workflow for creating and editing data in QGIS using QGIS forms.

Throughout the talk, I'll also share some stories from different projects to illustrate how these tips and tricks have been successfully applied in practice. I will do my best to ensure that you'll leave the talk with an understanding of the benefits of using QGIS and PostGIS as part of your enterprise GIS workflows.

Whether you're a GIS professional, a team leader or project manager, or anyone seeking to optimize your GIS data management, this talk will provide valuable insights and practical advice. Join me as we explore the power of open source tools for enterprise GIS!

Use cases & applications
UBT C / N110 - Second Floor
14:00
30min
Modernising Tasking Manager Infrastructure
Yogesh Girikumar

A talk on modernising the infrastructure of the HOTOSM Tasking Manager.

UBT D / N112 - Second Floor
14:00
30min
Open Source for Geospatial Software Resources Platform for Geospatial Data Exploitation – OSS4gEO: community led Open Innovation at ESA and beyond
Stefanie Lumnitz, Codrina Ilie

Our talk presents an initiative that works to develop an open, interactive, user intuitive platform for a constantly updated, comprehensive and detailed overview of the dynamic environment of the open source digital infrastructure for geospatial data storage, processing and visualisation systems. OSS4gEO is designed as a repository that functions as an extended metadata catalogue, curated by the community and a tool for metrics computation, visualisation, ecosystem statistical analysis and reporting.

The initial development of the Open Source for Geospatial Software Resources platform builds on previous extensive work started in 2016 that has materialised into a pioneering overview of open source solutions for geospatial, voluntarily updated by the team. Starting in 2023, OSS4gEO has become a part of a wider ESA EO Open Innovation initiative to actively support and contribute to the EO and geospatial open source community and it is intended as a seed action to better understand, represent and harvest the geospatial open source ecosystem.

There are 3 main objectives that OSS4gEO aims to achieve:
(1) to offer an informed and as complete as possible overview of the open source for geospatial and EO ecosystem, together with various filtering and visualisation capabilities within the platform, as well as technical solutions to programmatically access and extract data from the database (APIs) for use for any purpose, including commercial;
(2) to provide guidance through the complexity of the geospatial ecosystem so that one can choose the best solutions, while understanding their sustainability, technical and legal interoperability and all their dependency levels;
(3) to serve as a community-building platform for promoting and maintaining new and innovative open source solutions for EO and geospatial, developed within various projects, research centres, small or large companies, universities or through individual initiatives.

Our talk will outline the OSS4gEO initiative as a community-led, bottom-up initiative, highlight current and future developments and co-development activities and introduce the wider ESA EO Open Innovation context.

Use cases & applications
UBT F / N212 - Floor 3
14:00
30min
Scaling GeoServer in the cloud: clustering state of the art
Andrea Aime

GeoServer deployments in the cloud and on Kubernetes are becoming the norm, while the amount of data published keeps growing, both in the number of layers and in the size of the data. As a result, the need to scale up is becoming more and more common.

This presentation covers GeoServer clustering approaches, comparing the available options and their suitability to different environments. We will cover:
* Managing the GeoServer configuration, stable configuration with planned upgrades versus dynamic runtime changes.
* Deployment options (monolithic, separate tiling, microservice oriented)
* Dynamic configuration clustering with JMS, external database storage, and distributed memory.

Attend this presentation to get an update on GeoServer cloud and clustering options, and pick the option that is the best match for your specific use case.

State of software
Lumbardhi
14:00
30min
State of PDAL
Michael Smith

PDAL is the Point Data Abstraction Library, a C/C++ open source library and set of applications for translating and processing point cloud data. It is not limited to LiDAR data, although the focus and impetus for many of the tools in the library have their origins in LiDAR. PDAL allows you to compose operations on point clouds into pipelines of stages. These pipelines can be written in a declarative JSON syntax or constructed using the available API. This talk will focus on the current state of the PDAL point cloud processing library and related projects such as COPC and Entwine. It will cover the most common filters, readers and writers, along with a general introduction to the library, its processing models, language bindings and command-line based batch processing. The first part will cover new features for current users, with some discussion of installation methods including Docker, binaries from package repositories, and Conda packaging. For more info see https://pdal.io
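
A small example of the declarative pipeline style via PDAL's Python bindings (the file names are placeholders; stage names and options follow the documentation at pdal.io):

```python
import json
import pdal

# Read a LAZ file, keep only ground returns (ASPRS class 2), write COPC output.
pipeline = pdal.Pipeline(json.dumps({
    "pipeline": [
        "input.laz",
        {"type": "filters.range", "limits": "Classification[2:2]"},
        {"type": "writers.copc", "filename": "ground.copc.laz"},
    ]
}))
count = pipeline.execute()
print(f"processed {count} points")
```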

State of software
Drini
14:00
30min
The Role of 3D City Model Data as an Open Digital Commons: A Case Study of Openness in Japan's Digital Twin "Project PLATEAU"
Toshikazu Seto

This paper aims to clarify the state of development of highly accurate and open 3D city model data and its usage methods, which started in Japan in 2020, from two aspects: quantitative geospatial analysis using publicly available data, and qualitative evaluation analysis of 40 use cases using the data.

As a background to this study, digital twins, which are virtual replicas of the physical urban built environment (Shahat et al., 2021), are gaining global attention with the development of geospatial information technology to understand current conditions and plan future scenarios in cities (Lei et al., 2022). This trend can be applied to a wide range of urban issues, such as urban development, disaster prevention, and environmental and energy simulation, and has the potential to be used for urban planning through an intuitive approach via various GIS tools. On the other hand, the geospatial information required by a digital twin needs to include three-dimensional shape information and rich attribute information at the building level. Data development and related research using CityGML (Kolbe et al., 2021), a representative standard specification, have mostly been carried out in European and US cities, and there have been few efforts in Asia (https://github.com/OloOcki/awesome-citygml).

In Japan, urban planning has mainly been carried out using analogue methods such as paper maps and over-the-counter services. However, as citizens' lifestyles and socio-economic systems change drastically with the high interest in smart cities and the spread of COVID-19, the digital transformation of urban policies such as disaster prevention and urban development has become an urgent issue. "Project PLATEAU (https://www.mlit.go.jp/plateau/)" is a project initiated in 2020 under the leadership of the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) to develop high-precision 1:2,500-level 3D city models in CityGML format in a unified manner, to publish them in open data formats via a CKAN data portal (CityGML, 3D Tiles, GeoJSON, MVT, ESRI Shapefile), to develop an open-source data viewer, and to explore use cases.

This study details the history of the "Project PLATEAU" initiative and discusses the relationship between openness and urban data commons. Many of the data specifications, converters and online viewers are closely related to FOSS4G. Next, data for 126 cities in Japan (about 19,000 square kilometers) developed as open data over a three-year period are regionally aggregated and then quantitatively compared with OSM building data in Japan. Trends such as coverage rates between cities and micro-regional analysis within Tokyo are then attempted. To analyze a large amount of data, this part was carried out using data converted to FlatGeobuf format.

Some of the results of the data preparation analysis are as follows: The basic analysis of the cities covered by PLATEAU showed that the total number of buildings in LOD1 was about 15.7 million, with a population coverage of about 38.4%. These cities have shown an increasing trend in population over the last five years (an average of about +10,000 for the 126 cities). By comparison, the total number of OSM buildings in the country is about 12.7 million, generally widely distributed across the country's 1903 administrative districts (about 38,000 square kilometers). Therefore, only the cities maintained by PLATEAU provide data with a higher level of detail than OSM. However, the detailed LOD2 building data with roof shape is limited to about 480,000 buildings (about 300 square kilometers in 97 cities nationwide), which are high-rise buildings and landmarks in large cities.

To identify more micro trends, we compared the accuracy of the building data for central Tokyo, which has the largest number of units in both datasets, in 2020, the year the PLATEAU data was created. The number of units in each building dataset is OSM (726,685 units) and PLATEAU (1,768,868 units). When PLATEAU is used as the base data, the coverage of OSM is about 40%. On the other hand, of the 3190 city blocks in central Tokyo, 502 (about 15.7%) were identified as having more OSM buildings than PLATEAU. As a factor contributing to this discrepancy, a historical analysis of the timestamps and versions of the building data (about 80,000 units) that exist only in OSM revealed that most of them were created more than two years before the PLATEAU data and have never been updated. Therefore, the PLATEAU data should be updated to keep the data fresh, even in areas where OSM data are already widely distributed, if only data older than 2020 are maintained.

In summary, open 3D city model data for cities of various sizes have been released in Japan, and they are highly accurate and complementary to OSM data. In addition, these data have begun to be used in administrative practice, and a total of 44 applications in new areas such as citizen participation and entertainment (especially services using XR) have been identified. The evaluation of the exploitation methods is explained in the paper, but the cases related to smart cities and disaster prevention are particularly striking. The issues to be addressed in these efforts are the nationwide extension of the scope of maintenance, the organic merging with open data such as OSM, and furthering GIS education in the field of urban planning. Finally, as data contributing to the reproducibility of this study, the data sources used in the analysis are themselves open data and thus readily available. We therefore plan to provide a download list of each data source and GIS data summarizing the tabulation results as open data on GitHub.

Academic Track
UBT E / N209 - Floor 3
14:00
30min
Traffic Analysis with QGIS and GTFS: GTFS-GO
IGUCHI Kanahiro

GTFS stands for the General Transit Feed Specification, which was developed by Google and is used for describing public transportation schedules. Many GTFS datasets are distributed around the world, and GTFS includes geospatial information: stops and routes. To utilize such interesting data, we have developed GTFS-GO, a QGIS plugin to process GTFS. With GTFS-GO you can translate GTFS into GIS data and visualize it. The plugin can be used for analyzing public transportation by aggregating traffic frequencies on each stop or route. In this talk, you will see how GTFS can be visualized and analyzed using GTFS-GO in QGIS.
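
As a plain-Python taste of the kind of aggregation the plugin performs (a rough sketch; GTFS-GO's actual frequency logic may differ), scheduled departures per stop can be counted directly from the GTFS CSV tables:

```python
import pandas as pd

# GTFS is a zip of CSV tables; assume it has been extracted to ./gtfs
stops = pd.read_csv("gtfs/stops.txt")
stop_times = pd.read_csv("gtfs/stop_times.txt")

# Count scheduled departures per stop as a simple frequency measure.
freq = (stop_times.groupby("stop_id").size().rename("departures").reset_index()
        .merge(stops[["stop_id", "stop_name", "stop_lat", "stop_lon"]],
               on="stop_id"))
print(freq.sort_values("departures", ascending=False).head())
```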

State of software
Outdoor Stage
14:00
30min
Transit Access to Essential Services in the face of Climate Change
Kaushik Mohan, Erin Stein

Climate change’s impact on public transportation tends to focus on improving transit infrastructure to reduce stoppages. While this is important, it does not take into account the effect on communities, often already underserved, that rely on the transit system. As part of The Opportunity Project’s Building Climate Change Resilience Through Public Transit sprint, our team at Data Clinic set out to develop an open source, user-friendly, and scalable tool to communicate intersectional risks faced by transit infrastructure and community access at the local level. This solution was inspired by both the event and user research with key stakeholders in transit agencies, academia, and community organizations.

In this presentation, we will demonstrate TREC: Transit Resiliency for Essential Commuting, and expose key decisions that resulted in a geospatial solution designed for wide audiences and for geographic and data scalability. TREC’s transit stop-level insights can become crucial tools for transit planners and community organizations to prioritize and advocate for infrastructure improvements that take community effects into account.

Focused initially on two locations, one small (Hampton Roads, Virginia) and one large (New York City) transit system, each station is treated as a destination providing access to essential services during localized climate change events. In this MVP, we employ flooding as our climate scenario, the event most cited as recurring and disruptive by our stakeholders.

Using OpenStreetMap to calculate walksheds around each station obtained from GTFS data, we categorize the importance of each transit stop in accessing essential services such as hospitals and jobs. Layered onto this, we bin current flood risk for each station using the prevalence of buildings with moderate to extreme high risk of flooding according to open data, and provide polygons representing projected flood risk in 2050.

While we built the TREC UI to maximize the accessibility of this contextualized data to multiple stakeholders, we also seek to optimize the usability of the repo so that tech-mature transit planners can adopt the tool internally and incorporate their proprietary fine-grained data. Further, we are committed to expanding the functionality of TREC according to user feedback.

The threat of climate change disrupting daily life on a recurring basis, beyond large-scale disasters, continues to grow. With the help of this tool, we hope to democratize relevant data, inspire the open publication of localized geospatial data related to climate change, and enable human-centered decision-making through a multidimensional lens.
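
A walkshed of the kind described can be sketched with OSMnx (the stop location is hypothetical and TREC's production pipeline may differ):

```python
import networkx as nx
import osmnx as ox

# Walking network around a (hypothetical) transit stop.
stop_lat, stop_lon = 40.7306, -73.9866
G = ox.graph_from_point((stop_lat, stop_lon), dist=1500, network_type="walk")

# Nodes reachable within ~800 m along the network form the walkshed.
origin = ox.distance.nearest_nodes(G, stop_lon, stop_lat)
walkshed = nx.ego_graph(G, origin, radius=800, distance="length")
print(len(walkshed), "network nodes in the walkshed")
```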

Use cases & applications
UBT D / N115 - Second Floor
14:30
14:30
30min
An interoperable Digital Twin to simulate spatio-temporal photovoltaic power output and grid congestion at neighbourhood and city levels in Luxembourg
Ulrich Leopold

Background

Cities are home to 72% of the population in Europe and account for 70% of the energy consumption. Being particularly vulnerable to climate change impacts, urban areas play a key role in carbon mitigation and energy transition. It is, therefore, of particular importance to increase renewable energy production for urban areas and cities.

Cities urgently require information about their potential for renewable energy production to target ultra-sustainable policies. Luxembourg has set very ambitious goals with its Plan National Intégré Énergie Climat (PNEC). It describes policies and measures to achieve ambitious national targets for reducing greenhouse gas emissions (-55%) as well as pushing renewable energy production (+25%) and energy efficiency (+40-44%) by 2030.

Public authorities often lack the expertise in integrated assessment and relevant simulation tools to support scientific evidence-based decisions about energy strategies, enhance interaction with stakeholders and accelerate energy transition. The main outputs of this study are related to the demonstration of the role of interoperable geographical digital twins based on Free and Open-Source software and geospatial software technologies in the simulation, monitoring and management of the current and near future renewable-based energy systems.

Approach and Concept

The scope of the presented work is to demonstrate the role of a 3D geographical urban digital twin in the context of a high penetration and optimised installation of PV and the impact of its power generation on the grid. Free and open-source software technologies form the basis of a web platform that implements a geographical digital twin based on open data and open OGC standards, making the digital twin fully interoperable. This allows the integration of open 3D CityGML data with simulation algorithms for renewable potentials and the energy grid system, all in one interoperable technical architecture.

The objective of this study is to simulate the potential for building integrated and building attached solar photovoltaic (PV) electricity generation in use case cities, and later to scale up the results to a nationwide level. The approach taken involves several key steps:

  1. Estimation of electricity consumption of the building stock at local level, in order to understand the demand for electricity and the potential for PV generation.

  2. Simulation of the electricity generation potential of building-integrated and building-attached PV systems, considering factors such as rooftop and facade orientation and shading effects.

  3. Development and analysis of scenarios for different PV technologies, including consideration of techno-economic parameters such as feed-in-tariffs, lifetime of installation, efficiency, and power consumption.

  4. Selection of optimal locations for PV placement across the city, based on a combination of rooftop and facade suitability, electricity demand, and electricity grid nodes.

  5. Implementation of all steps in an interoperable web-based decision support platform providing advanced simulation and assessment tools using high-resolution open building information.

Results

Geospatial software technologies and 3D and 4D algorithms form the core of the platform (based on iGuess®), enabling the planning of PV electricity generation from the local to the national scale. Global solar irradiation is simulated for each rooftop and façade at very high resolution, taking into account 3D shading effects of the surroundings in the urban environment. Scenarios for different PV technologies, feed-in tariffs, cost efficiencies and amounts of PV installations are computed to show the impacts of spatio-temporally differing PV generation. This simulates the large increase in PV installations required to accelerate the development of sustainable energy and climate action plans (SECAPs) for all municipalities in Luxembourg and the entire nation.

The developed platform serves multiple beneficiaries, e.g., municipalities and urban planners, to support realistic 3D-based urban energy planning. Citizens and energy communities can help shape their city and get access to high-resolution information. The platform provides a tool for estimating long-, mid- and short-term PV power generation at high resolution across entire neighbourhoods and districts, generating time-series data.

Furthermore, we have implemented tools for the identification of cost-efficient PV placement/integration in buildings on roof-tops and facades to test the different scenarios and allow for interactive selection for optimal PV placement identification across the study area.

Conclusions

This paper presents the importance of geographical digital twins providing the core platform for the current energy transition from fossil fuels to renewables. The advantage of an interoperable geographical urban digital twin, as proposed here, provides the flexibility necessary to simulate and test scenarios for rapid, integrated urban planning under climate change. Based on open-source, open standards and open APIs, open data, simulation and assessment methods and tools can be seamlessly integrated to provide a 3D real world environment to assess and develop energy transition approaches. Different stakeholders, such as citizens, municipalities and businesses can act and be stimulated to enable a faster transition to renewable energy and harvest the full potential of improved urban planning based on geographical Digital Twin technologies.

Academic Track
UBT E / N209 - Floor 3
14:30
30min
Comparison of GeoServer configuration deployment options
Alexandre Gacon

The goal of this presentation is to give an overview of the different options available for deploying a GeoServer configuration to different environments. In addition to the common data_dir folder deployment option, we will explore the possibilities offered by existing extensions and by the REST API, including different client libraries around it. We will also discuss the advantages that can be brought by Terraform for this use case.

State of software
Lumbardhi
14:30
30min
Editable topologies in pgRouting
Christian Beiwinkel

pgRouting, a PostGIS extension containing algorithms and tools for working with graph data, has become a highly flexible member of the FOSS routing engine family. In this talk, I want to demonstrate just how flexible it can be by showing how routable networks (called 'topologies' in pgRouting) can be made editable.

I will take the audience from theoretical conception of editable topologies (how can edits, insertions and deletions be handled in PostGIS?) through its implementation. Finally, I will end with a demonstration of a fully editable topology in a web mapping application based on a real world example using OpenStreetMap data.
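
One possible shape of such an edit cycle, as a minimal sketch (the 'edges' table and vertex ids are hypothetical; pgr_createTopology's rows_where argument restricts the rebuild to the touched rows):

```python
import psycopg2

# Hypothetical schema: edges(id SERIAL, geom geometry(LineString,4326),
#                            source INT, target INT, cost FLOAT8)
with psycopg2.connect("dbname=routing") as conn, conn.cursor() as cur:
    # Insert a new edge; its endpoints are not yet wired into the topology.
    cur.execute("""
        INSERT INTO edges (geom, cost)
        VALUES (ST_GeomFromText('LINESTRING(20.74 42.21, 20.75 42.22)', 4326),
                1.0);
    """)
    # Rebuild vertices only where they are missing, not for the whole table.
    cur.execute("""
        SELECT pgr_createTopology('edges', 0.00001, 'geom', 'id',
                                  rows_where := 'source IS NULL OR target IS NULL');
    """)
    # Route over the updated network (vertex ids 1 and 42 are placeholders).
    cur.execute("""
        SELECT seq, node, edge, agg_cost
        FROM pgr_dijkstra('SELECT id, source, target, cost FROM edges', 1, 42);
    """)
    print(cur.fetchall())
```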

Use cases & applications
UBT D / N115 - Second Floor
14:30
30min
Geoconnex.us: a standards based framework to discover water data
Tom Kralidis, Benjamin Webb

The Web hosts an increasing number of applications developed to freely provide their information, and it has become a hub for open data publishing. For this to happen as a self-sustained ecosystem, data must be findable, accessible, interoperable, and reusable by both humans and machines across the wider web. This session delves into Web best practices for publishing data using open source and standards-based solutions.

The geoconnex.us project is about providing technical infrastructure and guidance to create an open, community-contribution model for a knowledge graph linking hydrologic features in the United States as an implementation of Internet of Water principles. This knowledge graph can be leveraged to create a wide array of information products to answer innumerable water-related questions.

Implementation has two parts: persistently identified real world objects and organizational monitoring locations that collect data about them. Both must be published to the Web using persistent URIs and communicated with common linked data semantics in order for a knowledge graph to be constructed.

The Internet of Water Coalition supports the first part with a Permanent Identifier Service and reference hydrologic features (e.g. watersheds, monitoring locations, dams, bridges, etc.) within the US.

In support of the second part, geoconnex.us takes advantage of pygeoapi, using the OGC API - Features standard to publish structured metadata resources about individual hydrologic objects and the data about them. pygeoapi supports extending this standard by incorporating domain-specific structured data into the HTML format at the feature level and allowing for external HTTP URI identification. In addition, pygeoapi’s flexible plugin architecture enables custom integrations and processes. This means that individual features from various sources can have structured, standardized metadata harvested by search engines and assembled into a useful knowledge graph.
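
As an illustration of the OGC API - Features pattern (the server URL and collection id are examples, not a guaranteed stable interface), features can be fetched as GeoJSON with any HTTP client:

```python
import requests

ROOT = "https://reference.geoconnex.us"  # example OGC API - Features endpoint

# Standard OGC API - Features path: /collections/{collectionId}/items
items = requests.get(f"{ROOT}/collections/mainstems/items",
                     params={"limit": 3},
                     headers={"Accept": "application/geo+json"}).json()
for feature in items["features"]:
    print(feature["id"], feature["properties"].get("name"))
```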

This spatial feature-based linked data architecture enables data interoperability between independent organizations who hold information about the same real-world thing without centralizing data infrastructure - answering important questions like, “Who is collecting water data about my local stream and its tributaries?” or “What data do we have about water upstream and downstream of East Palestine, Ohio?”

Use cases & applications
UBT C / N109 - Second Floor
14:30
30min
Improving QGIS plugin developer experience
Antero Komi

At the National Land Survey of Finland (NLS) we are developing multiple QGIS plugins, and we needed a way to share common code and break components into smaller independent plugins while still providing a good developer experience.

One of the main issues when sharing library code between different QGIS plugins is the uncertainty of the runtime environment. Since Python's import machinery is not easily configurable to support multiple versions of dependencies (like nested node_modules in the Node.js world), the runtime is limited by default to a single version of a library, and later access to the same module is cached. This limits the version available to all plugins in a single QGIS session to whichever code is loaded first, which makes sharing code difficult, especially when breaking API changes are necessary in the dependency library code.

At NLS we developed tooling to work around these limitations, which improves the developer experience and allows sharing common QGIS plugin code easily via standard Python libraries. The tool provides a streamlined developer workflow and necessities like typing and IDE helpers, as well as a way to package a plugin that depends on other standard Python libraries.

The development environment for a QGIS plugin can be initialized simply by creating a virtual environment, installing the dependencies and launching QGIS with the plugin and its dependencies fully set up. This works via bootstrap code passed on the command line, which gives QGIS access to the virtual environment and sets up the plugin from the environment with access to any library dependencies. The tool also provides a debugger session and could, for example, provide hot-reload signals for the plugin when code changes. This gives developers a quicker and easier feedback cycle and simplifies QGIS plugin development workflows.

Runtime dependencies are reorganized at build time to be imported from a sub-package of the plugin, so only the exact packaged version of a dependency is used at runtime. This works by rewriting external library dependency import statements in the source code. The tool also generates the metadata.txt file in a way that is compatible with standard Python packaging tools, for example setuptools. This makes it easy to share the same code both as a Python library and as a QGIS plugin.
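
The build-time rewrite might look roughly like this (a simplified sketch of the idea, not the NLS tooling itself; the vendor package name is invented):

```python
import re
from pathlib import Path

VENDOR = "my_plugin._vendor"  # hypothetical sub-package holding packaged deps

def rewrite_imports(py_file: Path, libs: set[str]) -> None:
    """Point 'import somelib' / 'from somelib ...' at the vendored copy."""
    src = py_file.read_text(encoding="utf-8")
    for lib in libs:
        src = re.sub(rf"^(\s*)import {lib}\b",
                     rf"\1import {VENDOR}.{lib} as {lib}", src, flags=re.M)
        src = re.sub(rf"^(\s*)from {lib}(\.|\s)",
                     rf"\1from {VENDOR}.{lib}\2", src, flags=re.M)
    py_file.write_text(src, encoding="utf-8")

# Rewrite every module in the build output of the (hypothetical) plugin.
for path in Path("build/my_plugin").rglob("*.py"):
    rewrite_imports(path, {"somelib"})
```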

Use cases & applications
UBT D / N112 - Second Floor
14:30
30min
Lidar data processing, management and visualisation in a browser using Pointview
Bogdan Negrea

Pointview is a product developed by IT34 for working with Lidar and Photogrammetry data. It gives the user the possibility to upload, process, visualize and work with data.

Lidar data formats such as LAS, LAZ, E57 can be uploaded, processed and visualized in the browser.

Photogrammetry: Images from drones or video from phones can be uploaded, processed into a 3d point cloud and visualized in a browser.

In addition, data can be captured using our SmartSurvey app, which records video that is used for building a 3D point cloud, together with an orthophoto and DEM. The data is later available for visualization in Pointview or in QGIS through a WFS service.

Moreover, the system offers a complete management system where the user can create projects for organizing the data, can share the data with other users and manage the access.

The system uses various data processing workflows based on open source components such as:
PostgreSQL + PostGIS for storing the data and for geometry-based analysis
OpenLayers for visualizing the images and ground control point results as rasters
GeoServer for publishing data as WMS/WFS
QGIS for visualizing data
PDAL for lidar data processing
GDAL for raster data processing
CloudCompare for lidar data processing
Potree for data visualization

Use cases & applications
Drini
14:30
30min
MobiDataLab - Building Bridges on the way for FAIR mobility data sharing
Johannes Lauer, Thierry Chevallier

MobiDataLab is the EU-funded lab for prototyping new mobility data sharing solutions. Our aim is to foster data sharing in the transport sector, providing mobility organising authorities with recommendations on how to improve the value of their data, contributing to the development of open tools in the cloud, and organising hackathons aiming to find innovative solutions to concrete mobility problems.

Started in 2021, the project investigated mobility data and services and has grown an open knowledge base about mobility data as one of the four main pillars of the project. With the realization of tools and the combination of data and services in the Transport Cloud, the second pillar of the project, a representative set of technical "mobility data sharing enablers" has been built up.

In the second half of the project, these assets are being provided to the public. The Virtual and Living Labs will host environments for mobility data stakeholders to explore the state of the art for data, services and their interaction to solve mobility data challenges, all aligned with the FAIR principles: making data and services findable, accessible, interoperable and reusable.

The challenges are mainly based on a broad set of use cases defined by the core project group, the reference group and external stakeholders. These challenges are the core of the Living and Virtual Labs, where participants build solutions for the given challenges and explore new opportunities with the shared mobility data and services.

With the feedback of the labs, our partners, the reference group and external stakeholders (mobility data providers from the public and private sector, municipalities, governmental institutions, start-up communities, and stakeholders from research and industry), the project will make challenges transparent and remove barriers to data sharing.

Since the project started in February 2021, we will present our achievements and provide an outlook on the last mile of the project, where we are bringing the tools to the road.

Further information on the project is available via https://mobidatalab.eu and https://github.com/mobidatalab .

MobiDataLab is funded by the EU under the H2020 Research and Innovation Programme (grant agreement No 101006879).

Open source geospatial ‘Made in Europe’
UBT F / N212 - Floor 3
14:30
30min
More correct maps/data with PostGIS Topology rather than Simple Feature?
Lars Opsahl

In Norway we now get more up-to-date land resource maps (AR5), because the domain experts on agriculture in the Norwegian municipalities have access to an easy-to-use client. This system includes a simple web browser client and a database built on PostGIS Topology.

In this talk we will focus on what it is about PostGIS Topology that makes it easier to build user-friendly and secure tools for updating land resource maps like AR5. We will also say a couple of words about the advantages related to traceability and data security when using PostGIS Topology.

In another project, where we run a lot of ST_Intersection and ST_Difference operations on many big Simple Feature layers that cover all of Norway, we have been struggling with topology exceptions, wrong results and performance problems for years. In the last two years we also tested JTS OverlayNG, but we still had problems. This year we are switching to PostGIS Topology, and tests so far are very promising. We will also take a glance at this project in this talk.

A PostGIS Topology database model normalises the data related to borders and surfaces, as opposed to Simple Feature, where this is not the case. The Simple Feature database model may be compared to not using foreign keys between students and classes in a database model, but instead using a standard spreadsheet model where each student's name is duplicated in each class they attend.
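
To make the normalisation concrete, a minimal PostGIS Topology setup might look like the following sketch (schema, layer id and tolerance are illustrative):

```python
import psycopg2

# Assumes an existing 'ar5' table with a polygon 'geom' column.
SETUP = """
SELECT topology.CreateTopology('ar5_topo', 4326);
SELECT topology.AddTopoGeometryColumn('ar5_topo', 'public', 'ar5',
                                      'topo_geom', 'POLYGON');
-- Convert existing Simple Feature polygons; shared borders become single
-- edges referenced by both neighbouring surfaces. Layer id 1 assumes this
-- is the first TopoGeometry layer created in the topology.
UPDATE ar5 SET topo_geom = topology.toTopoGeom(geom, 'ar5_topo', 1, 0.01);
"""

with psycopg2.connect("dbname=ar5") as conn, conn.cursor() as cur:
    cur.execute(SETUP)
```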

URLs related to this talk:

https://gitlab.com/nibioopensource/pgtopo_update_gui
https://gitlab.com/nibioopensource/pgtopo_update_rest
https://gitlab.com/nibioopensource/pgtopo_update_sql
https://gitlab.com/nibioopensource/resolve-overlap-and-gap

Use cases & applications
UBT C / N110 - Second Floor
14:30
30min
Open Source Based 3D City Model Visualization - A LH Urban Digital Twin Case
Cheun-gill Park, Hansang Kim

The LH Urban Digital Twin Platform is a comprehensive solution for new town planning and development that utilizes open source digital twin technology. The platform combines real-world data with spatial information context to offer a three-dimensional sharing/collaboration integration support system. 

Developers will appreciate the platform's flexibility and scalability, which are based on a microservice architecture that connects multiple modules independently and loosely. The platform provides the OGC Web Service standards WMS, WFS, WCS, and WPS through GeoServer and GeoWebCache, a tile cache server built into GeoServer that accelerates map delivery. Additionally, the platform supports visualization of data in various formats using mago3D, F4DConverter, and Smart Tiling.

The platform offers a range of services, including automatic apartment building placement, construction site safety management, 3D urban landscape simulation, environmental planning simulation, and underground facility visualization simulation. The platform also features real-time monitoring and visualization of IoT-based data, which is of particular interest to developers interested in smart city development. 

Firstly, the presentation will show how an open-source-based digital twin visualizes complex 3D city models in a web browser. Secondly, it will showcase the platform's features and data, including the actual system's functions and service UI/UX, through a video. Attendees will gain insights into how the platform can be used to support rational decision-making during complex urban planning, design, development, and operation stages.

This presentation is of interest to developers working in the field of urban planning, design, and development, as well as those interested in open source digital twin technology. 

LH Corp is one of the largest public companies in Korea, providing land and housing for public purposes. It is owned and controlled by the Korean government and has played a large role in new town development and housing welfare.

Use cases & applications
UBT C / N111 - Second Floor
14:30
30min
QFieldCloud - seamless fieldwork for QGIS
Marco Bernasocchi

QFieldCloud enables the synchronisation and consolidation of field data collected by teams using QField. From small individual projects to large data collection campaigns, the platform allows you to manage the collaboration of multiple people on the same project, assign different roles and rights to different users, work online and offline, and keep track of changes made. In 2022, QFieldCloud was testable as a beta version; already during the beta phase, over 40,000 registered users synchronised their projects via the platform. At the beginning of 2023, the official version was released. A brief overview of how QFieldCloud works and how the platform is built is given.

State of software
Outdoor Stage
14:30
30min
Training the future with FOSS4G
Elisa Hanhirova

The use of free and open source software is catching on, and in Finland (at least) governmental institutions are making the big switch to open source software from other solutions. This opens up the need and possibility for training.

Training needs may range from no previous training or knowledge to advanced GIS professionals, so customising the training and exercises is important. Some might need to start with basic GIS and spatial information in general and continue to hands-on learning with multiple different exercises to help them learn the use of different tools and workflows in QGIS.
For more advanced users, training and help with specific programs, for example GeoServer and QField, or deepening the knowledge of different workflows such as visualisation or Python in QGIS, are more in order.

FOSS4G has also been catching on and spreading in schools and universities. These new professionals that have used FOSS4G from the very beginning of their studies can be more efficient and skillful using these different programs. They may also demand more from the software and think of new ways to modify and perfect their workflows and produce new innovations. They can be a new and very important resource for developing different areas of FOSS4G.

Training new and more experienced professionals in FOSS4G is a very important step for implementing new tools and workflows into different industries and businesses. Training also works both ways, through discussion and hands-on exercises some new and interesting needs may emerge and those could be possible to develop further into new tools or plugins.

The more institutions, businesses and other users are interested in switching to FOSS4G, the more new opportunities and needs for different tools and working methods arise. This in turn helps to develop the software further.

Education
UBT D / N113 - Second Floor
14:30
30min
Visualizing Geospatial Data with Apache Superset
Jan Suleiman

Apache Superset is one of the most widely used no-code platforms for business intelligence. It allows for the exploration and visualization of data, from simple line charts to highly detailed geospatial charts, without the need for programming skills. These charts can be published on interactive dashboards to provide users with meaningful and up-to-date information. Currently, a plug-in for visualizing cartodiagrams, based on the OSGeo projects OpenLayers and GeoStyler, is in development. This plug-in gives users the ability to use any Superset visualization within a geospatial context, so that e.g. simple pie charts or even complex location-based time series can be displayed on a map. Thereby, Superset becomes a powerful tool for visualizing geospatial data.

This talk gives a brief overview of Superset and possible use cases while focussing on geospatial data.

Use cases & applications
Mirusha
15:00
15:00
30min
AR: Why open map data is critical to the future of computing
Edoardo Neerhut

Use cases should drive product development, not the other way around. Maps and the products we use to consume them have the biggest impact on the world when these principles are adhered to. How many government portals have you visited where a carefully curated map is presented that hardly anyone sees let alone uses? Presenting the data to the user in an intuitive way that helps them make a decision or take action is essential.

Large paper maps of the 1700s were well suited to a captain’s desk as their ships traversed the oceans. Road atlases of the 20th century helped to spur family adventures and weekend getaways as highway networks were constructed around the world. The small computers in our pockets today allow us to see when the next train will arrive and which one will get us home sooner. These examples took the technology of the day and used it to make products with significant impact on society. The mobile internet in particular changed mapping in one of the most notable ways since humans started abstracting 3D space on 2D surfaces.

We’re on the cusp of another great shift in the way maps are used with many exciting use cases awaiting discovery. The technology powering this potential is Augmented Reality (AR). This talk will explore some of the use cases that AR is supporting and where it might be useful in future. We’ll look at how AR can be accessed and how the medium of access affects its utility. With these use cases in mind, we’ll assess how open tools and map data enable AR. Some of the data and tools we’ll look at include:
* Geometries of pedestrian ways
* Associated attributes: incline, safety, lighting, access, surface type, accessibility features
* Building entrances
* 3D building data
* VPS for localisation
* Routing algorithms

The talk will conclude with a summary of Meta’s approach to map building and how open source geospatial technology powers the maps we build for today and the years ahead.

Use cases & applications
UBT C / N111 - Second Floor
15:00
30min
Developing a real-time quality checker for the operators on QGIS
Olli Rantanen

The National Land Survey (NLS) of Finland decided in the fall of 2020 to develop a national topographic data production system based on open source technologies, and especially on a QGIS client. Since then, many significant steps have been taken toward implementing the MVP of the application for the operators of the NLS at the start of 2025.

The latest significant expansion of our product has been the development of a comprehensive and user-friendly way to handle data quality management for the operators. Our aim was to develop it in a way that allows changes to the quality rules to be easily made and maintained, and that is as informative as possible. The basic idea behind quality management is clear: our customers want high-quality data, and we want the operators to have clear and easy-to-understand checks in their workflow that do not limit their productivity. For this, we have developed a tool simply named the Quality management tool.

The reason we couldn't use the basic QGIS tools was that they were not easily modifiable or extensive enough for our use cases and the quality demands for our data. We have been able to use some of them, such as geometry checks, but for the most part the quality tools had to be manually selected and configured, which would take the operator's time.

The key concept of quality management is that the operator gets real-time feedback about quality, so errors can be fixed as part of the basic workflow and there is no need for separate quality control phases. Additionally, we do not prevent users from saving their work to their local database, regardless of the errors it may contain, so that the workflow is not interrupted.

At this moment we have published the graphical user interface for visualizing the quality check results (it can be found here: https://github.com/nlsfi/quality-result-gui), but in this talk I will show how it can work at a larger scale, presenting the tool with use case videos. I will also talk about the architecture of the tool and how we are going to develop it even further. I hope that some listeners can apply this tool to their workflows and benefit from this example.

Use cases & applications
UBT D / N112 - Second Floor
15:00
30min
Expanding Geospatial Data Access: Lessons from Radiant MLHub and the Shift to Source Cooperative
Michelle Roby

Radiant Earth is building a new data sharing utility called Source Cooperative that aims to make it trivially easy for data providers to publish data on the Internet. Source Cooperative is the next generation of Radiant MLHub which Radiant Earth built to share Earth observation training datasets. In this talk, we will share lessons learned about sharing data from working with NASA, Planet, Sinergise, AWS, Microsoft, and others. We will also share how we’re applying those lessons to create Source Cooperative.

AI4EO Challenges & Opportunities
Drini
15:00
30min
Felt Maps for Sharing and Collaboration
Michal Migurski

Introducing Felt, a new map sharing and collaboration product.

We connect closely with the current ecosystem of open source mapping tools and make it easier to work together with colleagues inside and outside mapping. In this talk, we will show:

  • How current users of programs like QGIS bring Felt into their workflows
  • Where Felt lets them expand into new areas like community feedback
  • How we’ve used and expanded core OSS libraries like MapLibre, GDAL, Pelias, and Tippecanoe
  • Why we’re pushing forward emerging formats and standards like PMTiles

Session attendees will gain an important new tool for their stack, a product made for extending the reach of existing open source mapping tools and improving collaborative map-making beyond analysis.

Use cases & applications
Outdoor Stage
15:00
30min
G3W-SUITE as a tool for the preparation of web cartographic management systems
Walter Lorenzetti

The Lazio Region Authority (Italy) has for several years been using a system based on G3W-SUITE and QGIS which has allowed it not only to publish public web services, but also to prepare web cartographic management systems dedicated to internal staff for managing the territorial matters under its competence:
* management of damages caused by wildlife and related reimbursement procedures
* environmental impact assessment practices
* wolf genetics
* signaling the presence of wild boar in urban areas
* nests and strandings of sea turtles
* road accidents with wildlife

The close integration between the suite and QGIS has made it possible to create web cartographic management systems characterized by:
* numerous geometry editing features
* customization of the structure of the editing and attribute consultation forms
* simplified attribute compilation, thanks to the ability to inherit from QGIS: editing widgets, mandatory and uniqueness constraints, default values, conditional forms and drill-down cascades based on expressions
* the possibility of defining geographical constraints on visualization and editing, in order to divide the territory into areas of competence associated with individual users
* the possibility of differentiating the accessible information content on the basis of different users and roles
* descriptive analysis of the data through integration with graphs created with the DataPlotly plugin

Thanks to the contribution and funding from the Lazio Region dedicated to the development of, and integration with, the QGIS functions related to data editing, G3W-SUITE has established itself as a valid tool for building advanced geographic data management systems on the web.

As examples, we report a series of use cases:
* Environmental Protection Agency of the Piemonte Region: post-event damage and usability census, management and cartographic representation of post-earthquake inspection requests
* Gran Paradiso National Park: park route signage management
* Piemonte Region: preparation of Civil Protection Plans
* Environmental Protection Agency of the Lombardy Region: Hydrological Information System

Use cases & applications
UBT C / N109 - Second Floor
15:00
30min
Leveraging the Power of Uber H3 Indexing Library in Postgres for Geospatial Data Processing
Jashanpreet Singh

The Uber H3 library is a powerful geospatial indexing system that offers a versatile and efficient way to index and query geospatial data. It provides a hierarchical indexing scheme that allows for fast and accurate calculations of geospatial distances, as well as easy partitioning of data into regions. In this proposal, we suggest using the Uber H3 indexing library in Postgres for geospatial data processing.

Postgres is an open-source relational database management system that provides robust support for geospatial data processing through the PostGIS extension. PostGIS enables the storage, indexing, and querying of geospatial data in Postgres, and it offers a range of geospatial functions to manipulate and analyze geospatial data.

However, the performance of PostGIS can be limited when dealing with large datasets or complex queries. This is where the Uber H3 library can be of great use. By integrating Uber H3 indexing with Postgres, we can improve the performance of PostGIS, especially for operations that involve partitioning of data and distance calculations.

We propose to demonstrate the use of Uber H3 indexing library in Postgres for geospatial data processing through a series of examples and benchmarks. The proposed presentation will showcase the benefits of using Uber H3 indexing for geospatial data processing in Postgres, such as improved query performance and better partitioning of data. We will also discuss the potential use cases and applications of this integration, such as location-based services, transportation, and urban planning.

The proposed presentation will be of interest to developers, data scientists, and geospatial analysts who work with geospatial data in Postgres. It will provide a practical guide to integrating Uber H3 indexing with Postgres, and offer insights into the performance gains and applications of this integration.
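
As a hedged illustration of the indexing concept (not the exact integration shown in the talk), the Python h3 bindings expose the operations that make this fast; inside Postgres the same cell IDs would typically come from an H3 extension and be stored in an indexed column. Function names below assume the h3-py v4 API:

    import h3

    lat, lng = 42.6629, 21.1655  # an illustrative point (Prishtina)
    res = 9                      # hierarchical resolution (~0.1 km2 hexagons)

    cell = h3.latlng_to_cell(lat, lng, res)  # point -> H3 cell ID
    parent = h3.cell_to_parent(cell, 6)      # coarser cell, e.g. for partitioning
    nearby = h3.grid_disk(cell, 3)           # all cells within 3 hex steps

    # A "points within ~X km" query then reduces to a set-membership test
    # on precomputed cell IDs instead of per-row geometry calculations.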

Use cases & applications
UBT C / N110 - Second Floor
15:00
30min
MOOC EOODS - Massive Open Online Course for earth observation and open data science: a course to educate the next generation of EO researchers in data cubes, cloud platforms and open science
Peter James Zellner

The Massive Open Online Course - Earth Observation Open Data Science (MOOC EOODS) teaches the concepts of data cubes, cloud platforms, and open science in the context of Earth Observation (EO).

It aims at Earth Science students, researchers, and data scientists who want to bring their technical capabilities up to the newest standards in EO cloud computing. The course is designed as a MOOC that explains the concepts of cloud native EO and open science by applying them to a typical EO workflow, from data discovery and data processing up to sharing the results in an open and FAIR way.

The EO College platform hosts the course, and the hands-on exercises are carried out directly on European EO cloud platforms, such as Euro Data Cube or openEO Platform, using open science tools like the Open Science Data Catalogue and STAC to embed the relevance of the learned concepts in real-world applications. The MOOC is an open learning experience relying on a mixture of animated lecture content and hands-on exercises created together with community-renowned experts.

After finishing the course, the participants will understand the concepts of cloud native EO, be capable of independently using cloud platforms to approach EO related research questions and be confident in how to share research by adhering to the concepts of open science.

The MOOC is valuable for the EO community and open science as there is currently no learning resource in which the concepts of cloud native computing and open science in EO are taught jointly, bridging the gap towards the recent cloud native advancements in EO. The course is open to everybody, thus serving as teaching material for a wide range of purposes, including universities and industry, and maximizing the outreach to potential participants.

Our talk will give an overview of the MOOC at the current status. Furthermore, we encourage review, feedback on its content and discussion.

Education
UBT D / N113 - Second Floor
15:00
30min
Migration strategies: Or how to get rid of a deprecated framework
Tobias Kohr

Deprecation of a framework in use is a common risk for software projects. Migrations are very time-consuming and costly, without showcasing any new functional features. This can make them an unpopular task that tends to be postponed until there is no other choice, be it for a customer or for the community of an open source project.

During the last decade, for instance, AngularJS was one of the most popular web frameworks around. This was no different in FOSS4G projects, where it was adopted in geoportals and other frontend components. With the end of the decade, active development of AngularJS came to an end, and since summer 2021 no security updates have been provided. This has become a major challenge for many web ecosystems - including FOSS4G ones - where AngularJS is still very present but will have to be replaced in the long run.

This talk will present various open source projects and how they differently approach this challenge. It will reflect on lessons learned so far and aspires to provide inspiration for other projects in a similar situation.

Geomapfish is a community-driven WebGIS framework for building geoportals. Its frontend is based on the ngeo JavaScript library, which was built on top of AngularJS and OpenLayers. Due to its wide functionality, the project’s goal is to avoid a one-shot migration. A continuous migration based on (Lit Element) web components was chosen, which allows migrated functionalities to be integrated step by step.

Geoportal.lu is the national geoportal of Luxembourg. It is based on the Geomapfish framework, but has a heavily customized frontend. The requirement here is similar: instead of migrating everything at once, the different parts should be integrated continuously. After initially following the Geomapfish migration strategy based on web components, the project is now being migrated to another JavaScript framework (Vue), without giving up on the continuous migration.

Geonetwork is a well-known FOSS4G catalog application. On top of its powerful backend sits a frontend that is also based on AngularJS. Once again, its functionality is so vast that a complete rewrite would be an enormous undertaking. Thus the idea of geonetwork-ui was born: a new project that lives alongside Geonetwork, not with the goal of becoming isofunctional, but of complementing it; a project providing libraries specialized in proposing user interfaces that leverage Geonetwork’s backend capabilities.

Use cases & applications
Mirusha
15:00
30min
Runtime environment for the validation of the Copernicus Ground Motion Service
Vasile Crăciunescu, Marian Neagul

The European Ground Motion Service (EGMS) is a European Union (EU) initiative under the Copernicus programme which aims to provide near-real-time information about ground deformation caused by natural or man-made hazards. The service uses a variety of data sources, including satellite radar imagery, to monitor and analyze ground motion in areas prone to landslides, sinkholes, earthquakes, and other hazards. Given the sensitive nature of the service, EGMS product validation is a key activity in assuring the user community (especially decision makers) of the quality of the ground motion and deformation information provided.

The main goals of the EGMS validation system are as follows: to provide a reproducible environment on top of modern cloud infrastructures (with a particular focus on the European geo clouds), to enable the development of scientific tools that validate EGMS characteristics, to facilitate the reproducibility of the validation tasks, and to account for key performance indicators (which will allow stakeholders to monitor the quality of the primary EGMS product).

To achieve the first goal of providing a reproducible environment, we have focused on providing Terraform modules that facilitate the deployment of our software stack on any supported cloud platform. The software stack is built on top of the Kubernetes container orchestration system, which runs on top of a managed cloud environment. Kubernetes provides uniform services regardless of the underlying cloud platform.

For the goals of developing the validation tools and executing them, we decided on a unified approach based on the JupyterHub solution. JupyterHub provides a unified development environment based on R and Python EO software tools (built on modified Pangeo Docker images). Jupyter is also used for executing the validation tools outside of JupyterHub, by leveraging an internal Python service that uses papermill to execute each notebook and then nbconvert to generate an HTML webpage containing the required visualizations and documentation in human-readable form.
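
As a rough sketch of this notebook-as-a-job pattern (file names and parameters here are hypothetical, not the project's actual ones):

    import papermill as pm
    from nbconvert import HTMLExporter

    # Execute a validation notebook with run-specific parameters
    pm.execute_notebook(
        "validation_tool.ipynb",
        "runs/validation_run.ipynb",
        parameters={"site": "example-site", "product": "EGMS-L2b"},
    )

    # Render the executed notebook as a human-readable HTML report
    body, _ = HTMLExporter().from_filename("runs/validation_run.ipynb")
    with open("runs/validation_run.html", "w") as report:
        report.write(body)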

The validation system is complemented by a bespoke web dashboard aimed at providing reports and information on the status of the various key performance indicators.

Overall, the whole validation system was developed solely using FOSS4G components: GeoPandas, RasterIO, GeoNode, GeoServer and JupyterHub.

Use cases & applications
UBT F / N212 - Floor 3
15:00
5min
UAVIMALS: the "open" remote sensing system for surface archaeological investigations.
Federica Vacatello

Today, airborne sensors are increasingly used in archaeology, especially to investigate the surface of vast territories quickly and accurately. Airborne laser scanning technologies carried by small remotely piloted aircraft are rapidly evolving towards ever more capable solutions for investigating archaeological traces hidden by vegetation or substantial soil deposits. The proposed contribution fits into this field of archaeological research by presenting "UAVIMALS" (Unmanned Aerial Vehicle Integrated with Micro Airborne Laser Scanner), a new aerial remote-sensing system for "shadow marks" (Masini – Lasaponara 2017; p. 32) designed for surface archaeological investigations, the result of an Early Career Grant funded by the National Geographic Society. The system, consisting of a custom drone based on an open architecture and software for vehicle control and data processing, integrates a solid-state laser sensor, commonly designed for obstacle avoidance but here exploited to produce an accurate DTM (Digital Terrain Model) of small surfaces with a significant reduction in acquisition time and cost. The ambition of the UAVIMALS project was not to create a low-cost airborne LIDAR performing worse than those already on the market, but rather to create an instrument that is easily transportable, less expensive and equally precise. We believe the solution represents a breakthrough in research on airborne laser scanner technologies.
The acquisition of three-dimensional images at very high morphometric resolution has proved to be a fundamental practice for the study of various contexts of our planet. In the archaeological field in particular, drone remote sensing is an extremely important practice for investigating ancient structures, sometimes still unexplored, that cannot be examined by other means, such as excavation and reconnaissance, due to difficult geomorphological conditions, places of hard access, and traces invisible to the human eye at short distances or only visible in particular climatic conditions (Štular - Eichert - Lozić 2021). Nevertheless, most of the instruments currently on the market still have costs that are prohibitive for archaeological research, as well as dimensions unsuited to transport in hard-to-reach places. The system presented here tries to overcome these critical issues by working on the hardware solution best suited to the needs of aerial archaeology investigations, using a type of lidar sensor never before used for drone remote sensing. The instrument, with its low cost and small size, was born as a system for autonomous driving of road vehicles (https://leddarsensor.com/solutions/m16-multi-segment-sensor-module/) and was customized on a self-built drone to obtain a prototype of the 'very light' class. Following experimentation in two different archaeological contexts, the work continued by addressing the second critical issue: the creation of software useful for controlling the vehicle during flight but also able to monitor the acquired data, finalizing a first graphical elaboration. Currently, in fact, lidar point clouds can only be processed through dedicated software (CloudCompare, 3DF Zephyr, QGIS, etc.) which, not being connected to the drone, does not allow real-time visualization of what the sensor sees and prevents preliminary monitoring of any archaeological presence hidden in the overflown area. The DEM, meshes and point clouds obtained from the sensor can then be loaded into geospatial software such as QGIS, allowing spatial, territorial, and geomorphological analysis of the acquired data using specific tools. If for other application contexts such a capability may be superfluous, in the archaeological field a system like the one proposed represents a concrete possibility of widening archaeological investigations, which would be sped up by such an observation tool and facilitated by a cost widely accessible to university research funds. The system would also speed up the archaeological operations preliminary to the realization of any public work, through an immediate verification of possible archaeological presence in the areas affected by the operations, thus avoiding costly design changes along the way. The proposed contribution, therefore, presents not only the hardware and software solution developed, but also the preliminary results obtained from its application in the archaeological context of Leopoli - Cencelle, a medieval city about 60 km north of Rome, where critical issues such as the extent of the site, the presence of large elevation changes and dense vegetation have always complicated excavation activities on the hill, still leaving much of the city unexplored.
In this context, in fact, drone remote sensing has proved to be an effective method for investigating ancient structures with different degrees of archaeological visibility, where the evidence is not yet completely above ground and is obliterated by high- and medium-stem vegetation. The examination, although the result of an experimental activity, not only made it possible to identify anomalies related to structures not yet intercepted by the excavation operations, but also encouraged the planning of future investigation campaigns, allowing a more conscious planning of the areas of interest.

Academic Track
UBT E / N209 - Floor 3
15:05
15:05
5min
Agro-tourism impact analysis of climate change using Google Earth Engine in the Rahovec wine region of Kosovo.
Dustin Sanchez

This project builds a statistical model using the Mann-Kendall test and Sen's slope on the completed MODIS LST mission data to analyze climate-change-driven thermal shifts across the Republic of Kosovo. The project leverages Google Earth Engine open data to build the statistical models, which are extracted and analyzed in QGIS. This approach uses non-parametric statistical timeseries analysis of the completed MODIS LST mission data to understand day and night land surface temperature shifts over different temporal periods, in order to assess the current, and project the future, impacts of climate change on various developing tourist economies in the Republic of Kosovo.
Water balance is used as a means of understanding the impacts of climate change on wine grape capacity, with linear geographic regressions as an attempt to functionally understand the future disruptions of climate change. These regressions will guide the understanding of the climate changes occurring within the country and provide a basis for analysis to develop resilience methods. The model will be broken down by viticultural region to understand the dynamism of the impacts across the country. The two datasets used are MODIS land surface temperature and the Tropical Rainfall Measuring Mission, which together give an understanding of the surface temperature shifts and the water balance shifts occurring in Kosovo due to climate change. The datasets will also be correlated with each other using Pearson's correlation coefficient to determine whether a relationship exists between land surface temperature and water balance within the wine region of Kosovo.
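
For readers unfamiliar with the two trend statistics named above, a compact NumPy sketch (toy LST values, not the project's actual code):

    import numpy as np

    def mann_kendall_s(x):
        """Mann-Kendall S statistic: sum of sign(x[j] - x[i]) over all pairs i < j."""
        n = len(x)
        return sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))

    def sens_slope(x):
        """Sen's slope: median of the pairwise slopes (x[j] - x[i]) / (j - i)."""
        n = len(x)
        return np.median([(x[j] - x[i]) / (j - i)
                          for i in range(n) for j in range(i + 1, n)])

    lst = np.array([301.2, 301.5, 302.1, 301.9, 302.8, 303.0])  # toy monthly LST (K)
    print(mann_kendall_s(lst), sens_slope(lst))  # positive values indicate warming
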
The findings of this project will reveal the geographic dispersion of anomalous rain patterns and long-term temperature shifts that can have disruptive impacts on the agricultural production of grapes. The results will provide insights, based on the known geographic extent of the wine grape region, into the significant temperature changes of the past 20 years and the trends for both day and night LST within the Republic of Kosovo. Further, the analysis will seek to develop an understanding of the immediate to long-term impacts based on the satellite data trends. The water balance analysis will reveal the precipitation shifts occurring on a monthly basis; assessed together with land surface temperature, it offers a means of identifying areas susceptible to flood-based natural hazards, amplified by increased temperatures and loss of water balance. The connection between the two can be assessed to understand systemic vulnerabilities within regions whose success depends on environmental quality.
Additionally, the project is a novel framework for timeseries analysis of big data to provide insights into climate change impacts on the economies of the developing world. The analysis will focus on the geographic dispersion of tourist-economy assets being built, and will improve the use of big data approaches for deriving an understanding of temperature changes in data-poor environments. The results of this paper will deliver open datasets, an analysis of the impacts of temperature changes on the developing tourist economy in the Republic of Kosovo, and knowledge of the capacity for leveraging large geographic datasets for open climate change research.
The use of big data and open modeling provides a considerable resource for governments, municipalities, and NGOs to develop an understanding of how climate change will impact their communities. The paper discusses the statistical concepts applied to the complete MODIS dataset and interprets the results. The major concepts are the use of Google Earth Engine for modeling remotely sensed data to understand the environmental conditions caused by climate change. The underlying data analysis draws connections between local conditions and how human and environmental conditions affecting wine tourism development are impacted. This paper does not assess the loss of economic value, but rather interprets the data to understand the state of the underlying environmental commodity conditions, such as snowpack and grape vine stock. We discuss the analysis within a novel framework for open-source climate intelligence building for regions without the resources for pay-to-use products and data. The paper will build an understanding of methodological approaches using multiple models to develop and deliver products capable of informing national and regional climate adaptation strategies in both the long and short term.
The Republic of Kosovo is working to develop several tourism sectors that are heavily reliant on climate, including the wine regions of Rahovec and Prizren, both of which face tremendous uncertainty in the face of climate change. We develop tools and techniques that display the capabilities of open big data analysis and provide vital insight into the impacts of climate change. We seek to explore the use of open-source learning tools to build open-data models capable of providing vital insights into the impacts of climate change in countries that have the fewest resources and the most risk.

Academic Track
UBT E / N209 - Floor 3
15:10
15:10
5min
A free and open-access GIS for the documentation and monitoring of urban transformations in the area of the Expo 2015 exhibition in Milan
Federica Gaspari

This work is based on the design and development of a system aimed at monitoring the urban transformations of the area used for the Expo 2015 exhibition in Milano, exploiting the potential offered by the storage and management of geographic data in a GIS environment (Burrough, 1986). The system is designed to collect and analyze data showing the changes of the urban landscape going through pre-Expo, to Expo and post-Expo transformations (Gaeta & Di Vita, 2021).

One of the reasons behind this work is that a complete digital database documenting the urban transformations in the Expo 2015 area is not yet available. In fact, all the data needed to implement the GIS were originally represented by maps (on paper or in digital, non-georeferenced format) of development projects and by cartographic work attached to city plans. After checking the compatibility of the process with the original licenses, the scanned maps, which had been made openly accessible to the public, were geo-referenced and vectorized so that the data could be inserted into the GIS database.

The implementation and use of GIS technology implied (i) the definition of the database conceptual and logical model; (ii) the acquisition of a large number of geographic data layers, which were structured according to the design of a relational database. Layers which were acquired included data on: cadastral parcels; buildings; players involved in the urban transformations; land regulations; open spaces; land cover; functional lots; public transport stops; roads and underground utility lines.

The structure of the DB has been designed on a relational model (Codd, 1970), following the standard methodology defined in 1975 by the ANSI/SPARC Committee and going through the successive phases that produce the external, conceptual and logical models. Following this strategy, the external model was defined based on what were assumed to be future users’ needs in terms of data storage, consultation and queries. Aiming to document also the timeline of the urban transformations of the area, the Entity Relationship Diagram (ERD) was designed by integrating the temporal dimension of the transformations in a single conceptual scheme, going from the pre-Expo layout, to the Expo itself, which took place in Milano from May to October 2015, and finally to the post-Expo layout of the area. Subsequently, the logical model of the database was designed as well.

The data acquisition required researching a large number of sources, mainly images of maps available online on the websites of the different stakeholders, ranging from public administration channels and OpenStreetMap crowdsourced geodata to official Expo 2015 communication platforms. They were then geo-referenced in order to acquire spatial elements in vector format, which were afterwards stored in the spatial database of the GIS, becoming easily manageable and upgradeable in an interactive way. Notably, the topological models of the streets and of the underground district heating network were implemented, in the latter case also connecting each building with the corresponding segment of the network (Cazzaniga et al., 2013). Finally, the topological consistency and coherence of the network and its components was validated.

The application of GIS technologies to monitor the transformations of the entire site made it possible to understand and analyze the different phases of the evolution of the urban territory, identifying critical issues and strengths of the development projects. Indeed, in the GIS environment it is now possible to perform reproducible elaborations and analyses useful for understanding how the area changed over time, especially from an urban planning point of view. This approach can provide insights on the surface covered by buildings in the different periods and on the change of use or decommissioning of exhibition pavilions in the post-Expo environment. Moreover, the database model allows users to query the data in order to identify underground services as well as buildings that may be affected by future works on roads or structures located in the area of interest. Such functionalities and the retrieved information could be crucial, especially considering the recent construction of a critical structure like the new Galeazzi hospital, operative since 2022. Finally, the possibility of presenting the project, the data and the related metadata, and of communicating them to a wider audience of non-technical users, was envisaged through the publication of a WebGIS on the Internet, which was tested with a demo. In the future, with further improvements, this prototype could become a decision support system, used as a tool to understand the area for the benefit of all actors involved in the urban transformations, whatever their expertise and background. In particular, the choice of the web platform was driven by the possibility of making the project as accessible as possible, also through expandable tools in support of geo-narratives and storytelling as well as easy-to-understand dashboards for visualizing quantitative analysis results.

The whole project has been developed by using free and open-source technologies, namely MySQL Workbench for the development of the database model, QGIS for the implementation of the system and GeoNode for the testing of the publication of the System on the Internet. The choice to use free and open-source technologies is both an economical and ethical solution aimed at knowledge sharing and at making the DB flexible and easily expandable, facilitating the integration of new data, their updating and the implementation of future functionalities, paying attention also to the technical accessibility even by non-expert users.

Academic Track
UBT E / N209 - Floor 3
15:15
15:15
5min
TOWARDS A PAN-EU BUILDING FOOTPRINT MAP BASED ON THE HIERARCHICAL CONFLATION OF OPEN DATASETS: THE DIGITAL BUILDING STOCK MODEL - DBSM
Pietro Florio

Currently, a reliable, harmonized and comprehensive pan-EU map of the building stock in vector format is not publicly available, not even at level of detail 0 (LOD0, according to the CityGML standard), at which the buildings’ footprints can be identified.
European countries offer vector maps of their building stock through a variety of levels of detail, formats, and tools; data across countries is often heterogeneous in terms of attributes, accuracy and temporal coverage, available through different user interfaces, or hardly accessible due to language barriers. Bottom-up solutions from local cadastral data in the framework of the INSPIRE initiative and top-down standard-setting regulations like the EU Regulation 2023/138 laying down a list of specific high-value datasets and the arrangements for their publication and re-use [1], are increasing and improving the homogeneity in the data availability.
Meanwhile, crowd-sourced providers of building footprint vectors like OpenStreetMap (www.openstreetmap.org) are covering an increasing fraction of the territory of the European Union. Simultaneously, improvements in remote sensing have increased the resolution of satellite imagery and allowed building footprints to be segmented from very-high-resolution images using deep learning: major information technology stakeholders (like Microsoft and Google) have publicly disseminated large vector datasets with extensive territorial coverage. Other research institutions have released grid-based built-up maps covering the world at a resolution of 10 metres (like the Built-Up Surface of the Global Human Settlement Layer) or Europe at a resolution of 2 metres (like the European Settlement Map). Another project, EUBUCCO [2], has compiled a vector database of individual building footprints for 200+ million buildings across the 27 European Union countries and Switzerland, by merging 50 open government datasets and OpenStreetMap, which have been collected, harmonized and partly validated.
The methodology presented here provides a replicable workflow for generating seamless building datasets for each of the EU-27 countries, by combining the best available public datasets.
After reviewing existing literature and assessing publicly available buildings data sources, the following were identified as core input datasets:
• OpenStreetMap (OSM): a free and open-source global dataset of geographic features, including building footprints and attributes;
• Microsoft Buildings (MSB): a freely available dataset of building footprints developed by Microsoft using machine learning algorithm on very high-resolution satellite imagery [3];
• European Settlement Map (ESM): raster dataset of built-up areas classified using Convolutional Neural Networks from 2-meter spatial resolution from very high-resolution imagery available through Copernicus [4].
Building footprints are available in OpenStreetMap across all 27 countries, but with different levels of completeness and coverage. Human contributors trace data in OSM manually, so the available building footprints are considered of higher geometric quality than those extracted by the machine learning algorithms behind the MSB and ESM datasets. Microsoft provides high-resolution building footprints for all 27 countries, but their coverage within the country areas varies considerably. The ESM dataset was derived from a seamless mosaic covering the entire EU-27 area, so it is considered the most complete in terms of coverage, although its lower resolution and quality do not allow extracting building footprints as detailed as those available from OSM and MSB.
The combination of the above-listed datasets is carried out with a stepwise approach, sketched below. First, the MSB dataset is compared to OSM, and buildings are selected for any area where they don’t overlap or intersect. MSB buildings below 40 m2 of surface are filtered out as outliers. Then, the ESM data is compared to the combined OSM and MSB buildings and vectorised, to fill any gap not covered by the latter. Building footprints issued from ESM are further refined with various geospatial post-processing operations (e.g., buffering, hole filling, …), then filtered to retain only features above 100 m2 of surface, thus discarding outliers.
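
A simplified sketch of the first step with GeoPandas (file names are hypothetical; the actual workflow is implemented as a QGIS model, as described below):

    import geopandas as gpd

    osm = gpd.read_file("osm_buildings.gpkg")
    msb = gpd.read_file("msb_buildings.gpkg").to_crs(osm.crs)

    # Keep only MSB footprints that do not intersect any OSM building
    hits = gpd.sjoin(msb, osm, how="left", predicate="intersects")
    msb_only = msb.loc[hits[hits["index_right"].isna()].index.unique()]

    # Filter out outliers below 40 m2 (areas computed in an equal-area CRS)
    areas = msb_only.to_crs(epsg=3035).geometry.area
    msb_only = msb_only[areas >= 40]
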
To implement and automate the described logical workflow, an interactive model has been developed to work in the popular QGIS desktop software. The QGIS model builder allows for building logical processing workflows by linking input data forms, variables and all the analysis functions available in the software.
The conflation process is conducted at the country level since OSM and MSB sources are already conveniently provided in country extent packages. Depending on the geographic size of each country and the amount of data included, some countries are further split into tiles for processing. The resulting building footprints from each input dataset are kept in separate files for easier handling, but can be combined visually in GIS software or physically merged in a single file.
There are several known limitations to the data and the processing workflow:
• Many MSB building footprints present irregular geometries that are caused by faulty image interpretation. These can be filtered by calculating the vertex angle values of each polygon and removing specific outlier values. A methodology was developed at small scale, but it was not possible to implement it at country scale yet.
• The ESM geometries do not accurately describe the actual building footprints but only the rough block outline. While ESM has seamless coverage, its best application would be for guiding additional feature extraction from VHR imagery in areas where OSM and MSB have poor coverage.
• The default overlap settings could be tweaked and dynamically adjusted, based on the built-up pattern (e.g., less in urban areas, more in rural areas).
• Filters of minimum feature size of 40 m2 for MSB and 100 m2 for ESM can be optimised to find the most robust balance between including non-building features and actual smaller buildings.
The resulting buildings dataset is compared with the European Commission’s GHSL Built-up surface layer [5] to get an understanding of the respective coverage at pan European level. A more focused look into the comparison with available cadastral data for a particular city, provides a preliminary understanding of the accuracy of the new layer along with its limitations.

Academic Track
UBT E / N209 - Floor 3
15:20
15:20
5min
Validating the European Ground Motion Service: An Assessment of Measurement Point Density
Joan Sala Calero, Amalia Vradi

The European Ground Motion Service (EGMS) constitutes the first application of high-resolution ground deformation monitoring for the Copernicus Participating States. It provides valuable information on geohazards and human-induced deformation thanks to the interferometric analysis of Sentinel-1 radar images. This challenging initiative constitutes the first public ground motion dataset, open and available for various applications and studies.

The subject of this abstract is the validation of the EGMS product in terms of spatial coverage and density of measurement points. A total of twelve sites have been selected for this activity, covering various areas of Europe and representing the EGMS data processing entities equally. To measure the quality of the point density, we employ open land cover data and evaluate the density per class. Furthermore, we propose statistical parameters associated with the data processing and timeseries estimation to ensure they are consistent across the different selected sites.

The usability criteria to be evaluated concern the completeness of the product, its consistency, and the pointwise quality measures. Ensuring the completeness and consistency of the EGMS product is essential to its effective use. To achieve completeness, it is important to ensure that the data gaps and density measurements are consistent with the land cover classes that are prone to landscape variation. Consistency is also vital for point density across the same land cover class for different regions. For instance, urban classes will have higher density than farming grounds, and this density should be consistent between the ascending and descending products. Pointwise quality measures are critical in assessing the quality of the EGMS PSI results. For example, the temporal coherence is expected to be higher in urban classes, and the root-mean-square error should be lower. Overall, these measures and standards are crucial in ensuring the usefulness and reliability of the EGMS product for a wide range of applications, including environmental management, urban planning, and disaster response.

For the validation of point density, a dataset of 12 selected sites across Europe is used, representing the four processing entities (TRE Altamira, GAF, e-GEOS, NORCE). The aim of the point density validation activity is to ensure consistency across the EU territories by comparing the point density at three sites for each algorithm, one of which is in a rural mountainous area while the other two are urban. The dataset is obtained directly from the Copernicus Land – Urban Atlas 2018 and contains validated Urban Atlas data with the polygons of the different land cover classes, along with metadata and quality information. We have extensive, verified Urban Atlas (version 2018) datasets for the cities of Barcelona/Bucharest (covered by TRE Altamira), Bologna/Sofia (covered by e-GEOS), Stockholm/Warsaw (covered by NORCE) and Brussels/Bratislava (covered by GAF). In parallel, we select four different rural and mountainous areas to analyse more challenging scenarios for the four processing chains of the providers.

There are 27 different land cover classes defined in Urban Atlas. To facilitate the analysis and the interpretation of the results, we aggregate and present our findings for each of the main CLC groups: Artificial Surfaces, Forest and seminatural areas, Agricultural areas, Wetlands and Water bodies.

For the validation measures, key performance indicators (KPI) are calculated, with values between 0 and 1. We normalise the estimated density values of each service provider with respect to the highest value for Artificial surfaces, Agricultural areas, and Forest and seminatural areas, since users expect consistently good densities in these classes, particularly Artificial surfaces; and with respect to the lowest value for Wetlands and Water bodies. This enables outlier detection, since the applied algorithms should barely produce any measurement points on these surfaces.
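
A toy illustration of this normalisation (hypothetical densities in points/km², not the validation system's actual figures):

    import pandas as pd

    density = pd.DataFrame(
        {"Artificial surfaces": [820, 760, 900, 710],
         "Wetlands and Water bodies": [2, 11, 4, 3]},
        index=["provider_1", "provider_2", "provider_3", "provider_4"],
    )

    # Higher density is better for built-up, agricultural and forest classes
    kpi_artificial = density["Artificial surfaces"] / density["Artificial surfaces"].max()

    # Lower density is better for water and wetlands (points there suggest outliers)
    kpi_water = density["Wetlands and Water bodies"].min() / density["Wetlands and Water bodies"]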

Regarding the pre-processing of the data from EGMS, one of the challenges was the overlapping of bursts from different Sentinel-1 satellite tracks. If all bursts were included in the analysis, areas with more track overlaps would result in a higher point density, creating a bias in the data. To address this issue, a custom algorithm was designed to identify and extract the unique, non-overlapping polygon for each burst. This iterative algorithm was specifically designed to ensure a fair comparison among different areas, and to eliminate any biases that could impact the results of the analysis.
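
One possible shape of such an iterative de-overlapping step, sketched with Shapely (our assumption for illustration, not the authors' published algorithm):

    from shapely.ops import unary_union

    def unique_footprints(bursts):
        """Give each burst only the part of its footprint not claimed by earlier bursts."""
        claimed, unique = None, []
        for footprint in bursts:  # bursts: iterable of shapely Polygons
            part = footprint if claimed is None else footprint.difference(claimed)
            unique.append(part)
            claimed = footprint if claimed is None else unary_union([claimed, footprint])
        return unique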

In conclusion, as an open and freely available dataset, the EGMS will provide valuable resources for a wide range of applications and studies, including those that leverage free and open-source software for geospatial analysis. The validation results presented here will help to ensure the accuracy and reliability of the EGMS product, thereby enabling further research and applications in areas such as geohazards, environmental monitoring, and infrastructure management.

References

Costantini, M., Minati, F., Trillo, F., Ferretti, A., Novali, F., Passera, E., Dehls, J., Larsen, Y., Marinkovic, P., Eineder, M. and Brcic, R., 2021, July. European ground motion service (EGMS). In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 3293-3296). IEEE.

Urban Atlas, 2018. Copernicus Land Monitoring Service. European Environment Agency: Copenhagen, Denmark.

Academic Track
UBT E / N209 - Floor 3
15:30
15:30
30min
coffee-break
Outdoor Stage
15:30
30min
Coffee-break
Lumbardhi
15:30
30min
coffee-break
Drini
15:30
30min
coffee-break
Mirusha
15:30
30min
coffee-break
UBT E / N209 - Floor 3
15:30
30min
coffee-break
UBT F / N212 - Floor 3
15:30
30min
coffee-break
UBT C / N109 - Second Floor
15:30
30min
coffee-break
UBT C / N110 - Second Floor
15:30
30min
coffee-break
UBT C / N111 - Second Floor
15:30
30min
coffee-break
UBT D / N112 - Second Floor
15:30
30min
coffee-break
UBT D / N113 - Second Floor
15:30
30min
coffee-break
UBT D / N115 - Second Floor
16:00
16:00
30min
EOReader - Remote-sensing opensource python library for optical and SAR sensors
BRAUN Rémi

EOReader is an open-source remote-sensing Python library that reads optical and SAR constellations and loads and stacks bands, clouds, DEM and spectral indices in a sensor-agnostic way. The supported constellations are:

Optical:
  • Sentinel-2 and Sentinel-2 Theia
  • Sentinel-3 OLCI and SLSTR
  • Landsat 1 to 9
  • Harmonized Landsat-Sentinel
  • PlanetScope, SkySat and RapidEye
  • Pleiades and Pleiades-Neo
  • SPOT-6/7
  • SPOT-4/5
  • Vision-1
  • Maxar
  • SuperView-1
  • GEOSAT-2

SAR:
  • Sentinel-1
  • COSMO-Skymed
  • TerraSAR-X, TanDEM-X and PAZ SAR
  • RADARSAT-2 and RADARSAT-Constellation
  • ICEYE
  • SAOCOM
  • Capella

It also implements sensor-agnostic features, such as loading and stacking many bands:
- satellite bands (optical or SAR)
- spectral indices
- clouds
- DEM

Context

As one of the Copernicus Emergency Management Service Rapid Mapping and Risk and Recovery Mapping operators, SERTIT needs to deliver geoinformation (such as flood or fire delineation, landslide mapping, etc.) based on multiple EO constellations.

In rapid mapping, it is important to have access to various sensor types, resolutions, and satellites. Indeed, SAR sensors can detect through clouds and at night, while optical sensors benefit from multispectral bands that allow the crisis information to be analyzed and classified more accurately.

This is why SERTIT decided to decouple the sensor handling from the extraction algorithms: the latter should be able to ingest semantic bands without worrying about how to load the specific sensor band or what unit it is in.
The assumption was made that all the spectral bands from optical sensors could be mapped to one another, in addition to the natural mapping between SAR bands.
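
A minimal usage sketch following the pattern documented by EOReader (the product path and band choice here are illustrative assumptions):

    from eoreader.reader import Reader
    from eoreader.bands import RED, GREEN, NDVI, CLOUDS

    # Open a product; the constellation is auto-detected from the archive
    prod = Reader().open("path/to/S2_product.zip")

    # Sensor-agnostic loading: the same semantic band names work for any constellation
    bands = prod.load([GREEN, NDVI, CLOUDS])

    # Stack several bands into a single array / GeoTIFF
    stack = prod.stack([RED, GREEN, NDVI], stack_path="stack.tif")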

Examples

  • Why EOReader?
  • Basic tutorial
  • Optical data
  • SAR data
  • VHR data
  • Water detection on multiple products
  • STAC
State of software
Lumbardhi
16:00
30min
GeoNode at work: how do I do this, how do I do that?
Alessio Fabiani, Giovanni Allegri

GeoSolutions has been involved in a number of projects, ranging from local administrations to global institutions, involving GeoNode deployments, customizations and enhancements. A gallery of projects and use cases will showcase the versatility and effectiveness of GeoNode, both as a standalone application and as a service component, for building secured geodata catalogs and web mapping services, dashboards and geostories. In particular, the recent advancements in data ingestion and harvesting workflows will be presented, along with the many ways to expose its secured services to third-party clients. GeoNode’s built-in capabilities for extending and customizing its frontend application will also be demonstrated.

Use cases & applications
UBT C / N110 - Second Floor
16:00
30min
Introduction to Coordinate Systems
Javier Jimenez Shaw

Introduction to basic but important concepts about Coordinate Reference Systems (what is doable in 20 min ;)

  • Geographic Coordinate (Reference) Systems
  • Different Datums/Ellipsoids
  • Projections (Mercator, UTM, LCC, ...)
  • EPSG catalog
  • WKT (well known text) description
  • Reference to PROJ.org library

The purpose is to explain basic concepts to have a good basis to understand later more complex problems. The presentation will have a lot of links to go deeper into any area of interest.
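
As a taste of these concepts, a short pyproj sketch (coordinates illustrative) that builds a CRS from the EPSG catalog, prints its WKT description, and projects geographic coordinates to UTM:

    from pyproj import CRS, Transformer

    crs = CRS.from_epsg(4326)        # geographic CRS on the WGS 84 datum/ellipsoid
    print(crs.to_wkt(pretty=True))   # WKT (well known text) description

    # Project lon/lat to UTM zone 34N (EPSG:32634); PROJ does the work underneath
    t = Transformer.from_crs("EPSG:4326", "EPSG:32634", always_xy=True)
    x, y = t.transform(21.17, 42.67) # -> easting/northing in metres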

Education
UBT D / N113 - Second Floor
16:00
30min
Let's defense my country using FOSS4G!
Sanghee Shin

This talk is about the current state of MilMap and its ongoing development. MilMap is a military geo-portal system widely and successfully used in every sector of the Korean military. The system is now undergoing a major change from a geo-portal to a military digital twin system.

MilMap is developed on top of numerous open source projects such as PostGIS, GeoServer, GeoWebCache, Cesium, OpenLayers, mago3D, and OpenGXT. The system provides several functionalities, like POI search, geospatial data search, layer control, satellite image search and download, spatial terrain analysis, coordinate reading, and map notes, to military officers through the intranet. Although the system provides geospatial analytics functions through OGC WPS (Web Processing Service), the current system is basically a web-based 3D GIS for data viewing and printing. Thanks to MilMap, military officers can now access a huge amount of geospatial data (maps, imagery, 3D, POI, and others) in their browsers without installing additional software.

MilMap is now undergoing further development to become a more customized, automated, and analytical system. The future MilMap will support user data uploading for intelligence sharing, more bespoke battlefield analysis, and more. In the long run, MilMap is expected to become a cloud-based military digital twin system for geospatial intelligence sharing and battlefield analysis & simulation.

Use cases & applications
Drini
16:00
30min
Low-cost AirQuality stations + open standard (OGC SensorThings) + open data (CC-BY) + open source (FROST + QGIS plugin for sensors)
Piergiorgio Cipriano, Luca Giovannini, Giacomo Magisano

This is the story of two twin projects (namely AIR-BREAK and USAGE) undertaken by Deda Next on dynamic sensor-based data, from self-built air quality stations to the implementation of an OGC-standard-compliant client solution.
In the first half of 2022, within the AIR-BREAK project (https://www.uia-initiative.eu/en/uia-cities/ferrara), we involved 10 local high schools in self-building 40 low-cost stations (ca. 200€ each, with off-the-shelf sensors and electronic equipment) for measuring air quality (PM10, PM2.5, CO2) and climate (temperature, humidity). After the assembly was completed, in late 2022 the stations were installed at high schools, private households, private companies and local associations. Measurements are collected every 20 seconds and pushed to the RMAP server (Rete Monitoraggio Ambientale Partecipativo = Participatory Environmental Monitoring Network - https://rmap.cc/).
Hourly average values are then ingested with Apache NiFi into OGC’s SensorThings API (aka STA) compliant server of the Municipality of Ferrara (https://iot.comune.fe.it/FROST-Server/v1.1/) based on the open source FROST solution by Fraunhofer Institute (https://github.com/FraunhoferIOSB/FROST-Server).
STA provides an open, geospatial-enabled and unified way to interconnect Internet of Things (IoT) devices, data and applications over the Web (https://www.ogc.org/standard/sensorthings/). STA is an open standard, it builds on web protocols and on OGC’s SWE standards and has an easy-to-use REST-like interface, providing a uniform way to expose the full potential of the IoT (https://github.com/opengeospatial/sensorthings/).
In the second half of 2022, within the USAGE project (https://www.usage-project.eu/), we released v1 of a QGIS plugin for the STA protocol.
The plugin enables QGIS to access dynamic data from heterogeneous domains and different sensor/IoT platforms, using the same standard data model and API. Among others, the dynamic data collected by the Municipality of Ferrara will be CC-BY licensed and made accessible from the municipal open data portal (https://dati.comune.fe.it/).
During the talk, a live demo will be showcased, accessing public endpoints exposing measurements (timeseries) about air quality (from EEA), water (BRGM), bicycle counters, traffic sensors, etc.
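
To give a flavour of that REST-like interface, a small client-side sketch against the FROST endpoint mentioned above (the Datastream ID is an illustrative assumption):

    import requests

    base = "https://iot.comune.fe.it/FROST-Server/v1.1"

    # List sensor "Things" together with their Datastreams
    things = requests.get(f"{base}/Things", params={"$expand": "Datastreams"}).json()

    # Fetch the ten most recent observations of one Datastream
    obs = requests.get(
        f"{base}/Datastreams(1)/Observations",
        params={"$orderby": "phenomenonTime desc", "$top": 10},
    ).json()
    for o in obs["value"]:
        print(o["phenomenonTime"], o["result"])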

Open source geospatial ‘Made in Europe’
UBT F / N212 - Floor 3
16:00
30min
Mapping Japan cultural heritages with OpenSource based architecture
IGUCHI Kanahiro, Raymond Lay

Japan fascinates the world with its rich culture, materialized, for example, in the many cultural sites across its territory. To protect them, the Law for the Protection of Cultural Properties established a “cultural heritage” designation system, under which designated places must be preserved.
With the collaboration of the Nara National Research Institute for Cultural Properties, Japan's cultural heritage sites have been mapped in a WebGIS tool where more than 100,000 places can be visualized.
This talk will present the tool's functionalities and, on the technical side, its OpenSource-based architecture.

Use cases & applications
UBT D / N115 - Second Floor
16:00
30min
OSGeoLive project report
Astrid Emde, Angelos Tzotsos

OSGeoLive is a self-contained bootable DVD, USB thumb drive or Virtual Machine based on Lubuntu, that allows you to try a wide variety of open source geospatial software without installing anything. It is composed entirely of free software, allowing it to be freely distributed, duplicated and passed around. It provides pre-configured applications for a range of geospatial use cases, including storage, publishing, viewing, analysis and manipulation of data. It also contains sample datasets and documentation. OSGeoLive is an OSGeo project used in several workshops at FOSS4Gs around the world.

The OSGeoLive project has consistently and sustainably attracted contributions from around 50 projects for over a decade. Why has it been successful? What has attracted hundreds of diverse people to contribute to this project? How are technology changes affecting OSGeoLive and, by extension, the greater OSGeo ecosystem? Where is OSGeoLive heading, and what are the challenges and opportunities for the future? How does the project steering committee operate? In this presentation we will cover the current roadmap, opportunities and challenges, and why people are using OSGeoLive.
- Project page https://live.osgeo.org
- Link to the presentation https://live.osgeo.org/en/presentation.html

State of software
Outdoor Stage
16:00
30min
Open data of digital twin city models in CityGML format and their import into OpenStreetMap: Project PLATEAU2OSM
Taichi Furuhashi

In recent years, 3D city models have gained popularity for supporting urban planning, citizen engagement, and research. As technology and infrastructure have improved, many cities and countries now use 3D models to address urban issues, encourage participation, and inform decision-making.

The Japanese government, including the Ministry of Land, Infrastructure, Transport and Tourism with its Project PLATEAU, has promoted open 3D city models and 3D point cloud data. Over 100 cities are developing and releasing open digital twin data in CityGML format as of February 2023. Binyu et al. published the results of these efforts, which are also highlighted in the 3D City Index benchmarking report. The report shows that seven of the 40 cities compared (18%) were Japanese.

This report discusses the current state of open digital twin data in Japan, which is compatible with the Open Database License (ODbL). The data can be imported into popular tools such as OpenStreetMap, and converters have been developed for this purpose. Since 2022, import work has been conducted on an experimental basis in collaboration with national and international communities. Sharing the results and challenges of this work is expected to promote the use of 3D city model data globally.

Open Data
UBT D / N112 - Second Floor
16:00
30min
OpenStreetMap as an input source for producing governmental datasets: the case of the Italian Military Geographic Institute
Marco Minghini, Alessandro Sarretta

The collection, curation and publication of geospatial information has for centuries been the sole prerogative of public sector organisations. Such data has traditionally been considered the reference source for datasets and cartographic outputs. However, new geospatial data sources (e.g. from the private sector and citizen-generated data [1]) have emerged that are currently challenging the role of the public sector [2]. In response, governments are exploring new ways of managing the creation and update of their geospatial datasets [3].
Datasets of high relevance are increasingly produced by both private companies and crowdsourced initiatives. For example, in 2022 Microsoft released Microsoft Building Footprints, a dataset of around 1 billion building footprints extracted from Bing Maps imagery from 2014 to 2022. More recently, in December 2022, Amazon Web Services (AWS), Meta, Microsoft, and TomTom founded the Overture Maps Foundation (https://www.linuxfoundation.org/press/linux-foundation-announces-overture-maps-foundation-to-build-interoperable-open-map-data), a joint initiative in partnership with the Linux Foundation that aims to curate and release worldwide map data aggregated from multiple input sources, including civic organisations and open data sources, especially OpenStreetMap data.
These initiatives aim to improve the coverage of existing governmental geospatial information through the release of open data and a strong dependency on OpenStreetMap. In particular, the Overture initiative has the explicit goal to add quality checks, data integration, and alignment of schemas to OSM data.
Recently, the Italian Military Geographic Institute (IGM, one of the governmental mapping agencies in Italy) has released a multi-layer dataset called “Database di Sintesi Nazionale” (DBSN, https://www.igmi.org/en/dbsn-database-di-sintesi-nazionale). The DBSN is intended to include geospatial information relevant to analysis and representation at the national level, with the additional purpose to derive maps at the scale 1:25,000 through automatic procedures. The creation of the DBSN builds on top of various information sources, with regional geotopographic data as primary source of information and products from other national public bodies (e.g. cadastral maps) as additional sources. The source is recorded in a specific attribute field for each feature in the database, with a list of codes referencing the various sources. Among the external sources used as input for the work of integration in the DBSN, OpenStreetMap was explicitly considered and used.
One of the elements of novelty, at least in the Italian context, is the release of the DBSN under the ODbL licence (https://opendatacommons.org/licenses/odbl), since the inclusion of OSM data requires derivative products to be released under the same licence.
Currently, the DBSN includes data covering only 12 out of the 20 Italian regions (Abruzzo, Basilicata, Calabria, Campania, Lazio, Marche, Molise, Puglia, Sardegna, Sicilia, Toscana, Umbria). The remaining ones will be released in the near future.
The datasets have been downloaded from the official IGM website in January 2023.
The DBSN schema is a subset of the specifications defined in the "Catalogue of Spatial Data - Content Specifications for Geotopographic Databases" (Decree of 10 November 2011) and is composed of 10 layers, 29 themes and 91 classes. We compared it with the OpenStreetMap specifications (based on the community tagging scheme at https://wiki.openstreetmap.org/wiki/Map_Features) and selected two main themes (buildings and streets).
The analysis was performed through a set of Python scripts available under the open source WTFPL licence at https://github.com/napo/dbsnosmcompare.
Firstly, we analysed, for buildings and streets in the IGM database, where OSM data was used as the primary source of information. The percentage of buildings derived from OSM is minimal, ranging from 0.01% in Umbria to 1.3% in Marche; for streets, the variability between regions is much larger, ranging from almost 0% in Abruzzo and Calabria to 94% in Umbria.
Secondly, we calculated the area covered by buildings and the length of streets in both the IGM and OSM databases to assess the completeness of OSM against the official IGM dataset.
In the 12 regions, the area covered by buildings in OSM is on average about 55% of the corresponding area in IGM, while the corresponding figure for the length of streets is about 78%. However, these numbers vary widely among regions, ranging between 32% in Calabria and 105% in Puglia for buildings, and between 46% in Calabria and 103% in Umbria for streets.
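As a rough illustration of this comparison, a few lines of GeoPandas suffice to compute such a coverage ratio for one region; the file and layer names below are hypothetical placeholders, while the published scripts in the repository above implement the full workflow:

    import geopandas as gpd

    # Building footprints from the two sources for a single region
    # (hypothetical file and layer names).
    dbsn = gpd.read_file("dbsn_region.gpkg", layer="buildings")
    osm = gpd.read_file("osm_region.gpkg", layer="buildings")

    # Reproject to an equal-area CRS (here LAEA Europe) before measuring.
    dbsn = dbsn.to_crs(epsg=3035)
    osm = osm.to_crs(epsg=3035)

    # Completeness proxy: total OSM building area over total DBSN area.
    ratio = osm.geometry.area.sum() / dbsn.geometry.area.sum()
    print(f"OSM/DBSN building area ratio: {ratio:.1%}")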
These first results show that the main source of information in the DBSN (namely the official regional data) is highly variable across the 12 regions, which required the IGM to find additional data sources to fill the gaps. OSM plays a minor role in integrating buildings in the database, while it demonstrates a high potential for contributing street information.
Results also show that, even with only a small contribution, some elements that are present in OSM are still not included in the DBSN. This can be due to at least two reasons: (i) the current workflow for selecting elements in OSM (through tags) does not include some potentially relevant elements; (ii) the (ideally) daily update of OSM brings new features into the database at a pace that the IGM, and governmental organisations in general, cannot match.
While this study highlights the importance that OpenStreetMap has achieved as a reference source of geospatial information for governmental bodies as well, providing evidence of its contribution to the national database of the IGM, it also paves the way for improving OpenStreetMap itself by importing data for various layers, benefiting from the release of the DBSN under the ODbL licence.

Academic Track
UBT E / N209 - Floor 3
16:00
30min
Routing Machine, state and side-effects
Marin Nikolli

A routing machine computes the route a user can take from one point to another, with directions at each waypoint. Paid services such as Google Maps already offer this, through a centralized model of usage. In this talk, we will look at the existing libraries and implementations that are nearly deprecated but worth keeping alive, because for the open source community the ability to customize and change them is essential. There are currently no actively maintained open source or community versions of a routing machine for maps; we need to change that. We can do so by improving a couple of things that already exist: providing more wrappers for different frontend frameworks, say Vue, React, and finally Svelte; keeping routes up to date, so that the selection of route type (car, bike, or walking) reflects the data received from the maps; and defining a more sustainable business model. Open source can be more active and resilient than paid, centralized services, but we need to make sure that the services we offer and implement for our clients reflect a similar level of dedication.

Open source geospatial ‘Made in Europe’
UBT C / N109 - Second Floor
16:00
30min
Selection of noise measurement points based on road network using PyQGIS
Choi Hyeong-gwan

This talk presents a plug-in created using PyQGIS, together with an example of using it as input for decision-making on a noise measurement station selection policy.
As data for use in decision-making by public institutions, we introduce cases in which basic public data are utilized and processed to ultimately serve as core data for decision-making.
We will also talk about how text-based data held and provided by public institutions is used for spatial representation and policy making, and why the opening of public data will play an even more important role in the future.
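To give a flavour of the kind of PyQGIS processing involved, the hedged fragment below selects candidate points near major roads; the layer paths and the 100 m distance are invented for illustration, the fragment must run inside the QGIS Python environment, and it is not the plug-in itself:

    import processing
    from qgis.core import QgsVectorLayer

    roads = QgsVectorLayer("roads.gpkg|layername=roads", "roads", "ogr")
    candidates = QgsVectorLayer("points.gpkg|layername=points", "candidates", "ogr")

    # Buffer the road network (100 m here is purely illustrative).
    buffered = processing.run(
        "native:buffer",
        {"INPUT": roads, "DISTANCE": 100, "DISSOLVE": True, "OUTPUT": "memory:"},
    )["OUTPUT"]

    # Keep only candidate measurement points intersecting the buffer.
    selected = processing.run(
        "native:extractbylocation",
        {"INPUT": candidates, "PREDICATE": [0], "INTERSECT": buffered,
         "OUTPUT": "memory:"},
    )["OUTPUT"]
    print(f"{selected.featureCount()} candidate locations near major roads")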

Open Data
UBT C / N111 - Second Floor
16:00
30min
Styling Natural Earth with GeoServer and GeoCSS
Andrea Aime

Natural Earth is a public domain map dataset available at 1:10 million, 1:50 million, and 1:110 million scales. Featuring tightly integrated vector and raster data, Natural Earth lets one build a variety of visually pleasing, well-crafted maps with cartography or GIS software.

GeoServer GeoCSS is a CSS-inspired language that lets you build maps without consuming your fingertips in the process, while providing all the same abilities as SLD.

In this presentation we’ll show how we have built a world political map and a world geographic map based on Natural Earth, using CSS, and shared the results on GitHub. We’ll share with you how simple, compact styles can be used to prepare a multiscale map, including:
* Leveraging CSS cascading.
* Building styles that respond to scales in ways that go beyond simple scale dependencies.
* Various types of labeling tricks (conflict resolution and label priority, controlling label density, label placement, typography, labels in various scripts, label shields and more).
* Quickly controlling colors with LessCSS inspired functions.
* Building symbology using GeoServer's large set of well-known marks.

Join this presentation for a relaxing introduction to simple and informative maps.

Use cases & applications
Mirusha
16:30
16:30
30min
3D4DT: An Approach to Explore Decision Trees for Thematic Map Creation as an Interactive 3D Scene
Auriol Degbelo

Background & Problem: There are currently several software tools dedicated to the automatic creation of thematic maps. These can be proprietary (e.g., ArcGIS Online, Carto) or non-proprietary solutions (e.g., SDG Viz, AdaptiveMaps, the GAV Toolkit, the Geoviz Toolkit). An important drawback of these state-of-the-art solutions is that the expertise encapsulated in such software (e.g., how to choose a type of map or visual variables depending on the characteristics of the data contained in the map) is usually not well communicated to the user. That is, users can use these tools to create meaningful maps for their open geographic datasets, but are offered little support in understanding why some suggestions of thematic map types were made (e.g., why a dot map is proposed by a toolkit instead of a choropleth map). Put simply, users get little insight into the decision processes of current tools/toolkits for thematic web map creation.

Contributions & Target audience: To help users learn about the decision processes of software for automatic map creation, this work introduces the 3D4DT approach. The approach uses JSON (JavaScript Object Notation) as a machine-readable format to represent decision trees and subsequently maps JSON elements to user interface elements for an interactive 3D scene. The contributions of the work are twofold: 1) a controlled vocabulary to support the creation of machine-readable descriptions for decision trees of the Cartography literature; and 2) an approach to navigate these decision trees as interactive scenes in 3D. The approach is implemented as an open-source prototype. It is relevant to both developers and users of software for automatic thematic map creation. The controlled vocabulary is relevant to developers, who can encode the decision trees underlying their software as machine-readable data, and make the ‘brain’ of their software available for reuse in multiple use cases. The exploration of the decision trees as an interactive scene is relevant to users who can retrieve information about the inner workings of software for map creation in an interactive format.
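To give a feel for the idea, a decision tree in a machine-readable form might look like the sketch below; the keys and questions are invented here for illustration and do not reproduce the actual 3D4DT vocabulary (see the repositories listed under Reproducibility):

    # Hypothetical encoding of a cartographic decision tree (the real
    # vocabulary and JSON schemas live in the 3D4DT repository).
    decision_tree = {
        "question": "Is the attribute quantitative?",
        "yes": {
            "question": "Is the data attached to areal units?",
            "yes": {"map_type": "choropleth map"},
            "no": {"map_type": "proportional symbol map"},
        },
        "no": {"map_type": "dot map"},
    }

    def walk(node, answer):
        """Descend the tree, asking `answer` at each decision node."""
        while "map_type" not in node:
            node = node["yes"] if answer(node["question"]) else node["no"]
        return node["map_type"]

    # A user who answers 'yes' to every question ends at a choropleth map.
    print(walk(decision_tree, lambda question: True))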

Implementation: The prototype is available as a web-based application on GitHub. The server is run using Node.js. To speed up the development of the frontend, we have used Vitejs. The 3D interactive scene is implemented using the JavaScript library Three.js. The choice of Three.js was motivated by the fact that it is 1) open source, 2) expressive enough to create a variety of 3D scenes in the browser, and 3) is actively maintained by a community of contributors.

Evaluation: To evaluate the expressiveness of the controlled vocabulary (contribution 1), the work used three decision trees for thematic map creation: 1) DecisionTreeA: the decision tree of the AdaptiveMaps open-source prototype (Degbelo et al., 2020); 2) DecisionTreeB: the decision tree for the choice of thematic map types from (Kraak et al., 2020); and 3) DecisionTreeC: the visual variable syntactics from (White, 2017), which was converted to a decision tree. To evaluate the usability of the 3D interactive scene (contribution 2), the open-source prototype was tested through a lab-based user study. The study compared the interaction with two decision trees using interactive 3D scenes to the same information displayed as a simple website (text+pictures). 12 participants were recruited via personal messages. They were asked to interact with DecisionTreeA and DecisionTreeB using both conditions (interactive 3D vs static). Six participants stated that they had no experience at all in the field of geoinformatics, four claimed to be slightly experienced and two considered themselves very experienced. None of the participants was familiar with the literature which was used for DecisionTreeA and DecisionTreeB. A critical difference between DecisionTreeA and DecisionTreeB is that the latter was simpler in its hierarchical structure. We measured efficiency (time taken to answer questions), effectiveness (number of correct answers during the interaction with the prototype), and memorability (number of correct answers to questions asked of the users after the prototype had been shut down). The key takeaways from the experiments were: 1) participants were slightly faster in the text+pictures condition, but the differences in efficiency values were not statistically significant; 2) using the 3D interactive scene, participants could answer questions pertaining to DecisionTreeB more accurately; differences in effectiveness for the more complex DecisionTreeA were not statistically significant; and 3) the differences in memorability between the two conditions (interactive 3D vs static) were not statistically significant. Hence, an interactive 3D scene could be used as a complementary means to help users understand how thematic maps are created, especially when designers wish to convey this information most accurately.

Relevance for the FOSS4G Community: Since DecisionTreeA is the brain of the AdaptiveMaps open-source prototype that helps create web maps semi-automatically, helping users visually explore that decision tree through the 3D4DT approach is one way of realizing the requirement of algorithmic transparency for intelligent geovisualizations. The controlled vocabulary is relatively simple and could be reused to promote algorithmic transparency for other types of open-source geospatial software, if their decision rules can be modelled as decision trees (i.e., if-then-else rules).

Reproducibility: the data collected during the user study, the script for the analysis as well as all questions answered by the participants can be accessed at https://figshare.com/s/60b1a4a12f9bd32d2759. The source code of the AdaptiveMaps prototype, which used DecisionTreeA to create various thematic maps, can be accessed at https://github.com/aurioldegbelo/AdaptiveMaps . The source code of the 3D4DT prototype, the JSON schemas, and the encoding of the decision trees as JSON can be accessed at https://github.com/aurioldegbelo/3D4DT .

References:
Degbelo, A., Sarfraz, S. and Kray, C. (2020) ‘Data scale as Cartography: a semi-automatic approach for thematic web map creation’, Cartography and Geographic Information Science, 47(2).
Kraak, M.-J., Roth, R.E., Ricker, B., Kagawa, A. and Sourd, G.L. (2020) Mapping for a sustainable world. New York, USA: The United Nations.
White, T. (2017) ‘Symbolization and the visual variables’, in J.P. Wilson (ed.) Geographic Information Science & Technology Body of Knowledge.

Academic Track
UBT E / N209 - Floor 3
16:30
30min
DGGSs and you!
James Banting

Discrete Global Grid Systems (DGGS) are gaining popularity as a new method of geospatial data representation. This presentation will provide an overview of the concept of DGGS and its advantages over traditional geospatial data representation methods.

We will explore the similarities and differences between the various DGGS frameworks, including their cell shapes, grid resolutions, and ability to handle different types of geospatial data. We will also discuss the benefits of using DGGS in geospatial data applications, such as remote sensing, climate modeling, and environmental monitoring.

Overall, this presentation will provide a comprehensive overview of the concept of DGGS and its potential applications in geospatial data analysis and visualization. Attendees will gain a deeper understanding of the advantages and challenges associated with different DGGS frameworks and will gain insights into the ongoing research efforts in this field.

Education
UBT D / N113 - Second Floor
16:30
30min
Development of tool for validity of decision support algorithm for environment impact assessment (EIA) Based on open source
jinwoo park

The open-source-based environmental impact assessment (EIA) decision support verification tool is a web-based tool for verifying the EIA review decision support algorithm against data for each environmental impact category.
The verification tool was developed using open source projects such as PostGIS, GeoServer, and OpenLayers; only the flowchart library is a commercial component, called GoJS.
The tool is intended to verify the adequacy of the implementation of the EIA algorithms developed by experts for each environmental impact category.
It can support comprehensive decision-making, including opinion gathering, by running the review decision-making algorithm on per-impact data and environmental impact analysis results.
The spatial analysis required to verify the algorithm was developed using OpenGXT through the OGC WPS service, and the tool includes a way to visualize the results processed through this spatial analysis function.
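For readers unfamiliar with WPS, querying such a service from Python takes only a few lines with OWSLib; the endpoint URL below is a placeholder rather than the verification tool's actual address:

    from owslib.wps import WebProcessingService

    # Placeholder endpoint; OpenGXT processes are typically published
    # through a GeoServer WPS endpoint like this one.
    wps = WebProcessingService("https://example.org/geoserver/ows", skip_caps=True)
    wps.getcapabilities()

    # List the spatial analysis processes the server offers.
    for process in wps.processes:
        print(process.identifier, "-", process.title)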

This paper is based on the findings of the research project “Development of integrated decision support model for environmental impact assessment project,”(2023-003(R)) which was conducted by the Korea Environment Institute (KEI) and funded by research and development project (Project No. 2020002990003) of the Environmental Industry & Technology Institute (KEITI) and the Ministry of Environment (MOE).

Use cases & applications
UBT D / N115 - Second Floor
16:30
30min
GeoServer used in fun and interesting ways
Andrea Aime, Jody Garnett

GeoServer is the start of so many great open source success stories.

This talk introduces the core GeoServer application and explores the ecosystem that has developed around this beloved OSGeo application. Our presentation draws on the GeoServer ecosystem for use-cases and examples of how the application has been used successfully by a wide range of organizations.

Each use-case highlights a capability of GeoServer providing an overview of the technology drawn from practical examples.

  • Andrea Aime is on hand to share success stories highlighting GeoServer use in managing vulnerable ecosystems, agriculture information management, and marine data management.
  • Jody Garnett will look at how GeoServer technology powers cloud services
  • Gabriel will look at amazing remixes such as Cloud Native GeoServer
  • GeoServer technology powering the OSGeo community, including GeoNode, geOrchestra
  • A showcase of examples collected from our user list

Attend this talk to learn what GeoServer is good for out-of-the-box, and for inspiration on what is possible using GeoServer and the FOSS4G community.

Use cases & applications
Mirusha
16:30
30min
Geological Service of Kosovo - Legal Infrastructure, Responsibilities & Technical - Analytical Research Capacities in Geology
Luan Morina

The presentation will be focused on the elaboration of the relevant legal basis of the Geological Service of Kosovo. In addition, the description of the main responsibilities will be made, as well as the elaboration of technical analytical capacities which enable the development of research in the field of geology.

Open Data
Lumbardhi
16:30
30min
OGC API feature services built with Hakunapi
Teemu Sipilä

National Land Survey of Finland (NLS) has built multiple feature services based on the OGC API Features standard since 2019. These services provide cadastral and topographic data, buildings, geographic names, and addresses both as open and contract-based APIs.

The engine behind these services is Hakunapi – a high-performance server implementation for easily building “off-the-shelf” Simple Features and customized Complex Features services with geospatial data backed by a PostGIS database. Currently the OGC API Features standard (Parts 1, 2 and 3) is supported. The codebase is written in Java and also utilizes other geospatial libraries such as the JTS Topology Suite and GeoTools.

Hakunapi is now Free and Open Source Software available on GitHub, with version 1.0 released in May 2023. Over the last few years NLS has internally used the library for services providing both Simple Features (like the traditional topographic database) and Complex Features (the cadastral registry and geographic names, with some hierarchical feature structures too).

This talk presents the key features and benefits of using Hakunapi to implement feature services based on the OGC API Features standard. NLS's experiences and best practices in developing these services, and our roadmap towards modern OGC API services, are also discussed.

Demo: https://beta-paikkatieto.maanmittauslaitos.fi/inspire-addresses/features/v1/

Code: https://github.com/nlsfi/hakunapi

Open source geospatial ‘Made in Europe’
UBT F / N212 - Floor 3
16:30
30min
OSGeo and OGC MoU: one year later!
Tom Kralidis, Angelos Tzotsos, Joana Simoes, Codrina Ilie

In January 2022, OSGeo and OGC signed a new and updated version of the Memorandum of Understanding (MoU) that aims to maximize the achievement of the mission and goals of the two organizations: promoting the use of Open Standards and Open source software within the geospatial developer community. Identifying open source technologies that could be used as Reference Implementations for OGC Standards and validating OGC compliance tests are examples of activities that can take place within the scope of the agreement.

More than one year after the agreement was signed and almost one year after it was introduced to the OSGeo community in a keynote at FOSS4G 2022, this presentation will summarize all activities accomplished and future plans, including the establishment of the OSGeo Standards Committee within OSGeo and the organisation of the 3rd joint code sprint, in Switzerland, together with the Apache Software Foundation.

The presentation will also reiterate the benefits of the new agreement, which allows OSGeo charter members to represent the priorities of OSGeo in the development of OGC Standards and supporting documents and services.

Community & Foundation
Outdoor Stage
16:30
30min
OpenStreetMap Seasonal Differential in Citizen Science Volunteered Response Mapping of Flood Disaster Vulnerable Communities in Nigeria
Victor N.Sunday

The ever-increasing threat from disasters is an urgent call for a proactive discourse on the pragmatic elimination and reduction of the challenges and stresses they cause. This study therefore addresses the research gap in the application of crowdsourced rapid response mapping in a developing country, Nigeria, where critical geospatial data for responding to vulnerable communities is grossly unavailable. The study deployed two research techniques, namely participatory crowdsourced mapping and gamification. HOT Tasking Manager data analytics was used to analyze the level of participation and contribution of volunteer mappers over time, while QGIS was used to produce maps unveiling the building footprints generated in OSM before and after each Mapathon. The study delineated 8 LGAs for a mapping task of 2,015 grids and 639 grids for Mapathon battle seasons 1 and 2 respectively. Season 1 covered the flood months (rainy season), while season 2 covered the flood-receding months (dry season). Analysis of flood response mapping season 1 unveiled a total of 571,659 edits, comprising 481,912 buildings and 22,244 km of roads, contributed by an initial 7,601 participants but completed by 1,644 volunteers, mapping 4,946 grids within a timeline of 38 months at the rate of three hours 38 minutes per task; 70% of the volunteer mappers engaged were beginners. Maps showing OSM before and after the Mapathon were produced for ONELGA, Numan, Sarbon Birnin and Ilorin West LGAs respectively. Analysis of flood response mapping season 2 unveiled a total of 357,168 edits, comprising 325,023 buildings and 7,438 km of roads, contributed by an initial 4,006 participants but completed by 801 volunteer mappers using 2,238 grids within a timeline of 14 months at the rate of two hours 33 minutes per task. Maps showing OSM before and after were produced for Afikpo North, Warri South, Logo and Jamare LGAs respectively. The study contributed to measurable targets of SDGs 1 to 7, 11, 13, 15 and 17. The study generated massive critical open geospatial data needed for effective disaster response and the SDGs, paving the way for effective geoinformation e-governance in Nigeria. Lastly, the study promotes the relevance of citizen-generated data for national geospatial data infrastructure development and participatory crowdsourced mapping using OpenStreetMap at local levels. The study has also bridged a critical scientific research gap and inquiry in OSM GIScience.

Open Data
UBT C / N111 - Second Floor
16:30
30min
Opensidewalkmap: A Project And Open Source Framework For An Web-Based Urban Pedestrian Network Inventory Using Openstreetmap Data
Kauê de Moraes Vestena

Interest in urban pedestrian networks is growing, with impacts centered on UN SDGs 3, 11, 10 and 13: improving accessibility helps reduce inequalities, and fostering non-motorized locomotion improves well-being and sustainability in urban scenarios. The idea behind OpenSidewalkMap is to leverage the multi-purpose OpenStreetMap data for pedestrian network data. The structure of the project is decentralized, with localities deployed as nodes on a world web-map. Each node has a modular structure within a webpage, containing apps with different roles, in order to create what is intended to be a full-fledged inventory whose functionality can be expanded as new modules are added. Currently there are four modules:

  • “Webmap”, containing an interactive cartographic representation of the data;
  • “Optimized Routing”, which uses the data to create optimized routes, currently only for a wheelchair profile based on an empiric equation;
  • “Dashboard”, featuring statistical charts to look at the bigger picture of the data, mainly focused on value percentages (thus giving attribute completeness), and also looking at data aging and the number of revisions;
  • “Data quality tool”, looking for the most common possible errors in the data and giving direct links to editors; it is currently focused on finding invalid values, with geometrical and topological error detection planned.

Four more modules are planned: “Data Watching”, to monitor changes and to track and combat possible vandalism, since OSM data is universally editable; “Tiles”, serving raster and maybe vector tiles; “API”, serving features on request; and “Surveying and Validation”, to list projects on different platforms/editors to expand and validate the available data. This way the inventory will continuously cover the full cycle of data: creation and collection; storage, maintenance and management; application and analysis. The project aims to have zero maintenance costs, as everything is hosted on the freely available Microsoft GitHub infrastructure, with all code and data maintained inside GitHub repositories, webpages deployed with GitHub Pages, and updates run through GitHub Actions. In case any of these services shuts down, the software can still be deployed on another server infrastructure with a similar workflow. There is lots of room for improvement, with only the node for the city of Curitiba being available as of February 2023. The homepage of the code is available at: https://kauevestena.github.io/opensidewalkmap/ .
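As an illustration of the OSM-backed data layer such nodes consume, the snippet below pulls sidewalk ways from the Overpass API for a small bounding box around central Curitiba; the bounding box is illustrative and this is not the project's own code:

    import requests

    # Overpass QL: sidewalk ways inside a (south, west, north, east) bbox.
    query = """
    [out:json][timeout:60];
    way["footway"="sidewalk"](-25.45,-49.29,-25.42,-49.25);
    out geom;
    """
    resp = requests.post(
        "https://overpass-api.de/api/interpreter", data={"data": query}
    )
    resp.raise_for_status()
    print(f"{len(resp.json()['elements'])} sidewalk ways fetched")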

Use cases & applications
Drini
16:30
30min
Project PLATEAU ~The initiative of Digital Twin in Japan~
UCHIYAMA, Yuya

Project PLATEAU is an initiative led by the Ministry of Land, Infrastructure, Transport and Tourism of Japan (MLIT) to develop and utilize 3D city models compliant with the CityGML standard. MLIT aims to establish rules for the creation of 3D city models as part of general operations in each local government, and also to release them as open data to promote utilization for urban planning and business creation.

Open Data
UBT D / N112 - Second Floor
16:30
30min
Time series raster data in PostgreSQL with the TimescaleDB and postgis_raster
Jashanpreet Singh

Raster data is a type of digital image data that is stored and processed as a grid of cells, each of which represents a specific area or location in the image. This grid is known as a raster or pixel grid, and each cell contains a value that represents a characteristic of the corresponding area or location, such as color, elevation, temperature, or other attributes. Depending on the resolution of the data, these raster files can vary in size from a few MBs to a few GBs. Hence, reading data from a large set of rasters that have a time dimension associated with them is challenging.

PostgreSQL can be used to store time series raster datasets, which are raster datasets that have a time dimension associated with them. This can be useful for storing and analyzing raster data that changes over time, such as satellite images, climate data, or land cover change data.

To store time series raster datasets in PostgreSQL, we will use the postgis_raster extension, which provides support for storing and manipulating raster data in the database, and the TimescaleDB extension to add time series functionality to PostgreSQL, allowing us to store and query raster data with a time dimension.

Using the TimescaleDB extension we will partition the raster table by converting it to a hypertable, which is what TimescaleDB uses to optimally store and process time series data. This can help us optimize query time.
For aggregated values from raster data over time and space, we will use the continuous aggregate feature of TimescaleDB, a form of materialized view that pre-computes and stores aggregates of raster data over time.
Moreover, TimescaleDB allows compression of data, which can be very helpful when the data is huge, as is usually the case with raster datasets in PostgreSQL, saving space in the database and optimizing some queries.
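A condensed sketch of this setup is shown below, with the SQL issued through psycopg2. The table and column names are invented for illustration, and the continuous aggregate step is omitted because it cannot be created inside a transaction block:

    import psycopg2

    conn = psycopg2.connect("dbname=rasters user=postgres")
    with conn, conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS postgis_raster CASCADE;")
        cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")
        cur.execute("""
            CREATE TABLE IF NOT EXISTS satellite_rasters (
                acquired timestamptz NOT NULL,
                rast     raster
            );
        """)
        # Convert the plain table into a time-partitioned hypertable.
        cur.execute(
            "SELECT create_hypertable('satellite_rasters', 'acquired',"
            " if_not_exists => TRUE);"
        )
        # Enable TimescaleDB's native compression on the raster table.
        cur.execute("ALTER TABLE satellite_rasters SET (timescaledb.compress);")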

The proposed presentation will be of interest to developers, data scientists, and geospatial analysts who work with Raster datasets. It will provide a practical guide to querying the raster datasets in PostgreSQL with TimescaleDB and postgis_raster extension.

Use cases & applications
UBT C / N110 - Second Floor
16:30
30min
View Server - EO Data Visualization in a Cloud Native Way
Lubomir Dolezal

The View Server (VS) is an MIT-licensed, Docker-based, cloud-native, scalable software stack providing external services for searching, viewing, and downloading Earth Observation (EO) data. The service implementations follow open Web service standards: STAC, OpenSearch, WMS, WMTS, and WCS.

Having EOxServer and MapCache as core components enables EO data publication in a modular and configurable way. The process starts with data harvesting, preprocessing and metadata ingestion, and ends with serving pre-cached and on-demand rendered images through an attached Web client (based on the OpenLayers and EOxC libraries) or through individual service endpoints.

EOxServer allows dynamic generation of visual images from multi-spectral data. In this way, specific bands or channels of the original images can be selected as the grey or red, green, and blue output colour channels. It also supports flexible rendering based on previously extracted image statistics, pansharpening on the fly, filtering long time periods of products intersecting the query using CQL syntax on metadata parameters, and more.

VS supports S3, OpenStack Swift, HTTP and local files as data storage, and can be deployed in a Docker Swarm environment via docker-compose templates or in a Kubernetes environment as a set of Helm charts.
The software stack was and is used by EOX in quite a number of operational deployments for ESA, like the VirES projects, the Copernicus Space Component Data Access system (CSCDA), or more recently the Earth Observation Exploitation Platform Common Architecture.

Links:
https://eox.at/2021/09/eoxserver-1-0/
https://eoxserver.org/
https://github.com/EOxServer/eoxserver/
https://gitlab.eox.at/vs/vs

Open source geospatial ‘Made in Europe’
UBT C / N109 - Second Floor
17:00
17:00
30min
Türkiye and Syria Earthquakes Mapping Response
Said Turksever

Powerful earthquakes hit southern Turkey and Syria on 6 February 2023, causing thousands of casualties and destroying cities. Geospatial infrastructure is critical for responding to such earthquakes during rescue operations and humanitarian efforts, as well as for planning recovery activities.

Yercizenler coordinated a mapping activation in collaboration with the Humanitarian OpenStreetMap Team to improve the open geodata infrastructure in the earthquake-affected region and support the humanitarian response with mapping.

The Türkiye Earthquakes Mapping Response aims to complete the open map data infrastructure before and after the event in affected areas. The response is structured around the following workstreams: Remote Mapping, Post-disaster Field Data Collection, Global Community Activation and Geo-data Integration.

In this talk, we will discuss how open data and community activation helped save lives after the earthquakes, what challenges we faced and what we learnt during the Türkiye Earthquakes Mapping Response effort.

Open Data
Outdoor Stage
09:30
09:30
30min
How UNICEF is leveraging open-source geospatial solutions to drive better results for children
Jan Burdziej

“Whether it is to know where children are, what access they have to facilities (education, health, transportation), what environment they live in (water, air), where risks exist (hazards, diseases), where events happen or where services and resources are available; most of the operational data used by UNICEF is geospatial” (UNICEF Geospatial Roadmap, 2019). At UNICEF we realize that we need to leverage geospatial information to enhance decision-making and optimize resource allocation and drive effective interventions. Geo-enabling UNICEF’s data, systems and processes aims at transforming data into easily accessible, readily available, and actionable geospatial information that can address key questions, such as: “How many children have been affected by a flood?”, “Where children have limited access to schools and limited access to health services?”. This information is critical to support decision-making to ultimately drive better results for children.
UNICEF has recently adopted a hybrid corporate geospatial architecture, which aims at bringing together the advantages of both the commercial and open-source GIS worlds. This presentation discusses how UNICEF is leveraging modern open-source geospatial solutions to address some of its key data-management challenges.
Specifically, two open-source geospatial projects developed by UNICEF will be showcased and discussed: GeoRepo and GeoSight. GeoRepo is a web-based system that will help us store, manage and share a commonly agreed, versioned, official set of administrative boundaries and other core geospatial datasets. It will help us ensure that geospatial data is used consistently across all internal systems and will also strengthen our interoperability with external systems. GeoSight, on the other hand, is a web geospatial data platform developed by UNICEF to bridge the gap between web mapping systems and the Business Intelligence / data analytical platforms. GeoSight is specifically designed to simplify the process of harmonizing data from multiple data sources. It also allows users to easily create online maps for visualizing multiple indicators at subnational levels (e.g. at the province or district level). Both platforms are built using Django and React and use modern open-source geospatial standards and libraries, such as MapLibre and vector tiles.

Use cases & applications
Outdoor Stage
10:00
10:00
30min
Coffee-break
Outdoor Stage
10:00
30min
Coffee-break
Lumbardhi
10:00
30min
Coffee-break
Drini
10:00
30min
Coffee-break
Mirusha
10:00
30min
Coffee-break
UBT E / N209 - Floor 3
10:00
30min
Coffee-break
UBT F / N212 - Floor 3
10:00
30min
Coffee-break
UBT C / N109 - Second Floor
10:00
30min
Coffee-break
UBT C / N110 - Second Floor
10:00
30min
Coffee-break
UBT C / N111 - Second Floor
10:00
30min
Coffee-break
UBT D / N112 - Second Floor
10:00
30min
Coffee-break
UBT D / N113 - Second Floor
10:00
30min
Coffee-break
UBT D / N115 - Second Floor
10:00
210min
Teaching GI with FOSS tools: an update for higher education teachers and trainers at public organizations
Lucas De Oto

In recent years, the combination of technological advances and spatial data abundance has revolutionised the field of geoinformation (GI). New methodologies and techniques established in other fields of knowledge proved to be relevant for keeping up to date and fully benefiting from all this technological richness. Consequently, new areas of knowledge have emerged, such as geospatial artificial intelligence (GeoAI) and big geodata. Simultaneously, the formulation in 2015 of the 2030 Agenda for Sustainable Development and its multiple goals by the United Nations (UN) imposes a specific framework for the application and further development of geoinformation science. Furthermore, the recent COVID-19 pandemic accelerated the transition towards different modalities of distance education, as well as the arrival of multiple digital instruments to fulfil this purpose. At the same time, the use of free and open-source software (FOSS) keeps gaining momentum, standing out as the best technological solution for attaining sustainable and democratic approaches to geospatial problems.

All these factors have profoundly impacted the way of teaching with GI and about GI. Both the technical and the socio-emotional skills required to successfully perform as a GI scientist in the near future are changing. And so are the means to learn those skills. As a result, the training curriculum for educators in this field is being revised and updated. In this presentation, we will first discuss the challenges currently faced by educators in the field of GI and explore new didactic and pedagogical proposals to overcome them. We will analyse how teaching GI science in academic settings (e.g., high school, university) differs from teaching it to staff members at public organisations. We will then explore how to successfully implement the ADDIE (i.e., Analyse-Design-Develop-Implement-Evaluate) model of instructional design in both settings. Finally, we will explore together in detail a recently designed refresher course on geodata analysis and dissemination that combines state-of-the-art pedagogical approaches and the use of FOSS4G.

Schedule

Date: June 30th, 2023

  • 09:00 – 10:15 Welcome and introduction
    Presentation I: Teaching GI with FOSS today: challenges and opportunities
  • 10:15 – 10:30 Break
  • 10:30 – 11:30 Presentation II: Differences between academic and organisational training. How to properly design education for different educational settings?
  • 11:30 – 11:40 Break
  • 11:40 – 12:00 Case study: Online refresher course “Geo-web application building with FOSS”
  • 12:00 – 12:30 Plenary discussion
  • Closing

Register here to participate in the event.

Education
UBT D / N112 - Second Floor
10:30
10:30
30min
GeoNode UI: Deep Dive on MapStore and Django integration for GeoNode
Giovanni Allegri, Stefano Bovio

GeoNode is a Web Spatial Content Management System that uses the Django Python web framework. MapStore is an open source WebGIS product and highly customizable framework that is used as the default user interface in GeoNode to visualize the catalog, map viewer and geospatial applications.

This presentation provides an overview of the integration of the MapStore framework inside the GeoNode ecosystem and the main differences with the MapStore product, along with guidelines and references to resources for its customization and the development of custom functionality.

State of software
Drini
10:30
30min
Human-wildlife conflict and road collisions with ungulates. A risk analysis and design solutions in Trentino, Italy
Marco Ciolli

Among the human-wildlife conflicts, wildlife vehicle collisions are one of the most evident to the general public. Human-wildlife conflicts can be defined as the breaking of a relationship of coexistence, which occurs when the needs or the behavior of a species negatively affect human activity. Among the causes are: land use change, especially urbanization, with the construction of infrastructures that interrupt natural habitats; the conversion of forests to agriculture and pastures, which leads to damage to crops and predation of livestock; and the increased presence of people in wilderness areas for recreational activities (Corradini et al. 2021). Often these conflicts lead to the killing and persecution of species, thus compromising the conservation of the species itself. This problem is globally widespread, both in those countries where land use change already occurred in historical times and where it is presently occurring at a dramatic pace. In the last decades there was actually a recovery of large mammal populations in Europe, due to legal protection and the abandonment of traditional agriculture (Chapron et al 2014). The increased number of large mammals has led to increased human-wildlife interaction, including roadkill and car accidents.
This study investigates wildlife vehicle collisions in the territory of the Italian Autonomous Province of Trento (PAT), with 541,692 inhabitants and extending over 6,207 km2, a mountainous area with a significant summer and winter tourist presence. The species taken into account are roe deer (Capreolus capreolus) and red deer (Cervus elaphus), the most common species involved in road accidents in the area. In the last 10 years an average of 700 annual collisions was registered; the animals are often killed and the vehicles heavily damaged, leading to injuries and occasionally to human fatalities. A solution to the problem is becoming urgent in a highly anthropized environment like the Alpine one.
Different measures can be adopted to reduce the risks of collisions, e.g. underpasses, overpasses, viaducts and fly-overs, fences, animal detection systems, warning signs, nets, or also a combination of the former (van der Ree et al 2015).
The main purpose of this work was to use FOSS4G to identify the road sections characterized by a greater number of collisions and to propose and design practical solutions focusing mitigation efforts on these hotspots. The practical solutions were chosen among those more appropriate to each specific situation and when a specific project is proposed it includes the costs to realize it.
Initially the work focused on the geostatistical study of roads collisions with ungulates to determine their trends in space and time. The road sections characterized by a greater number of accidents were identified with accuracy and reliability, by combining GIS geostatistical analysis and a detailed study of the morphology, land cover and other boundary conditions.
QGIS 3.16.6 was used to import data and standardize the data set, as well as to process data and produce heat maps, analysis and most of the final maps. GRASS GIS 8.2 was used to perform data integrity check fixing data errors and resample or recombine data from different sources.
A large number of different environmental covariates, such as forest coverage, ecological corridors, roads and infrastructures, were collected, while others (e.g. contours and slope) were derived from the Digital Elevation Model (DEM) and the Digital Terrain Model (DTM). Data about ungulate collisions were provided by the Wildlife Service of the Autonomous Province of Trento.
Since January 2000, every road collision caused by ungulates reported by the Forest Service, the Hunters Association or the Road Service has been stored in a geodatabase. The database records the date, the species of the affected ungulate, the sex, an indication of the age and the geographical coordinates. The last update used for this study is from August 2022, and the datum is ETRS89, frame ETRF2000, projection UTM zone 32 N.
The ungulates are active mainly at dusk and dawn, when the greatest number of collisions is also recorded (Mayer et al. 2021). Speed limits on the roads in the hotspots are often disregarded: on a straight tract of state road 47 in Valsugana, where the maximum speed is set at 90 km/h, about 60% of the vehicles exceed the limit, with a daily average of more than 19,000 vehicles.
Once the areas of intervention were identified with QGIS, we carried out on-site inspections to define the best solutions to be adopted in each specific case. GIS processing proved to be extremely informative both in the preliminary design phase and in the final design phase, in which the works and interventions were defined in detail.
The five hotspots chosen for intervention were located along four state roads and one provincial road. For each case a specific analysis was carried out, and a series of tailored interventions (underpasses, overpasses, viaducts and fly-overs, fences, road tunnels) and works aimed at mitigating road accidents with ungulates were identified. Each site was different and posed different construction problems, and for each site we developed a specific solution. In addition, a first rough cost estimate was developed to determine the order of magnitude of the investment required to implement the recommended interventions.
The proposed projects may create a guideline for the future politics of the provincial government.
Moreover, with the aim of creating a tool for planning interventions at the provincial scale, a new map was created classifying the road sections into 5 categories based on the number of road accidents with ungulates.
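In GeoPandas terms, this classification step can be sketched roughly as follows; the layer names, the 50 m search distance and the equal-width binning are assumptions made for illustration rather than the study's exact method:

    import geopandas as gpd
    import pandas as pd

    # ETRS89 / UTM zone 32N, matching the datum of the collision database.
    roads = gpd.read_file("roads.gpkg").to_crs(epsg=25832)
    collisions = gpd.read_file("ungulate_collisions.gpkg").to_crs(epsg=25832)

    # Attach each collision to the nearest road section within 50 m.
    joined = gpd.sjoin_nearest(collisions, roads, max_distance=50)
    counts = joined.groupby("index_right").size()

    roads["collisions"] = counts.reindex(roads.index, fill_value=0)
    # Five ordinal classes, as in the provincial-scale intervention map.
    roads["risk_class"] = pd.cut(roads["collisions"], bins=5, labels=range(1, 6))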
Sharing the capability of FOSS4G to improve the procedures for designing interventions that reduce collisions can inspire other researchers and technicians to experiment with these solutions when planning the positioning of crossing structures, thus helping to mitigate human-wildlife conflict (HWC).

Academic Track
UBT E / N209 - Floor 3
10:30
30min
Monitoring Inland water bodies
Aman Bagrecha

This talk describes the creation of a water quantification dataset for the entire world. Tracking changes in water bodies over time helps in taking timely action to combat droughts and floods. The tools used to build this dataset are all free and open source (PostGIS, GDAL, GeoPandas, SciPy), and the dataset is built on top of data from OpenStreetMap.
The dataset is updated every day with new measurements of lake water extent across the globe. The solution to detect and track water bodies involves fetching satellite data using a STAC API, pre-processing it to remove cloud cover and invalid pixels, identifying water bodies using a band ratio, converting the result to vector, and applying post-processing filters to avoid false-positive detections, to finally serve it through an API. This solution has allowed us to track and quantify changes in a lake's water extent over time with high accuracy.
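A heavily condensed sketch of that pipeline is given below; the STAC endpoint, collection, band names and NDWI threshold are stand-ins for illustration, not the project's actual configuration:

    import rasterio
    from rasterio.features import shapes
    from pystac_client import Client

    # Search a public STAC API for one Sentinel-2 scene (stand-in config).
    catalog = Client.open("https://earth-search.aws.element84.com/v1")
    items = catalog.search(
        collections=["sentinel-2-l2a"],
        bbox=[77.4, 12.8, 77.8, 13.2],
        datetime="2023-01-01/2023-01-31",
        max_items=1,
    ).item_collection()

    item = list(items)[0]
    with rasterio.open(item.assets["green"].href) as g, \
         rasterio.open(item.assets["nir"].href) as n:
        green = g.read(1).astype("float32")
        nir = n.read(1).astype("float32")
        transform = g.transform

    # Band ratio (NDWI): positive values indicate open water.
    water = ((green - nir) / (green + nir + 1e-6) > 0.0).astype("uint8")

    # Vectorise the water mask into polygons for the lake database.
    polygons = [geom for geom, val in shapes(water, transform=transform) if val == 1]
    print(f"{len(polygons)} water polygons detected")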

Use cases & applications
UBT C / N109 - Second Floor
10:30
30min
Running OGC API - Features as Smart Contract
Jan Schulze Althoff

Motivation

New, evolving technologies allow hosting data and program code (smart contracts) on distributed blockchains. Besides other aspects, like validating geospatial data and their transactions, this technology might also be interesting for building distributed services for ‘classical’ spatial data infrastructures.

Prototype

During the last months a prototype was developed to test the capabilities of smart contracts for distributing spatial data using the OGC API – Features specification, and to gain some experience in its design, typical workflows, limitations, etc.
The prototype is designed as a smart contract on the ‘Internet Computer (IC)’ blockchain (see https://internetcomputer.org/). This makes it possible to store the program code and the spatial data in one container on the blockchain and execute it on demand.
To simplify the test, a fixed workflow is implemented.

1) Data providers upload a spatial dataset (currently glider GNSS tracks in the IGC format) on a simple webpage running within the container

2) The spatial data is persisted on the IC blockchain

3) Users access the data via OGC API – Features with their browser (html representation) or with their GIS
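Step 3 needs nothing blockchain-specific on the client side; any HTTP client that speaks OGC API - Features will do. Below is a minimal sketch against the prototype's entrypoint (listed under Links), assuming the commonly implemented f=json query parameter is honoured:

    import requests

    base = "https://mtlom-hiaaa-aaaah-abtkq-cai.raw.ic0.app"

    # Discover the published collections, then fetch a few features of each.
    collections = requests.get(f"{base}/collections", params={"f": "json"}).json()
    for coll in collections["collections"]:
        items = requests.get(
            f"{base}/collections/{coll['id']}/items",
            params={"f": "json", "limit": 5},
        ).json()
        print(coll["id"], "->", len(items.get("features", [])), "features")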

Presentation

In the presentation, I would like to share some experiences of developing geospatial interfaces in a blockchain environment and show the current state of the prototype. In particular, coding in the programming language ‘Motoko’, the exposed interfaces, and the distribution on the blockchain with its costs will be addressed.
I would further like to discuss use cases for the approach, e.g. simplified data distribution for smaller data providers, and potential extensions, like introducing user management, adding metadata or integrating dynamic data sources.

Links
- Entrypoint (OGC API Features): https://mtlom-hiaaa-aaaah-abtkq-cai.raw.ic0.app/
- Github page (mainly Motoko code): https://github.com/janschu/igc_tools

Related:
- Internetcomputer (IC): https://internetcomputer.org/
- IGC - International Gliding Commission – GNSS Flight Recorders Spec: https://www.fai.org/sites/default/files/igc_fr_specification_2020-11-25_with_al6.pdf
- OGC API – Features: https://ogcapi.ogc.org/features/

Open source geospatial ‘Made in Europe’
Mirusha
10:30
30min
The role of FOSS in mining sector in Malawi
Ruth Mumba

The extractive sector in Malawi has been marked as one of the development enablers for achieving the 2063 Agenda established by African nations. As the mining sector continues to develop, open-source software such as QGIS has been a vital and cost-effective tool for monitoring mining activities, for the purposes of tracking the effects of mining on the environment and human populations and encouraging accountability from stakeholders in relation to Malawi government regulations. Open-source software and data have also been vital in resolving compensation issues in communities located in mining areas, and they allow geoscientists to give needed advice to affected stakeholders.

Use cases & applications
UBT C / N110 - Second Floor
10:30
30min
What in-game maps can teach us
Ilya Zverev

Let's look away from familiar continents and comfortable symbology. When you are making an entirely new world, how do you map it? When any choice can be made from scratch, why do game makers sometimes use common cartographic paradigms, and why do they sometimes circumvent them? And given we are at a GIS conference, what can we learn from imaginary maps that can improve our real-world work? Let's connect a Nintendo Switch to a projector and dive into games!

Lumbardhi
10:30
30min
Why FOSS4G Needs a Global Open Data Platform
Christopher Brown

In recent years, the software industry has witnessed a remarkable trend away from traditional standalone applications and towards online multiplayer platforms that offer users a more integrated and collaborative experience.

As this trend continues, it is becoming increasingly important for open source tools to stay competitive by providing seamless access to data and connectivity.

In this talk I will introduce mapstack, outline our mission to bring all of the world’s open location data together in one place, and share my thoughts on how such an unprecedented open resource will benefit the wider FOSS4G ecosystem.

Open Data
Outdoor Stage
11:00
11:00
30min
B6, Diagonal's open source geospatial analysis engine
Andrew Eland

Diagonal is a steward-owned data science consultancy working on projects in the built environment. We build interactive tools to help people understand the tradeoffs inherent in their plans to evolve cities. Our tools are powered by B6, an in-memory geospatial analysis engine we built to work with large datasets describing the built environment. We typically use it with OpenStreetMap and open government data. To enable others to repeat our analyses, we recently released B6 as open source. In this talk, we'll give an overview of B6, including how it's implemented and how we use it in our commercial work.

Open source geospatial ‘Made in Europe’
Mirusha
11:00
30min
Creating The Red Book of Disaster Response for FOSS4G Community
Orkut Murat YILMAZ

As well-trained and experienced members of the free software community in Turkey, we were caught off guard when the earthquakes happened on February 6, 2023. We started mapping campaigns with HOT, we aggregated different data sources on a GeoServer installation, and we did several visualizations in QGIS, but we always felt like something was missing.

If we had a guideline of disaster response for free software communities, we would feel better at the beginning.

This session's aim is, to prepare a dynamic guideline of disaster response actions for geospatial communities, focused on free software and open data.

Use cases & applications
UBT C / N110 - Second Floor
11:00
30min
Implementing Copernicus services at the Norwegian Water and Energy Directorate with Airflow and actinia
stbl

At the Norwegian Water and Energy Directorate (NVE), the OSGeo Community project actinia was introduced together with the Open Source Apache Airflow software as a platform for delivering operational Copernicus services at national scale.
In the presentation, we will illustrate how Airflow and actinia work together and present current and future applications operationalized on the platform.

Those applications currently cover:
- Avalanches
- Flooding
- Snow cover
- Lake ice

More services related to NVE's areas of responsibility are being investigated, like landslides, slush flows, glacier lake outburst floods, or specific land cover changes.
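Schematically, the coupling can be pictured as an Airflow DAG whose tasks submit actinia process chains over REST; the sketch below assumes a recent Airflow 2 release, and the URL, credentials and the one-module chain are placeholders standing in for NVE's actual workflows:

    from datetime import datetime

    import requests
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_actinia_chain():
        # Minimal process chain in actinia's JSON format (placeholder module).
        chain = {
            "version": "1",
            "list": [
                {"id": "stats", "module": "r.univar",
                 "inputs": [{"param": "map", "value": "snow_cover"}]},
            ],
        }
        resp = requests.post(
            "https://actinia.example.org/api/v3/locations/norway/processing_async",
            json=chain, auth=("user", "password"), timeout=60,
        )
        resp.raise_for_status()

    with DAG(
        dag_id="copernicus_snow_cover",
        start_date=datetime(2023, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="submit_to_actinia",
                       python_callable=run_actinia_chain)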

Use cases & applications
UBT C / N109 - Second Floor
11:00
30min
MapStore real world case study: the hybrid infrastructure of the City of Genova
Stefano Bovio

Born in 2016 thanks to the funding of the National Operational Program for Metropolitan Cities (PON METRO 2014-2020), the current Spatial Data Infrastructure (SDI) of the city of Genova is a hybrid infrastructure, where open source components and technologies are merged with proprietary ones (such as the Oracle Database) in a well-designed platform compliant with all national guidelines (promoted by AgID - Agenzia per l'Italia Digitale) and international standards.
To support the Geoportal initiative, the city of Genova has collaborated with GeoSolutions, a company closely involved in the most important open source projects in the geospatial field worldwide, with the aim of providing the necessary support for the whole SDI stack in terms of deployment and development, but also staff training, to make the administration as autonomous as possible in the maintenance of the overall system.
The city of Genova Geoportal as well as the wider Geospatial Infrastructure are both reachable online.
A simple and at the same time robust WebGIS based on the open source MapStore software is provided, including both advanced GIS functionalities and the most common geospatial tools, like:

  • Geospatial data search via OGC Web Services and Nominatim
  • 2D and 3D visualization of geospatial data using a map agnostic engine supporting OpenLayers, Leaflet and Cesium for the 3D
  • Editing and Styling of geospatial layers
  • Download functions of geospatial data working on top of OGC services
  • And many more

The aim is to provide ready-to-use tools for all users (both citizens and analysts employed in the public administration) by leveraging the maturity of open source software as well as the simplicity of integration with the pre-existing COTS software, in order to maximize the reuse of the existing infrastructure and minimize the need for customizations and the possible use of commercial support, even for educational purposes.
Many cross-cutting projects usually gravitate around the SDI of the Public Administration and its Geoportal. To date, more than 300 geospatial layers are available in the Geoportal, where they can be viewed and consulted within preconfigured MapStore maps, dashboards and geostories and/or used through geospatial services (such as WMS, WMTS, WFS, WCS and CSW) developed according to international standards (OGC - Open Geospatial Consortium) and exposed through GeoServer and GeoNetwork, with a fine-grained security tier, GeoFence, to manage authorizations on geospatial data.

State of software
Drini
11:00
30min
Methods and Evaluation in the Historical Mapping of Cities
Michael Page

Through a (re)mapping and spatial modeling of a city’s past, we can build data-rich exploratory platforms to examine urban histories and engage both scholars and the public. Geospatial technologies can be applied to extract data from archives and other data sources to build historical data models, geodatabases, and geocoders that subsequently enable the development of web-based dynamic map interfaces connected to rich digital content. This paper outlines a project within a larger consortium of institutions and researchers that focuses on methods in open data and open-source development of the historical mapping of cities.

OpenWorld Atlanta (OWA) is an example of the possibilities of such a web map platform. OWA seeks to provide public access to historical information about Atlanta, Georgia (United States) during the late 19th century and early 20th century through engaging 3D and dynamic interfaces. Drawing upon historical maps, city directories, archival collections, newspapers, and census data, projects like OWA allow researchers to analyze spatially grounded questions.

Recent effort on this project focuses on the 1920s, a dynamic period in the city’s history that saw the rapid expansion of the urban footprint driven by an increase in population and public infrastructure. Between 1870 and 1940, the city was shaped by its primary modes of transportation, heavy rail, and the electric streetcar. By the 1940s, the commuter automobile began transforming Atlanta into the sprawling landscape it is today. These developments happened under racist “Jim Crow” laws, and as such, the project thus allows new avenues into investigating the long and contentious histories of racial discrimination and the Civil Rights Movement.

This paper addresses the development of OWA, which was built on open-source methods and philosophy. The design of its interface and features, including the retrieval of spatial data and digital objects from server resources, the function of metadata, the evaluation of the project in usability studies, and the building of consortia around these methods are explored. Further, it discusses the interdisciplinary approach of its research and development team and the engagement of students in the process, from coding and building to evaluation. Being built with Leaflet and other open tooling, OWA is designed to pull spatial data and map overlays organized and stored on Emory's instance of GeoServer, following standards of the Open Geospatial Consortium (OGC).

Furthermore, another vital component is the structure of the information, data, and digital objects that are stored on an instance of Omeka which is a free, open-source content management system (CMS) designed for the management and dissemination of digital collections and exhibitions. It is primarily used by archives, museums, libraries, and other cultural heritage institutions to create and manage their online collections and exhibitions. Omeka allows student researchers and assistants to prepare and upload non-spatial content that will be populated as features into the platform. With Omeka, users can create and manage items such as images, documents, and audio and video files, as well as add metadata to describe these items and make them searchable.

Metadata plays an especially significant role in the function of the OWA platform. Geospatial features are then linked to records and the corresponding pieces of information, data, and digital objects, including images and 3D models. A modified Dublin Core schema was utilized in Omeka, with categories designed to better fit the geospatial and historical data collected. As an example, the fields for the buildings of a data layer include architects, date built/demolished, racial classification of residents or businesses, heads of households (from census data and city directories), etc. To populate these fields, research teams composed of graduate and undergraduate students were assembled. Engaging with faculty and staff, the students collect historical information from newspapers, archives, and online resources and enter it into the database.

The spatial data in OWA comprises many vector layers including administrative boundaries, roads, rail lines, buildings, and more. The design includes multiple avenues for exploration based on specific years and special themes. A key feature is the buildings layer, which was populated with historical information including people, race, entity name, addresses, and more from the building of historical geocoders. The 1928 historical geocoder is complete and was used to populate the 1928 map layer; the 1878 geocoder is currently in production; and for the years surrounding 1928 we are using machine learning to produce geocoders for 1927, 1929, and 1930.

Another important aspect is the recognition of the necessity of usability and user experience studies. Researchers at Yonsei and Emory Universities have collaborated to evaluate the ease of use and overall user experience of the platform. The usability study's goal is to find areas of improvement in the user interface and user flow and to gather feedback on the product's design and functionality. A primary goal is to serve as an example of, and future framework for, usability studies centered on diverse user groups (insider vs. outsider, academic/public, etc.). Test participants were grouped by level of familiarity with Atlanta to capture the diversity of users of the platform. This investigation focused on analyzing and evaluating the user experience of exploring data and content, conducting analyses, and contributing via feedback or directly to the resource. Our key questions for these groups therefore sought to address how we can better design interactive web maps of city histories to accommodate diverse user groups.

The authors of this paper include collaborators from Emory University, Yonsei University, Stanford University, and The University of Arkansas. Further, other collaborators include The University of São Paulo (USP), a public research university located in São Paulo, Brazil and Kaziranga University, a private university located in the state of Assam, India both of which are engaged in similar or related projects. The collaborators of these projects seek to share ideas and methods surrounding the historical mapping of cities.

Academic Track
UBT E / N209 - Floor 3
11:00
30min
Open Data for Geospatial: Opportunities and Challenges
Dimple Jain

Open data and geospatial technology have the potential to revolutionize decision-making processes across a variety of sectors, including urban planning, disaster response, environmental management, and more. However, the use of open data in the geospatial domain poses its own set of challenges, including data quality, reliability, and standardization concerns. Managing, maintaining, and updating large datasets can also be resource-intensive, posing a challenge for organizations and communities that rely on open data.

This talk will explore the opportunities and challenges of using open data in the context of geospatial technology. I will begin by discussing the potential benefits of open data, including increased transparency, improved collaboration, and the ability to make more informed decisions. I will then delve into the key challenges of using open data in geospatial contexts, including issues related to data quality and reliability, standardization, and the sheer volume of data. We will explore strategies for managing and maintaining large datasets, such as crowdsourcing and automated data processing, and discuss best practices for ensuring data quality and reliability.

This talk is relevant to anyone interested in the intersection of open data and geospatial technology, including data scientists, GIS professionals, policymakers, and community leaders. Attendees will come away with a deeper understanding of the opportunities and challenges of using open data in geospatial contexts and gain practical insights on how to leverage this data to drive social and economic impact. By the end of the talk, attendees will be equipped with the knowledge and tools they need to make the most of open data in the geospatial domain.

Open Data
Outdoor Stage
11:00
30min
ST_LUCAS reference data for online automated land cover mapping
Martin Landa, Ondřej Pešek

ST_LUCAS is an open-source system designed to provide harmonized space-time aggregated LUCAS data. LUCAS (Land Use and Coverage Area frame Survey) is an activity managed by Eurostat that has performed in-situ surveys (points on a 2x2 km grid) across Europe every three years since 2006. For each LUCAS point, the land cover and land use classes are examined, five photos are taken, and various agro-environmental attributes are collected. Eurostat provides the data as plain CSV files, and the LUCAS nomenclature changes with each survey year: some attributes have been removed, added or renamed.

ST_LUCAS was created with the goal of providing harmonized (each LUCAS survey is translated into a common nomenclature) and space-time aggregated (for each LUCAS point, a single location and a set of harmonized attributes for each survey year are provided) data. The ST_LUCAS system offers analysis-ready data through a Python API and a QGIS plugin (“ST_LUCAS Download Manager”), which lowers the barrier for a wider audience to use the data. Users may easily access land cover/use information about 1,350,847 points covering 28 EU countries, surveyed by Eurostat from 2006 to 2018. LUCAS points are retrieved from the ST_LUCAS system based on specified spatial, temporal, attribute, and thematic filters. The Python API and QGIS plugin also allow retrieving photos (one facing photo and four landscape photos in the cardinal compass directions) for each LUCAS point. Additionally, two analytical functions are available: user-defined aggregation of LUCAS land cover classes and translation of the LUCAS nomenclature into other nomenclatures.

See ST_LUCAS website https://geoforall.fsv.cvut.cz/st_lucas/ for detailed information.
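
As a rough illustration of the Python API, here is a sketch only: the class and method names below follow the ST_LUCAS documentation as remembered and should be verified against the project website before use.

    # Sketch under assumptions: all names quoted from memory of the ST_LUCAS docs
    from st_lucas import LucasRequest, LucasIO

    request = LucasRequest()
    request.countries = ['CZ']        # spatial filter: country of interest
    request.years = [2015, 2018]      # temporal filter: survey years

    lucasio = LucasIO()
    lucasio.download(request)         # query the ST_LUCAS service

    print(lucasio.num_of_features(), 'points retrieved')  # method name assumed
    lucasio.to_gpkg('lucas_cz.gpkg')                       # export helper, name assumed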

Open Data
UBT C / N111 - Second Floor
11:00
30min
Serving earth observation data with GeoServer: COG, STAC, OpenSearch and more...
Andrea Aime

Never before have we had such a rich collection of satellite imagery available to both companies and the general public. With missions such as Landsat 8 and the Sentinels, the explosion of cubesats, and the free availability of worldwide data from the European Copernicus programme and from drones, a veritable flood of data is available for everyday usage.
Managing, locating and displaying such a large volume of satellite images can be challenging. Join this presentation to learn how GeoServer can help with that job, with real world examples (a request sketch follows the list below), including:

  • Indexing and locating images using The OpenSearch for EO and STAC protocols
  • Managing large volumes of satellite images, in an efficient and cost effective way, using Cloud Optimized GeoTIFFs.
  • Visualize mosaics of images, creating composites with the right set of views (filtering), in the desired stacking order (color on top, most recent on top, least cloudy on top, your choice)
  • Perform both small and large extractions of imagery using the WCS and WPS protocols
  • Generate and view time based animations of the above mosaics, in a period of interest
  • Perform band algebra operations using Jiffle
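
As a taste of the mosaic filtering above, a time-restricted WMS GetMap request needs nothing more than HTTP; the server URL and layer name below are placeholders:

    import requests

    # Placeholder GeoServer endpoint and mosaic layer with a time dimension enabled
    params = {
        'service': 'WMS',
        'version': '1.3.0',
        'request': 'GetMap',
        'layers': 'eo:sentinel2_mosaic',      # hypothetical ImageMosaic layer
        'crs': 'EPSG:4326',
        'bbox': '40.0,19.0,43.0,21.5',        # WMS 1.3.0 puts latitude first for EPSG:4326
        'width': '768',
        'height': '512',
        'format': 'image/png',
        'time': '2023-01-01/2023-06-30',      # restrict the mosaic to a period of interest
    }
    resp = requests.get('https://example.org/geoserver/wms', params=params, timeout=60)
    resp.raise_for_status()
    with open('mosaic.png', 'wb') as f:
        f.write(resp.content)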

Attend this talk to get a good update on the latest GeoServer capabilities in the Earth Observation field.

Use cases & applications
UBT F / N212 - Floor 3
11:00
30min
pycsw project status 2023
Tom Kralidis, Angelos Tzotsos

pycsw is an OGC CSW server implementation written in Python and an official OSGeo project. pycsw implements clause 10 (the HTTP protocol binding, Catalogue Services for the Web, CSW) of the OpenGIS Catalogue Service Implementation Specification, versions 3.0.0 and 2.0.2. pycsw allows for the publishing and discovery of geospatial metadata, providing a standards-based metadata and catalogue component for spatial data infrastructures. The project is certified OGC Compliant and is an OGC Reference Implementation.

The project currently powers numerous high-profile catalogues such as IOOS, NGDS, NOAA, US Department of State, US Department of the Interior, geodata.gov.gr, Met Norway and WMO WOUDC. This session starts with a status report on the project, followed by an open question-and-answer session giving users a chance to interact with members of the pycsw project team. The session will cover how the project PSC operates, the current project roadmap, and recent enhancements focused on ESA's EOEPCA, the Open Science Data Catalogue and OGC API - Records.
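
As an illustration, a pycsw endpoint can be searched from Python with OWSLib (the catalogue URL below is a placeholder):

    from owslib.csw import CatalogueServiceWeb
    from owslib.fes import PropertyIsLike

    # Placeholder endpoint: any CSW 2.0.2 service published by pycsw will do
    csw = CatalogueServiceWeb('https://example.org/csw')

    # Full-text search across all queryable metadata fields
    query = PropertyIsLike('csw:AnyText', '%flood%')
    csw.getrecords2(constraints=[query], maxrecords=10)

    for identifier, record in csw.records.items():
        print(identifier, '-', record.title)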

State of software
Lumbardhi
11:30
11:30
30min
Agroforestry in the Alas Mertajati of Bali, Indonesia. A case study in applying AI and GIS to sustainable small-scale farming practices.
marc böhlen, Rajif Iryadi, Jianqiao Liu

Small-scale food production has in the past not been a priority for AI-supported analysis of satellite imagery, mostly due to the limited availability of satellite imagery with sufficient spatial and spectral resolution. Additionally, small-scale food producers might find it challenging to articulate their needs and might not recognize any added benefit in new analysis approaches.

Our case study, situated in the geographically and politically complex Alas Mertajati in the highlands of Bali, demonstrates the opportunities of applying satellite assets and machine learning supported classification to the detection of one particular small-scale farming practice, agroforestry. To this end, we are collaborating with the non-governmental organization WISNU as well as BRASTI, a local organization representing the interests of the indigenous Tamblingan.

The practice of agroforestry is widespread across Southeast Asia [5]. Agroforestry plots are 3-dimensional food sources with a variety of species of trees, shrubs, and plants combined into a compact spatial unit. Agroforestry plots are typically small, ranging from fractions of a hectare to a few hectares, and they are often owned by local residents and farmers. Agroforestry plots are tended manually due to the low cost of manual labor, the small sizes of the plots, the lack of appropriate farm automation systems, as well as a desire to maintain traditional, time-tested land use practices. Small-scale agroforestry can produce a continuous and stable source of valuable and essential foods. The assemblage of vegetation with varying root depth also assists in reducing landslides, an increasingly common event during extreme rainfall in the highlands of the Alas Mertajati. As such, agroforestry is a more robust hedge against some forms of climate change than monoculture farm plots [4].

In Bali, agroforestry sites typically contain several major cash crops including clove, coffee, and banana together with a variety of additional trees such as palms, as well as plants and shrubs such as mango, papaya, and taro. Because of the small plot sizes and the diversity of plants contained in agroforestry sites, detection of agroforestry in satellite imagery with statistical approaches is difficult [2].

While other researchers see in the explosion of remote sensing systems an opportunity to explore new algorithms [1], our contribution focuses on the under-valued process of collecting ground truth data, both to improve land cover classification and to engage a local community that will profit from the process.

The latest generation of Planet Labs satellite imagery (SuperDove) offers additional spectral information (Coastal Blue (431-452 nm), Blue (465-515 nm), Green I (513-549 nm), Green (547-583 nm), Yellow (600-620 nm), Red (650-680 nm), Red Edge (697-713 nm), Near-infrared (845-885 nm)) at the same spatial resolution (3.7 m) as the earlier Dove constellation [3]. These new spectral sources offer a new window onto the presence of plants associated with agroforestry practices in the Alas Mertajati (Figure 1). After collecting a first set of reference data, we selected several popular machine learning algorithms (Random Forest, SVM, Neural Networks) to produce classifiers that capture the distribution of agroforestry in the study area to varying degrees. These maps are the first representations of agroforestry in Bali, Indonesia (Figures 2, 3).
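
For illustration, the training step can be sketched with scikit-learn; this stands in for, and is not, the project's actual code:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # X: one row per ground-truth pixel, one column per SuperDove band (8 bands);
    # y: land cover label per pixel (e.g. 1 = agroforestry, 0 = other).
    # Random values stand in for the real training samples here.
    rng = np.random.default_rng(0)
    X = rng.random((500, 8))
    y = rng.integers(0, 2, 500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print('held-out accuracy:', clf.score(X_test, y_test))

    # To classify a whole scene: reshape (rows, cols, 8) to (rows*cols, 8),
    # call clf.predict(), then reshape the labels back to (rows, cols).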

We shared these first-generation maps with members of the Tamblingan (through our project partners) who have long-standing claims to the Alas Mertajati as ancestral lands. Their observations found some of the areas identified as agroforestry to be false, capturing errors and slippages our research team was not aware of.

Together with a local guide, we collected additional ground truth examples in the field. We re-trained the classification systems on the augmented data set to produce updated agroforestry representations. The improvements are twofold. First, as a GIS product: the new map (Figure 4) shows a different distribution of agroforestry sites than the previous results.
Agroforestry seems more widely established within the dominant clove gardens. The previous result had a kappa index of 0.714815, the new result reaches 0.734687, and we expect this to improve further as we fine-tune our classification process.

Second, as a science communication project: in discussion with our partners, it became clear that the first maps were visually difficult to understand. The “natural” coloration of water, forest, and settlements made it difficult for some members without GIS training to read the information. Consequently, we created a new visualization approach that limits the content to a single category. We projected this information onto an infrared image, from the same satellite asset that delivered the data, to produce an ‘unnatural’ image with lower barriers to readability (Figures 5, 6).

We used the same approach to visualize the hydrology of the Alas Mertajati (Figure 7). The hydrology data, sourced from the Indonesian Government's Badan Informasi Geospasial, is superimposed on the same infrared image for visual clarity. However, the data is over 20 years old (Figure 8), and there is no updated hydrology map. As such, the image depicts a water-rich region that has more recently been identified as water-poor, due to changing weather patterns and a rapid rise in water use by an expanding tourism industry. In fact, a first round of data collected in the field during the 2023 rainy season found multiple dry river beds (Figure 9). As a consequence, the Tamblingan, through BRASTI, are establishing this water-poor ground truth by verifying water flow (or the lack thereof) in river beds (Figure 10).

Finally, the project demonstrates the usefulness of our software repository COCKTAIL. Built upon GDAL, ORFEO and QGIS modules, COCKTAIL allows us to invoke popular GIS land cover classification algorithms to classify Planet Labs and Sentinel-2 imagery. Moreover, COCKTAIL collects all settings used to create a classification and saves them, so the products can be easily reproduced. COCKTAIL works with remote storage providers to stash large files on low-cost servers. This is of particular interest when working in resource-constrained environments.
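
The reproducibility pattern described here can be sketched as follows; this illustrates the idea rather than COCKTAIL's actual interface:

    import json
    from datetime import datetime, timezone

    # Record every parameter of a classification run next to its output,
    # so the product can be regenerated later from the same settings.
    settings = {
        'created': datetime.now(timezone.utc).isoformat(),
        'imagery': 'planet_superdove_2023_03.tif',    # placeholder input scene
        'algorithm': 'random_forest',
        'parameters': {'n_estimators': 200, 'max_depth': None},
        'training_data': 'ground_truth_2023.gpkg',    # placeholder reference data
    }
    with open('classification_run.json', 'w') as f:
        json.dump(settings, f, indent=2)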

Academic Track
UBT E / N209 - Floor 3
11:30
30min
Data Governance with Open Metadata Integrating OGC - CSW Services
Ariel Anthieni, Walter Shilman

The Open Metadata platform allows the integration of data and metadata for the management of governance within an organization to integrate different sources, control its publication, its access, standardize the processing and even to be able to analyze the lineage. What we are going to share is the adaptation of one of the data sources to the OGC - CSW service to be able to consume the cataloged metadata transparently in the system.

Open Data
Outdoor Stage
11:30
5min
New lane-detailed OpenDRIVE datasets (HD maps) from Germany openly available
Michael Scholz

Various disciplines such as traffic simulation, driving simulation and applications in autonomous driving require highly detailed road network datasets. OpenDRIVE evolved as an open industry standard for modelling lane-level road networks (HD maps). Acquiring such datasets is very expensive, though, because it must in most cases be done through mobile mapping. We want to introduce to the FOSS4G community two recently and openly published road network datasets from Brunswick (https://doi.org/10.5281/zenodo.7071846) and Wolfsburg (https://doi.org/10.5281/zenodo.7072631). The investment in both datasets was funded by German authorities and exceeded 100,000 euros. We will also give a short appetiser on how to use this data with free and open GIS tools.
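
As a sketch of what using this data with free and open GIS tools can look like, assuming a GDAL build that includes an OpenDRIVE vector driver (see the libOpenDRIVE-based driver presented in the 11:35 slot; it is not part of core GDAL), the datasets can be converted to a GeoPackage from Python:

    # Assumes a GDAL build with an OpenDRIVE vector driver enabled (not in core GDAL)
    from osgeo import gdal

    gdal.UseExceptions()

    # Open the lane-detailed road network and convert it to a GeoPackage
    src = gdal.OpenEx('brunswick.xodr', gdal.OF_VECTOR)   # placeholder file name
    gdal.VectorTranslate('brunswick.gpkg', src, format='GPKG')

    # Inspect the converted layers
    out = gdal.OpenEx('brunswick.gpkg', gdal.OF_VECTOR)
    for i in range(out.GetLayerCount()):
        layer = out.GetLayer(i)
        print(layer.GetName(), layer.GetFeatureCount())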

Open Data
UBT C / N111 - Second Floor
11:30
30min
Open EO and FOSS4G serving Sahelian farmers and herders: lessons from the GARBAL programme
Alex Orenstein

In the West African Sahel, farmers and herders are critically vulnerable to climate shocks and need access to climate information to secure their livelihoods. Herders use data on pasture and water availability to move their livestock, and farmers need weather predictions for planting. While satellite imagery has made much of this information readily accessible to the spatial community, few channels exist to transmit it to farmers and herders. As a result, climate data has become more powerful than ever before, yet remains mostly inaccessible to those who depend on it for their livelihoods.

This talk will share the lessons of the GARBAL programme, an initiative that seeks to bridge this gap. GARBAL is a call center that uses Copernicus Earth Observation imagery and field data to provide farmers & herders with information on pasture, water and markets in Mali, Niger and Burkina Faso. GARBAL was first developed in 2015 and this talk will provide lessons from several years of practice.

The GARBAL interface uses an open-source stack including PostGIS and Mapserver to create a user-friendly interface for call center agents, who then use that interface to answer questions from callers on pasture conditions, market prices and weather forecasts (among others).

The talk will share lessons from the technical and programmatic aspects of the project. The technical side will go over the architecture of the data treatment, demo the interface, talk about successes and failures and show how you can play with the data yourself. The programmatic side focuses more on how the user needs evolved over the years, techniques for translating GIS data into information useful to farmers and herders, operating in areas of active conflict and how EO data fits into existing centuries-old traditional data collection systems in the Sahel.

Use cases & applications
UBT F / N212 - Floor 3
11:30
30min
Securing Your Open Source Geospatial Stack with Single Sign On
Ian Turton

(or what happens when GeoServer and PostGIS meet Active Directory)

This talk will present a case study of how Astun implemented a single sign-on (SSO) system for a large commercial client. The client stored their spatial data in a PostGIS database and provided both direct access to the database via QGIS and access from QGIS via WMS, using GeoServer to carry out the styling and rendering of the data. Staff are divided into 4 teams and then subdivided by end client into small groups. Some of the data in the system is restricted to just the group working on a specific problem for a specific client, other data is shared with the whole team, and some is available to the whole company.

The client brief was to move their on-site system to "the cloud" and to allow staff to connect to the data from anywhere in the world with only one user account and password for access to PostGIS and GeoServer data. Initially, the project planned to leverage the existing corporate Azure Active Directory system to provide the necessary authentication and authorizations. However, early experiments showed that the time between requesting a new group and it appearing on the server was (sometimes) longer than the lifetime of the new group.

Astun provided an open source solution, using Keycloak to handle the user- and administrator-facing frontends, with user data being stored in an OpenLDAP server. It was then possible to make use of the LDAP service to perform authentication and authorization of users against both PostGIS and GeoServer, making sure that data restrictions applying in one were duplicated in the other.

The talk will cover details of the process and look at some of the issues that were encountered during the project.
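
The authentication and group lookup that both PostGIS and GeoServer delegate to the LDAP service can be sketched in a few lines of Python with the ldap3 library; the host name and directory layout below are placeholders:

    from ldap3 import ALL, Connection, Server

    # Placeholder host and directory layout
    server = Server('ldap://ldap.example.org', get_info=ALL)
    user_dn = 'uid=jdoe,ou=people,dc=example,dc=org'

    # A successful bind with the user's own credentials is the authentication test;
    # group lookups under ou=groups then drive authorization in both systems.
    conn = Connection(server, user=user_dn, password='secret')
    if conn.bind():
        conn.search('ou=groups,dc=example,dc=org',
                    '(member=%s)' % user_dn, attributes=['cn'])
        print('groups:', [entry.cn.value for entry in conn.entries])
    else:
        print('bad credentials')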

Use cases & applications
Drini
11:30
30min
The Swiss geometadata catalogue: new version (GeoNetwork V4) & first results of a usability study
Raphaëlle Arnaud

At the end of 2022 the Swiss geodata catalogue, geocat.ch, was migrated to GeoNetwork version 4. A more modern user interface, as well as a more powerful search based on Elasticsearch, makes it easier to search the more than 14,000 geometadata records contained in geocat.ch.

This new version of geocat.ch has been the subject of a usability study focusing on geodata search. Some developments based on the results of this study have been proposed to the GeoNetwork developer community. To discuss these proposals with other users of GeoNetwork, a GeoNetwork user community should be founded, which could also help with further developments of GeoNetwork. In addition, the usability study showed that the search for geodata depends heavily on the quality of the information entered into the catalogue.

The geometadata in geocat.ch come from different organizations (via direct entry or harvesting), have different spatial extents, are multilingual, and some follow different data models. Harmonized entry of the most important information is essential and forms the basis for efficient searches. The Swiss geometadata standard (GM03) is therefore currently under review, with the aim of simplifying and updating the Swiss geometadata model while keeping it based on international standards.

Use cases & applications
UBT C / N109 - Second Floor
11:30
30min
Unlocking the potential of Earth Observation combining Optical and SAR data
Miriam Gonzalez

We are currently living in an era of Earth Observation that maybe 20 years ago we could not have imagined. Petabytes and petabytes of data are being created; having so much data is a good problem to have, but the next question is how we can make sure that the data created is really being used to solve the challenges we are facing on Earth. The Copernicus Programme has given us the opportunity of having open data from a variety of diverse sensors, while at the same time more and more companies are part of the New Space era, in which commercial companies are launching optical and SAR satellites that complement the open data sources.

In my daily job doing partnerships in the industry, I have the chance to work with most of the New Space companies, finding the best ways to promote how we can all take advantage of the data available from both open and commercial sources. Optical data working together with SAR can be a game changer in many future Earth Observation projects.

My presentation will cover how, in the last few years, more options have become available to build products that help solve Earth's challenges by taking advantage of the resources we have in the New Space industry.

AI4EO Challenges & Opportunities
UBT C / N110 - Second Floor
11:30
30min
fAIr - Free and Open Source AI for Humanitarian Mapping
Kshitij Raj Sharma

The name fAIr is derived from the following terms:

f: for freedom and free and open-source software
AI: for Artificial Intelligence
r: for resilience and our responsibility for our communities and the role we play within humanitarian mapping

fAIr is an open AI-assisted mapping service developed by the Humanitarian OpenStreetMap Team (HOT), designed to enhance the efficiency and accuracy of mapping efforts for humanitarian purposes. By utilizing computer vision techniques and open-source AI models, fAIr detects crucial map features from satellite and UAV imagery, starting with buildings. The service fosters collaboration with local communities, enabling them to create and train their own AI models, ensuring relevance and fairness in mapping. Through a constant feedback loop, fAIr progressively improves its computer vision models, contributing to the continuous advancement of humanitarian mapping. This talk will cover our journey and vision for using AI.

Mirusha
11:30
30min
pygeoapi project status 2023
Tom Kralidis, Francesco Bartoli, Angelos Tzotsos, Just van den Broecke

pygeoapi is an OGC API Reference Implementation. Implemented in Python, pygeoapi supports numerous OGC APIs via a core agnostic API, different web frameworks (Flask, Starlette, Django) and a fully integrated OpenAPI capability. Lightweight, easy to deploy and cloud-ready, pygeoapi's architecture facilitates publishing datasets and processes from multiple sources. The project also provides an extensible plugin framework, enabling developers to implement custom data adapters, filters and processes to meet their specific requirements and workflows. pygeoapi also supports the STAC specification in support of static data publishing.
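
Because pygeoapi serves plain JSON over HTTP, a deployment can be explored with any HTTP client; a small sketch against the public demo server follows (the 'obs' collection id is assumed from the demo configuration):

    import requests

    base = 'https://demo.pygeoapi.io/master'   # public pygeoapi demo instance

    # List the collections the server publishes
    collections = requests.get(f'{base}/collections', params={'f': 'json'}).json()
    for collection in collections['collections']:
        print(collection['id'], '-', collection.get('title', ''))

    # Fetch a few features from one collection ('obs' id assumed from the demo config)
    items = requests.get(f'{base}/collections/obs/items',
                         params={'f': 'json', 'limit': 3}).json()
    print(len(items['features']), 'features returned')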

pygeoapi has a significant install base around the world, with numerous projects in academia, government and industry deployments. The project is also an OGC API Reference Implementation, lowering the barrier to publishing geospatial data for all users.

This presentation will provide an update on the current status, latest developments in the project, including new core features and plugins. In addition, the presentation will highlight key projects using pygeoapi for geospatial data discovery, access and visualization.

State of software
Lumbardhi
11:35
11:35
5min
Providing a libOpenDRIVE-based GDAL driver for conversion of lane-detailed road network datasets commonly used in automotive engineering into GIS tools
Michael Scholz

Various applications needing highly detailed road network models emerged within the last decade. Apart from traffic simulations in the context of urban planning, especially the automotive industry plays an important role in geodata consumption for development, testing and validation of autonomous driving functions. In this domain, human-centred driving simulation applications with their realistic 3D virtual environments pose the highest demands on real-world data and lane-level road network models. It is not uncommon for such road network data to not only be mathematically continuously modelled, but also to contain all the necessary topological links and semantic information from traffic-regulating infrastructure – such as signs and traffic lights. Schwab and Kolbe [1] give a compact overview of the requirements of such fields of application and describe different domain-specific road data formats, which are commonly used for such tasks. Of these peculiar road description formats, OpenDRIVE [2] evolved as an open industry standard. In 2017 we proposed a driver for conversion of OpenDRIVE’s continuous road geometry elements into standardized GIS geometries according to OGC Simple Features Access [3] via the free and open-source Geospatial Data Abstraction Library (GDAL) [4]. At the time, this was the first open source conversion tool from OpenDRIVE into more GIS-friendly encodings. Since then, other OpenDRIVE conversion tools have popped up, such as [5], [6], [7], [8]. But none of those allows as comfortable an integration into common GIS tools as our proposed GDAL extension, which supports, for example, simply dragging and dropping an OpenDRIVE dataset into QGIS. We now present a refurbished version of our OpenDRIVE GDAL driver which is based on the novel C++ library libOpenDRIVE. It integrates well into GDAL’s new CMake build process and offers a more convenient starting point for developers and researchers who want to bring OpenDRIVE data easily into context with other geodata such as aerial images, OpenStreetMap or cadastral data. Apart from OpenDRIVE, other specialized road network description formats are crucial to the automotive engineering and research domain. Where Road2Simulation [9] and laneLet2 [10] already come along in GIS-friendly encodings, RoadXML and the NDS Open Lane Model [11] could also profit from such a GDAL-based conversion approach. By bringing the domains of automotive engineering and GIS closer together we hope to stimulate interdisciplinary knowledge transfer and the creation of an interconnected research community.

[1] https://doi.org/10.5194/isprs-annals-iv-4-w8-99-2019
[2] https://www.asam.net/standards/detail/opendrive
[3] https://www.ogc.org/standards/sfa
[4] https://elib.dlr.de/110123
[5] https://doi.org/10.5281/ZENODO.7023152
[6] https://doi.org/10.5281/zenodo.7771708
[7] https://doi.org/10.1109/itsc48978.2021.9564885
[8] https://doi.org/10.5281/zenodo.7702312
[9] https://doi.org/10.5281/ZENODO.3375525
[10] https://doi.org/10.1109/itsc.2018.8569929
[11] https://olm.nds-association.org

State of software
UBT C / N111 - Second Floor
11:40
11:40
5min
Virtual Constellations-as-a-Service and Virtual Image Catalogs
Denis Rykov

Sharing remote sensing assets among multiple tenants is crucial to unlocking the value of new space Earth imaging constellations. In these schemes, a tenant has access to a so-called virtual constellation consisting of dedicated access to a number of assets as well as automated mechanisms to procure additional imagery from other assets. Access to this virtual constellation is mediated through a client-side virtual catalog that looks to the user as if it comes from the tenant's own dedicated assets and is fully interoperable with open standards for cloud-optimized pipelines, such as STAC and COG.
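
A virtual catalog that speaks STAC can be searched with standard open-source clients; a sketch using pystac-client, with a placeholder catalog URL:

    from pystac_client import Client

    # Placeholder endpoint for a tenant's virtual STAC catalog
    catalog = Client.open('https://stac.example.org')

    # Search the tenant's imagery over an area and period of interest
    search = catalog.search(
        bbox=[19.0, 39.6, 21.1, 42.7],          # roughly Albania
        datetime='2023-01-01/2023-06-30',
        max_items=5,
    )
    for item in search.items():
        print(item.id, item.datetime)
        # COG assets can then be streamed directly, e.g. item.assets['visual'].href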

Satellogic Inc., a leader in sub-meter resolution Earth Observation data collection, recently reached a three-year agreement with the Government of Albania to develop a Dedicated Satellite Constellation. This unique program derives from Satellogic's Constellation-as-a-Service model and will provide Albania with responsive satellite imagery capabilities across its sovereign territory. Two satellites, ALBANIA1 and ALBANIA2, were launched in January 2023 to provide imagery for national map generation in support of emergency response, land use management and environmental monitoring of sustainability goals.

To support this government effort we have developed a secure, encrypted end-to-end data platform with continuously updated archival imagery in a dedicated client-side cloud, along with support for open standards such as STAC and COG. We also discuss future directions in terms of the resulting ability to build integrations with external image processing platforms and open source data exploitation projects.

Use cases & applications
UBT C / N111 - Second Floor
12:00
12:00
90min
lunch
UBT C / N110 - Second Floor
12:00
90min
lunch
UBT C / N111 - Second Floor
12:00
90min
lunch
UBT D / N112 - Second Floor
12:00
90min
lunch
UBT D / N113 - Second Floor
12:00
90min
lunch
UBT D / N115 - Second Floor
12:00
30min
Comparing different machine learning options to map bark beetle infestations in the Republic of Croatia
Nikola Kranjčić

This paper presents different approaches to mapping bark-beetle-infested forests in Croatia. Bark beetle infestation presents a threat to forest ecosystems, and the large, hard-to-reach areas involved make mapping infested areas difficult. This paper analyses the machine learning options available in open-source software such as QGIS and SAGA GIS. All options are applied to Copernicus data, namely Sentinel-2 satellite imagery. The machine learning and classification options explored are maximum likelihood, minimum distance, artificial neural network, decision tree, k-nearest neighbour, random forest, support vector machine, spectral angle mapper and normal Bayes. The maximum likelihood algorithm is considered the most accurate classification scheme, with high precision and accuracy, and is therefore widely used for classifying remotely sensed data.
Maximum likelihood classification is a method for determining the most probable class given a known set of class distributions. An assumption of normality is made for the training samples. During classification, each unclassified pixel is assigned to the class for which its relative probability (likelihood) under that category’s probability density function is highest.
Minimum distance classification is probably the oldest and simplest approach to pattern recognition, namely template matching. In template matching we choose a class or pattern to be recognized, such as healthy vegetation. An unknown pattern is then classified into the pattern class whose template fits it best; an unknown distribution is classified into the class whose distribution function is nearest (minimum distance) to it in terms of some predetermined distance measure.
A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences, including the outcomes of random events, resource costs, and benefits. It is a way of representing an algorithm that contains only conditional control statements. Decision trees are commonly used in operations research, particularly in decision analysis, to identify the strategy most likely to achieve a goal, but they are also a popular tool in machine learning.
K-nearest neighbour is a simple algorithm that stores all the available cases and classifies new data or cases based on a similarity measure. It is mostly used to classify a data point based on how its neighbours are classified.
Random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees; for regression tasks, the mean or average prediction of the individual trees is returned.
Support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyse data for classification and regression analysis. SVMs are among the most robust prediction methods, being based on statistical learning frameworks. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier.
The spectral angle mapper is a spectral classifier that determines the spectral similarity between image spectra and reference spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands used. Small angles between two spectra indicate high similarity; large angles indicate low similarity.
Bayesian networks (normal Bayes) are a type of probabilistic graphical model that uses Bayesian inference to calculate probability. Bayesian networks aim to model conditional dependence, represented by edges in a directed graph, and are designed for taking an event that occurred and predicting the likelihood that any one of several possible known causes was a factor.
Copernicus, also known as Global Monitoring for Environment and Security (GMES), is a European programme for the establishment of a European capacity for Earth observation. The European Space Agency is developing satellite missions called Sentinels, where every mission is based on a constellation of two satellites. The main objective of the Sentinel-2 mission, active since 2015, is land monitoring, performed using its multispectral imager (MSI) covering 13 spectral bands. The Sentinel-2 mission produces two main products: Level-1C and Level-2A. Level-1C products are tiles with radiometric and geometric correction applied, where the geometric correction includes orthorectification; they are projected combining the UTM projection and the WGS84 ellipsoid. Level-2A products are considered the mission's Analysis Ready Data.
Each method is evaluated with an error matrix, and the methods are compared against each other. A confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa; both variants are found in the literature. The name stems from the fact that it makes it easy to see whether the system is confusing two classes. Each error matrix includes a kappa value. The kappa coefficient is a statistic used to measure inter-rater reliability for qualitative (categorical) items. It is generally thought to be a more robust measure than a simple percent agreement calculation, as κ considers the possibility of the agreement occurring by chance.
All analyses are performed on data located in the Republic of Croatia, Primorsko-goranska County.
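
The evaluation described above maps directly onto standard tooling; a minimal sketch with scikit-learn, using stand-in label vectors:

    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    # Reference labels (ground truth) versus labels predicted by one classifier;
    # stand-in values -- the real vectors come from validation pixels.
    y_true = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]
    y_pred = [0, 1, 1, 1, 0, 2, 2, 0, 1, 2]

    print(confusion_matrix(y_true, y_pred))          # rows: actual, columns: predicted
    print('kappa:', cohen_kappa_score(y_true, y_pred))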

Academic Track
UBT E / N209 - Floor 3
12:00
30min
Correlation between the greening rate of a city and local climate zones using free and open source data and tools (case study: city of Tirana)
Anja Cenameri

Albania is one of the most vulnerable countries in the Western Balkans in terms of climate change trends. Changing weather patterns have already been observed over the last 15 years, with increasing temperatures, decreasing precipitation, and more frequent extreme events like floods and droughts. Among the most affected cities is Tirana, where a time series analysis was carried out using FOSS data and tools. Our aim was to provide accurate map representations of local climate zones (LCZs) to track the changes of the last decade, based on an open online platform running on Google Earth Engine: the LCZ Generator, which uses free data sources from the Copernicus Hub (Demuzere et al. 2021). The satellite-data-based analysis used 5-15 training areas for each LCZ type and provided a supervised classification at 100 by 100 m ground resolution for the entire municipality of Tirana. The analysis shows that the rapid urbanization process has resulted in a decreasing proportion of green areas and unpaved surfaces in the municipality of Tirana, which consequently increases the vulnerability of the city to extreme weather events.
A large-scale map was also compiled using a free and open source Geographic Information System (QGIS), which seems to be the most effective way to identify the varying urban climate zones at the city planning level, since it shows the city's structures and even highlights the role of an individual building or small park (Cenameri, 2021).

Use cases & applications
UBT C / N109 - Second Floor
12:00
30min
Mergin Maps: capture geo-data and share your QGIS projects with ease
Peter Petrik

We show how Mergin Maps can be used in various real-world situations to leverage the power of the QGIS ecosystem to speed up and effectively capture data in the field and reliably collaborate with your team. We will not dive into technical details, but focus on a general understanding of what can be done nowadays in the field of professional geo-data capturing.

Do you need to capture the location of plants or animals with your personal phone? Or distribute this task to a group of volunteers without needing to train them? Or does your company run a network of pipes or fibre cables, where you use QGIS in the office for analysis and want your colleagues on site to use the same map? Are you fed up with using a camera and MS Excel, or even pen and paper, for such tasks? This talk will show you how others solve these challenges with Mergin Maps.

Mergin Maps is a free and open-source platform powered by the QGIS rendering engine to capture and share geo-data with ease. It has been developed by Lutra Consulting since 2017 and has served thousands of companies and individuals in full production for more than 2 years. It comes with Android and iOS apps that the general public can use without any training, as well as a powerful server to store, version and collaborate on your QGIS projects.

Open source geospatial ‘Made in Europe’
Mirusha
12:00
30min
Modernising Tasking Manager infrastructure using Terraform, cloud-native tools and good sense
Yogesh Girikumar

Learn how the Humanitarian OpenStreetMap Team uses modern tools like Terraform, AWS serverless offerings, and others to modernise the collaborative mapping tool Tasking Manager. The talk will focus on balancing infrastructure costs, cloud vendor lock-in, performance and DevOps processes.

Tasking Manager is an important collaborative mapping tool that is considered a public good. In recent times, the tool has left a lot to be desired in terms of performance and availability. The HOT Tech team set out to overhaul the architecture and deployment processes of Tasking Manager. I will discuss the soon-to-go-live improvements that touch upon Terraform, AWS serverless, CircleCI, observability processes, and developer experience.

Links: https://github.com/hotosm/tasking-manager

Drini
12:00
30min
Open source mapping library shoot out
Anita Kemp

MapLibre GL JS, Leaflet, Esri Leaflet, OpenLayers, and Cesium JS are all great mapping libraries. However, it can be difficult to decide which one to use for different applications. In this talk, I compare the strengths and weaknesses of each library based on different criteria. The criteria include the following:

  1. Library footprint and modularity.
  2. Load times for vector tile and image tile layers.
  3. Rendering performance of GeoJSON data.
  4. Styling and rendering features.
  5. Viewport performance and screen size responsiveness.

Use cases & applications
UBT F / N212 - Floor 3
12:00
30min
Rethink geo/open metadata edition in GeoNetwork
Olivia Guyot, Florent Gravin

This presentation is the follow-up to the datahub paradigm presented last year: the confluence of geo data and open data. This time we will look at the metadata edition and maintenance aspect.

Writing metadata to describe a dataset is an essential part of managing a catalog. Each record in a catalog has been written, or at the very least enriched, by actual humans. GeoNetwork is a very widely used open-source metadata catalog; as such, it offers powerful tools in this regard: custom edition forms, batch editing, templates, custom XSL processing, advanced editing in XML, etc.
Despite all these features, authoring metadata is often perceived as a difficult process, involving complex actions, convoluted validity rules and intricate knowledge of metadata schemas such as ISO 19139.

Our vision for this new metadata editor can be summed up in three phrases:
- Make metadata accessible to everyone
- Forget about metadata schemas
- Build your own editor

This editor is made to feed content into your Datahub. Whether you want to describe open data, geo data or anything else, the editor will make it simple for you! Come and discover the concepts behind the scenes.

Open Data
Outdoor Stage
12:00
30min
geOrchestra - project status
Emmanuel Belo, VAN DER BIEST François

geOrchestra is a complete spatial data infrastructure (SDI) that combines a number of widely used open source components, including GeoNetwork as the metadata catalogue, GeoServer, GeoWebCache, GeoFence, and Jasig CAS. During this talk we will present the project and its latest developments.

geOrchestra is an open source, modular, interoperable and secure spatial data infrastructure designed by people for people.

The technical architecture is based on modularity and interoperability. The extensive use of the Spring Framework allows the integration of additional components. Compliance with OGC standards is central, because only then can the various components and any external SDIs work together.

geOrchestra is supported by an underlying server infrastructure, which can be configured in an automated way if necessary. We support deployment on Kubernetes as well as with Ansible. geOrchestra has proven to be an innovative SDI in a highly orchestrated environment. Its modular architecture allows individual components to be deployed as microservices; components such as GeoServer Cloud or GeoNetwork Microservices can therefore be scaled as needed.

Nevertheless, an SDI must be user-friendly and adopt a user-centric approach. This is the latest direction the geOrchestra community has started to follow. New modules such as the Datafeeder simplify data registration, and the Datahub portal makes it very easy for a user to find the right dataset.

Current developments related to geOrchestra include a rewrite of the GeoNetwork metadata catalogue to provide a completely new user interface for editing metadata.

State of software
Lumbardhi
12:30
12:30
90min
lunch
Outdoor Stage
12:30
90min
lunch
Lumbardhi
12:30
90min
lunch
Drini
12:30
90min
lunch
Mirusha
12:30
90min
lunch
UBT E / N209 - Floor 3
12:30
90min
lunch
UBT F / N212 - Floor 3
12:30
90min
lunch
UBT C / N109 - Second Floor
13:30
13:30
30min
Earthquakes and OpenStreetMap
Danijel Schorlemmer

The substantial reduction of disaster risk and life losses, a major goal of the Sendai Framework by the United Nations Office for Disaster Risk Reduction (UNISDR), requires a clear understanding of the dynamics of the built environment and how it affects, in case of natural disasters, the life of communities, represented by local governments and individuals. The framework states that communities participating in risk assessments should increase their understanding of efficient risk mitigation measures.

Earthquakes are threatening many regions in the world with constantly increasing risk due to rapid urbanization and industrialization. Earthquakes do not kill people, buildings do. Thus, the main threat of earthquakes comes from building damage and collapse. To improve resilience and preparedness, we need to estimate the risk, the possible damage of buildings and the related human and financial losses. This requires not only the position, size and class of buildings, but also the reconstruction value and the number of people inside the building at any time. For this, exposure models are used that translate the physical earthquake hazard to building damage, human and financial losses. Exposure models usually describe the built environment of administrative regions as groups (aggregates) of different building classes and their frequency.

We present our open, dynamic, and global approach to describe, model, and classify every building on Earth with the greatest level of detail. Our model is based on the building data from OpenStreetMap and engineering information from open exposure models, combining these two sources to a building-by-building description of the exposed assets. We retain the aggregated descriptions where the building coverage in OpenStreetMap is incomplete and describe every building separately where building data is available. Due to the near-real-time computations of our model, it directly profits from the growth of OpenStreetMap and with about 5 million buildings added each month (or approx. 2 per second), the areas of incomplete coverage are constantly shrinking, making way for our building-specific exposure model.

Here, we briefly introduce the earthquake phenomenon, how it affects the built environment, why a high level of detail is necessary for useful assessments of the impact and consequences of earthquakes, how OpenStreetMap and other open data help us achieve this goal, and how communities can benefit from the model for their own risk assessments.

Open Data
UBT C / N110 - Second Floor
13:30
30min
Notebooks in (geo)datascience
Nicolas Roelandt

In the FOSS4G 2021 programme, the words 'notebook' and 'jupyter' each appeared ten times in the abstracts of four workshops and four presentations.

In 2022, 'jupyter' and 'notebook' appear in two workshops and two presentations abstracts.
More discreetly, at least three workshops and one scientific paper used notebooks without mentioning them.
As we can see, notebooks are becoming increasingly common in data science and the geospatial world.

But what is a notebook? What is it useful for? What are its limitations?
Are there other platforms than Jupyter?
Can we do anything other than Python? What about geospatial? Are these tools FOSS?
These are some of the questions that this presentation will try to answer.
(TL;DR: yes!)

If you have never heard of Quarto, Observable or Org-mode, this presentation is for you.

Use cases & applications
UBT C / N111 - Second Floor
14:00
14:00
30min
Digital Earth Observation infrastructures and initiatives: a review framework based on open principles
Margherita Di Leo

In recent years, the democratisation of access to Earth Observation (EO) data, in parallel to the increased volume and variety of such data, have led to the paradigm shift towards “bringing the user to the data” [4]. This is exemplified by the European Copernicus Programme, which on a daily basis makes available terabytes of high quality, openly-licensed EO data suitable for a wide range of research and commercial applications. The computational power required to work with these large amounts of data, as well as a renewed interest for Artificial Intelligence models, and the need for large storage volumes were met with a rise of cloud-based digital infrastructures and services. These infrastructures provide environments that can be readily instantiated and equipped with the necessary data and processing tools all accessible in one place, in a highly automated and scalable manner to support users in analysing EO data in the cloud. Several such infrastructures as well as other initiatives (the latter also including services and components offering specific capabilities) have been developed, either as a byproduct of single companies leveraging enormous hyperscale computing powers (such as Google Earth Engine, Microsoft Planetary Computer and Earth on AWS) or as projects funded and operated by international communities that are primarily driven by specific policy objectives. Examples are projects publicly funded by the European Commission and the European Space Agency, such as the Data and Information Access Services (DIAS) platforms, and the Thematic and Regional Exploitation Platforms.
The current landscape of digital infrastructures and initiatives for accessing and processing EO data is fragmented, with varying levels of user onboarding and uptake success, see e.g. [3]. Within this context, we offer a user-centric framework used to review 50+ existing digital infrastructures and initiatives for EO. Our work is expected to extend the scope and outlook of similar smaller reviews [1], where 7 digital infrastructures are qualitatively compared according to a set of ten criteria, mainly of a technical nature. The proposed review framework is conceptualised from a user-driven perspective by mapping user needs to current infrastructure and service offers, ultimately aiming at identifying overlaps and gaps in the existing ecosystem. The framework is organised around 5 pillars corresponding to common problem areas: 1) sustainability of the service, 2) redundancy of service, 3) user onboarding, 4) price and 5) user needs. Within each problem area, we further identified a number of good practices for user-centric developments of infrastructure and services. The good practices are derived from the authors’ longstanding experience in using digital EO infrastructures and are framed around several aspects related to open principles, both from the technical and the organisational side.
The first pillar is the sustainability of the infrastructure/initiative after the initial funding phase. Good practices include: fostering the creation of a community of users/developers that ensures preservation/evolution of the infrastructures/tools; releasing software under open source licenses, which encourages the reuse and growth of products considered to be useful by the community; adopting open standards and releasing specifications in the public domain, facilitating interoperability and reuse.
The second pillar is the fragmentation between infrastructures/initiatives causing redundancy of services. Relevant good practices involve the use of open source licensing models in favour of collaboration and reuse, the adoption of common open standards and Application Programming Interfaces (APIs), the federation of resources and federated authentication.
The third pillar consists of the steep learning curve often needed to start using digital infrastructures/initiatives; related good practices include, in addition to well-written and openly available documentation (including resources such as step-by-step videos and tutorials), the availability of sandboxing solutions that allow users to experiment with the infrastructure/initiative to understand if the offer matches the needs.
The fourth pillar is the price of using infrastructures, which is not always transparent and/or clearly describing the services offered. The related good practice consists in the provision of a full and transparent list of services and related costs.
The fifth and last pillar is the top-down design and implementation of the infrastructure/initiative, with limited consideration of user’s needs. Good practices include co-design approaches, where users are actively involved in all phases and their feedback used to adjust the developed prototype [2], the establishment of helpdesks, forums, mailing lists and channels fostering community growth around the project, and the adoption of open source development and open governance.
The results of applying this review framework to 50+ digital EO infrastructures and initiatives shed light on a first set of limitations (from a user-driven perspective) common to many platforms. The most important include: discoverability of available datasets; steep learning curve to start using their services; difficulty to understand what the offered services are and whether they fit user needs; not fully transparent pricing; no reusability of software components; poor interoperability; vendor lock-in; no facilitation for code sharing/reuse; lack of guarantee of long-term sustainability of the infrastructure; internal policies hampering publication of commercial added-value code/algorithms. At the same time, the review identified some promising digital EO infrastructures and initiatives that already adopt most of the aforementioned good practices. These include, among others, the OpenEO API initiative, which aims to facilitate interoperability between cloud computing EO platforms, and the infrastructure of the Open Earth Monitor project, which adopts an open source, open data and open governance model by default.
This review, which is currently being applied to a growing number of infrastructures and initiatives, is expected to help the user community identify overlaps, gaps and synergies as well as to inform the providers of infrastructures and initiatives on how to improve existing services and steer the development of future ones.

Academic Track
UBT E / N209 - Floor 3
14:00
30min
How to improve OpenStreetMap for the production of a hiking map
Mathias Gröbe

In preparation for a new Alpine Club map by the Institute of Cartography of TU Dresden, covering the area around Mt. Ushba in the Great Caucasus in Georgia, the decision was made to use OpenStreetMap as the primary data source for the map. As a result, the fieldwork carried out on site was contributed to OpenStreetMap, so that the information gained could be used for map production via OpenStreetMap. Data imports and organized mapping had already happened in the past, leaving gaps that could only be filled by fieldwork.

Mapping campaigns took place in 2021 and 2022. In preparation, it was necessary to identify missing or uncertain information. The catalogue of objects which should be mapped was derived from existing Alpine Club maps and the feature tags of OpenStreetMap. Several trails currently missing in OpenStreetMap were identified by collecting and comparing openly available GPS tracks, hiking guides, and old maps. The comprehensive information collection summarized the knowledge of all the sources. It became central for planning the office work on the data and organizing the extensive on-site mapping.

Based on the collected information, the routes were planned in advance and assigned to the mapping teams during the fieldwork. On tour, new data was collected that could not be obtained from aerial images, such as small paths, hiking routes, guideposts, and POIs.

The collection of geographical names worked similarly to the collection of missing paths. After reviewing and selecting various sources, an updated set of names was compiled. Old maps played an important role because they sometimes contain names that need to be added or allow updates of more recent documents. Combined with background literature on the region, uncertainties in assigning geographical features could frequently be resolved. Asking locals helped in finding the correct spelling. The result is a much more consistent toponym base, both in the OpenStreetMap database and in the derived map.

The presentation will share the knowledge gained in preparing and organizing the fieldwork for such a project. Significant aspects are how to identify missing ways and how to collect geographical names.

Open Data
UBT C / N110 - Second Floor
14:00
30min
Implementing OGC API - Processes with prefect and pygeoapi
Francesco Bartoli, Ricardo Garcia Silva

The Open Geospatial Consortium API family of standards (OGC API) are being developed to make it easy for anyone to provide geospatial data to the web, and are the next generation of geospatial web API standards designed with resource-oriented architecture, RESTful principles and OpenAPI. In addition, OGC APIs are being built for cloud capability and agility.

The OGC API - Processes standard supports the wrapping of computational tasks into executable processes that can be offered by a server through a Web API and be invoked by a client application. The standard specifies a processing interface to communicate over a RESTful protocol using JavaScript Object Notation (JSON) encodings. Typically, these processes execute well-defined algorithms that ingest or process vector and/or coverage data to produce new datasets or analyses.

pygeoapi is an open source Python server implementation of the OGC API suite of standards. The project emerged as one of the most effective reference implementations that provides the capability for organizations to deploy OGC API endpoints using OpenAPI, GeoJSON, and HTML. pygeoapi is built on an extensible plugin framework in support of clean, adaptive data integration and easy customization.

Prefect is an open source data workflow orchestration platform developed in Python. It provides robust orchestration of workflows and offers a large set of features ranging from monitoring to cloud storage support and periodic execution. This makes it a perfect fit for managing the execution of OGC API - Processes requests in pygeoapi.

This presentation will provide an overview of the prefect process manager plugin for pygeoapi and will demonstrate:

  • How to use pygeoapi for handling OGC API - Processes use cases
  • How the pygeoapi prefect plugin is a good match for managing the execution of processes and what its main strengths are as a geospatial data processing platform (see the request sketch below)
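
As a flavour of what the demo involves: OGC API - Processes execution is a plain JSON-over-HTTP exchange. The following is a minimal sketch, assuming a local pygeoapi instance exposing its demo hello-world process; the host, port and inputs are assumptions, while the /processes and /processes/{processId}/execution paths come from the standard.

    import requests

    BASE = "http://localhost:5000"  # assumed pygeoapi endpoint

    # List the processes advertised by the server (OGC API - Processes, Core).
    procs = requests.get(f"{BASE}/processes", headers={"Accept": "application/json"}).json()
    print([p["id"] for p in procs["processes"]])

    # Execute a process: the standard defines a POST with a JSON "inputs" object.
    resp = requests.post(
        f"{BASE}/processes/hello-world/execution",
        json={"inputs": {"name": "FOSS4G"}},
    )
    print(resp.status_code, resp.json())
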
State of software
UBT C / N109 - Second Floor
14:00
30min
Interfacing QGIS processing algorithms from R
Floris Vanderhaeghe

R is well-known for its unsurpassed provision of well-documented statistical functions and packages in the default installation. Less well-known is its excellent support for spatial data through packages such as sf, terra, and stars. A thriving ecosystem of diverse and often topic-specific packages builds on these foundations, making R a powerful command-line GIS (Geographic Information System) for reproducible research. However, dedicated GIS software (e.g. QGIS) offers specific processing algorithms that are either not available in R or may achieve a higher level of performance than their R equivalents. This presentation describes how it is now possible to combine the strengths of R and QGIS through R packages that interface processing algorithms provided by QGIS. These packages (qgisprocess, qgis) allow users to create data processing pipelines that combine R and QGIS algorithms almost seamlessly. We discuss the current state of these R packages and demonstrate the usage of their most important functions by example. Finally, we shed light on future development directions and seek feedback from the community.

State of software
Lumbardhi
14:00
30min
Lake bottom DEMs from open data with GDAL and GMT
Jukka Rahkonen

Finland is reputed to be the Land of a Thousand Lakes, but a more precise estimate is that Finland has 57,000 lakes larger than one hectare. The precise shorelines of all the lakes have been available as open data since 2012, but the situation with bathymetric data is not as good. Depth contours are available for about 80% of the total lake area, but the oldest soundings date from the end of the 19th century. Bathymetric data of the lakes has not been considered particularly important, and the old measurements have not been systematically updated and verified. Therefore, the most common acquisition method in the existing bathymetric data is still manual measurement with a plumb line through the ice. Because the depth points are frequently 75-100 meters apart, such data are only usable for creating rather approximate depth contours.

However, since the mid-1980s the Finnish Environment Agency, the Finnish Transport and Communications Agency Traficom, and their predecessors have been mapping lake bathymetry with sonar sounding. In recent years these agencies have published their depth point data as open data under the CC-BY 4.0 license. These new datasets are essentially XYZ point clouds. Thanks to open source GIS programs, anybody can take these datasets and create digital elevation models (DEM) of the lake bottoms, colored hillshade visualizations, 3D models, and even traditional depth contours.

This presentation will dig into the nature of the data that is collected with sonar soundings and how it affects the selection of the interpolation method. A complete open source workflow that is using GDAL and Generic Mapping Tools (GMT) will be presented. The workflow begins from raw point measurements and lake shoreline vectors, and yields a DEM, hillshade visualization with a color table, and depth contours. Results for more than 1800 Finnish lakes will be available online, but the main outcome is the workflow itself. Because only command line tools which can be scripted and parameterized are used, it is simple to tune the process so that the output will suit different needs.
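
As an illustration of the interpolation and derivative steps of such a workflow, here is a minimal sketch using GDAL's Python utilities instead of the presented command-line scripts; the file names, interpolation algorithm and parameters are assumptions, not the author's exact settings.

    from osgeo import gdal

    gdal.UseExceptions()

    # Interpolate the scattered sonar depth points (X Y Z) into a DEM raster.
    gdal.Grid(
        "lake_dem.tif",
        "depth_points.vrt",  # e.g. a VRT wrapping the raw XYZ/CSV point file
        algorithm="invdist:power=2.0:radius1=100:radius2=100",
        zfield="depth",
    )

    # Derive a hillshade visualization from the DEM.
    gdal.DEMProcessing("lake_hillshade.tif", "lake_dem.tif", "hillshade", zFactor=2.0)

    # Depth contours could then be traced with the gdal_contour utility, e.g.:
    #   gdal_contour -a depth -i 1.0 lake_dem.tif contours.gpkg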

Open Data
UBT D / N112 - Second Floor
14:00
30min
Navigate urban scenarios with MapStore 3D tools
Lorenzo Natali, Stefano Bovio

This presentation focuses on the use of MapStore to navigate urban scenarios using its 3D tools and capabilities. The latest versions of MapStore include improvements and tools for the exploitation of 3D data, such as Map Views, Styling, 3D Measurements and more. Support for 3D Tiles and glTF models through the Cesium mapping library has also been greatly enhanced to enable more powerful integrations.

Attendees will be presented with a selection of use cases around the following topics: visualization of new projects for urban planning, relations between different levels of a city and descriptions of events inside a city. At the end of the presentation attendees will be able to use the presented workflows to replicate them on different urban scenarios using the 3D tools of the MapStore WebGIS application.

Use cases & applications
UBT C / N111 - Second Floor
14:00
30min
Open source tooling for hydrodynamic simulation software development
Leendert van Wolfswinkel

In this talk we give an example of how open source tooling enables companies to fast-track software development, while simultaneously benefitting the FOSS4G community. Our use case is the development of the user interface for hydrodynamic simulation software, including editing and analysis, called the 3Di Modeller Interface.

Traditionally hydrodynamic simulation software companies develop their own user interfaces, usually closely resembling GIS packages, (re-)implementing features like background maps, layer management, geoprocessing tools, and styling options. In our approach we turned it around. Instead of developing our own GIS-like software, we used QGIS to leverage development. Specifically for larger governmental agencies (where a certain well-known proprietary GIS suite is often the only GIS that employees are allowed to use), we packaged our implementation in an installer, enabling modellers to use QGIS for hydrodynamic analysis within their organisations.

This approach has several advantages for users and for the FOSS4G community. For users, hydrodynamic modelling tools seamlessly integrate with the ever-expanding GIS capabilities that QGIS has to offer, and users can build their own custom tooling, combining our own open libraries for hydrodynamic modelling with FOSS4G libraries like PyQGIS, Shapely, NetworkX, GDAL or QGIS.
For the FOSS4G community, this approach increases the user base, including users who develop their own plugins; it increases sustainable memberships and creates job opportunities for FOSS4G developers.

The 3Di Modeller Interface is developed by Nelen & Schuurmans, a Dutch water and IT company, in collaboration with Lutra Consulting, a European FOSS4G company. Its development relies on several open source projects: QGIS, Shapely, GDAL, GeoAlchemy2, and NetworkX, amongst others. When we started in software development, we used open source mainly because it was free of cost. During the development, the board of directors became convinced that contributing to several open source projects (financially and/or developing) is the way forward.

Use cases & applications
Drini
14:00
30min
Orfeo ToolBox : roadmap to a more modular and pythonic OTB
Yannick TANGUY

Orfeo ToolBox is now a mature software with more than 100 applications dedicated to remote sensing and data extraction.
It is used both in academic work and in operational processing chains.
OTB now needs to become more modular ("core", "machine learning", "SAR", "feature extraction") and also easier to use through Python.
We will present the recent developments and our roadmap.

State of software
Outdoor Stage
14:00
30min
State of deegree: The 2023 update
Torsten Friebe, Dirk Stenger

Initiated in 2002, the OSGeo project deegree has developed into an important and mature building block for Spatial Data Infrastructures (SDI) over the last 20 years. The project provides 9 official Reference Implementations of OGC Standards such as GML, WFS, WMS, and OGC API - Features.

In this talk, we will focus on the recent improvements available in deegree webservices v3.5 and the updated roadmap for the next version, which includes support for Java 17. We will also show how the OGC Standards OGC API - Features Core and CRS have been implemented and can be used with existing configurations.

Finally, we will present the future directions of the project and what developments are currently planned.

State of software
UBT F / N212 - Floor 3
14:00
30min
When vector tiles are not enough: advanced visualizations with deck.gl
Marti Pericay, oscarfonts

Deck.gl is a framework for visualization, animation and 3D editing of large volumes of data (up to millions of points), in the browser, with optimal performance thanks to WebGL technology and the computing power of the GPU.

Deck.gl is prepared to work seamlessly with WebGL based map libraries such as MapLibre GL JS, Mapbox GL JS or Google Maps. It extends their capabilities with a large number of formats, data types and layer visualizations, such as point clouds (tessellated or not), real 3D vector data, 3D models, on-the-fly clustering, trip animations, GPU filtering, etc. The deck.gl code is not only free, but designed with extensibility in mind, making it very easily customizable.
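
For a flavour of the layer types mentioned above, here is a minimal sketch using pydeck, deck.gl's Python binding, to aggregate points into extruded hexagons on the GPU; the sample coordinates and parameters are illustrative only, and the talk's own examples run directly in the browser.

    import pydeck as pdk

    # A few illustrative points (lng/lat); real deployments stream far larger datasets.
    points = [{"lng": 21.16, "lat": 42.65}, {"lng": 21.17, "lat": 42.66}]

    layer = pdk.Layer(
        "HexagonLayer",       # on-the-fly GPU aggregation into hexagonal bins
        data=points,
        get_position="[lng, lat]",
        radius=200,
        extruded=True,
    )
    view = pdk.ViewState(latitude=42.65, longitude=21.16, zoom=11, pitch=40)
    pdk.Deck(layers=[layer], initial_view_state=view).to_html("hexbins.html")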

In this presentation we will show 4 use cases developed for companies and administrations with specific needs. We chose deck.gl (over Mapbox/MapLibre alone) to provide rich interactivity and the ability to visually analyze large amounts of data.
We will present the challenges we faced and how deck.gl was used:
1. Information system for precision irrigation: in a region of 25,000 plots, we show animated time series of evapotranspiration data, vegetative vigor, or water needs during an annual cycle.
2. Biodiversity world map: instant loading of a dataset of 200,000 points with GPU filtering, providing interactivity and refresh rates far beyond the ones offered by Mapbox or MapLibre.
3. Precision topographic measurements on terrain surface models: visualization of point clouds, terrains, textures, contour lines and other vector cartography in 3D, multi-profiles, and in-browser 3D editing.
4. Urban data control panel: from a dataset of 40,000 georeferenced records, we apply spatiotemporal and categorical filtering, 3D dynamic aggregation and symbolization, and computation of indicators and graphs in real time.

Use cases & applications
Mirusha
14:30
14:30
30min
An application-oriented implementation of hexagonal on-the-fly binning metrics for city-scale georeferenced social media data
Dominik Weckmüller

Introduction

The analysis of georeferenced social media (SM) data holds broad potential for informing municipal policy-making. Local adaptation to climate change and disaster resilience, transforming city centers, gentrification, and demographic change are significant challenges for municipalities.
In light of these pressing topics, a growing awareness of data-driven decision making has fostered geospatial interfaces that allow practitioners to interactively explore data sources.
SM in particular offers the potential of a live feed and a continuous reflection of events at scale. Although many studies have an urgent need for purpose-driven, customized visualization of spatial data, little emphasis has been put on how to display these data.
Many studies on map-based visualization in SM use traditional cartographic methods, such as pins or choropleth maps, with varying color scales or heatmaps to represent absolute or relative values. However, SM data presents challenges that require more sophisticated statistical metrics and flexible visualization techniques. We assess the signed chi metric, specifically designed for mapping via binning, and expand its use in a Bonn case study using an on-the-fly hexagonal binning method for frontend applications like dashboards. We then evaluate the advantages and disadvantages of the various proposed metrics and visualizations in terms of their practical applications.

Problem Statement

As the overview by Teles da Mota & Pickering (2020) has shown, research involving geo-SM from different platforms has become increasingly popular but bears specific problems inherent to the characteristics of volunteered geographic information (VGI); volume, veracity, velocity and variety are just broad categories used to characterize these.

First, access to SM databases, such as Meta or Twitter, is usually limited to capital-intensive partner companies. Instagram's public-facing API is largely undocumented and opaque to end-users, causing uncertainty about data selection criteria (Dunkel 2023). Hence, the lack of knowledge about data context and possible biases can affect the representativeness of the data subset.

Second, "super users" sharing repeated content may create noise and skew analysis outcomes if absolute values are solely considered.

Third, as Teles da Mota & Pickering (2020) point out, research has mainly been conducted for large areas, ranging from national parks to entire countries or occasionally even the whole world (cf. Dunkel et al. 2023). Studies working with data at the municipal level, where individual locations and differences of only a few meters play a significant role, usually focus not on methodological cartographic issues or appropriate metrics but rather on effectively communicating core research results. Due to this lack of reference material for the municipal level, a research gap concerning proper visualization methods is identified.

Lastly, VGI, as practiced by Instagram, poses a unique problem for researchers. Users are allowed to create public "Instagram Locations" and tag their posts with a coordinate of their choice, which can then be referenced by other users as well. However, users are not obligated to provide a clear definition of what exactly is meant by the location they choose, creating ambiguity. For instance, the "Bonn" location's coordinates (50.7333, 7.1) are situated in the city's center. What it actually refers to is entirely subject to the interpretation of the user. It could refer to different extents of the city center, the official administrative boundaries of Bonn, or anything loosely associated with Bonn, including cultural references or events. This ambiguity, which Meta is aware of (Delvi et al. 2014), can be observed at different zoom levels such as city districts, cities, countries or continents throughout Instagram data and poses an enormous challenge to researchers working with city-scale areas of interest.

Research Interest

In order to deal with these challenges, thorough data cleaning is insufficient. We propose an application-oriented system of metrics for data processing and visualization depending on the user's needs, comparing possible application scenarios as well as limitations based on a case study for the city of Bonn with Instagram data from 2010 - 2022:
1. Absolute values – absolute number of observed posts per location or bin
2. Relative values – relation between observed and expected posts per location or bin
3. Signed chi – statistic value indicating significance and direction per location or bin

The observed value usually refers to a quantity found at a specific bin, using a specific query such as a thematic filter. In contrast, the expected value often refers to an average quantity of a generic query, such as the average of all SM posts in Bonn, and it is used to identify over- or underrepresented spatial patterns at local bins (Visvalingam 1978). However, which values are used as observed and expected for normalization is up to the analyst (Wood et al. 2007). One could also compare average thematic posts in all German cities (the expected value) to those found in Bonn, as a means to concentrate on the difference of the subject under analysis (posts in the city of Bonn). Another option could be to use discrete historical time intervals as the expected value and compare them to recent post quantities to identify recent and unusual spatial posting behavior trends.
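
To make the three metrics concrete, here is a minimal sketch computing them for a single bin from pre-aggregated counts; the signed chi form shown is the common sign(o - e) * sqrt((o - e)^2 / e) and may differ in detail from the authors' implementation.

    import math

    def bin_metrics(observed: float, expected: float) -> dict:
        """Per-bin metrics from observed and expected post counts."""
        delta = observed - expected
        return {
            "absolute": observed,                                   # raw count
            "relative": observed / expected if expected else None,  # obs/exp ratio
            "signed_chi": math.copysign(math.sqrt(delta ** 2 / expected), delta)
            if expected else None,                                  # direction + significance
        }

    print(bin_metrics(observed=42, expected=10))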

We evaluate these metrics through a hexagonal on-the-fly binning approach with different color scaling and propose easily customizable scripts for the leaflet-d3 plugin. We provide all our scripts for reproduction with explanations and usage recommendations as well as a demo dashboard in a public GitHub repository.

Our findings suggest that all of the investigated metrics can offer insight into the data, but their appropriate use highly depends on the research question at hand. In a dashboard frontend, outliers should be highlighted, non-significant values reduced in opacity, and intra-dataset validations carried out through automatic comparisons across metrics and filters. Overall, the absolute metric is to be used sparingly. The relative metric generates only a very narrow gain in knowledge, whereas the signed chi metric yields the best overall results and deals very well with the above issues.

Academic Track
UBT E / N209 - Floor 3
14:30
30min
BBOX – a modular OGC API server
Pirmin Kalberer

BBOX is a new open source OGC API implementation, with support for established OGC services driven by MapServer or QGIS Server. BBOX is implemented in Rust, with a built-in high-performance web server.

Supported OGC API Services:
* OGC API - Maps, with support for OGC WMS 1.3
* OGC API - Tiles, with support for WMTS and XYZ endpoints
* OGC API - Features
* OGC API - Processes, with multiple processing engine backends

Enterprise ready:
* Authentication / Authorization
* Instrumentation + Monitoring
* First class Docker support

Simple usage:
* bbox-server serve --map alaska.qgz

State of software
UBT C / N109 - Second Floor
14:30
30min
Development of QGIS based topographic data management system
Eero Hietanen

The National Land Survey of Finland (NLS) is rebuilding its topographic data management system using open source components. The new system will be based on QGIS and PostgreSQL. The goals of the renewal are:
- Utilization of new technologies and standards
- Advancement in the transition from producing map data to producing spatial data
- Enhancement of the quality and timeliness of data
- Enhancement of the production through automation and better tools

The current system has been in use for over 20 years and has been developed throughout its lifespan. NLS is planning to replace the current production system after the first phase of development in 2025.

In this talk, I will present the status of the development, elaborate on the main objectives of the first phase and introduce the OS components published so far. In the first two years of development, the focus was on concurrent data management by 100 operators and on the integration of the (proprietary) stereo mapping tools. In addition, we have designed and implemented OS quality assurance tools to ensure the logical consistency of the features concerning attributes, geometries and topology. These tools also include a topological rule set for topographic data management in PostgreSQL.

We have also published some plugins for the operators to improve the digitizing workflow. To facilitate the development work, we have contributed some development tools for QGIS plugin developers. The OS publication of the service and client components of the concurrent data management tools is not yet on the roadmap, although it is our final goal.

The current process of maintaining topographic data includes some field work too. QField is the chosen OS tool for that purpose. Now, we are defining the additional functionalities needed to make the field work efficient enough and to smooth out the data transfer between the main system and the mobile application.

We have yet to make significant progress in the integration of the TDMS with the systems that produce and provide products. In relation to our products, we need to find a way to easily maintain the base topographic data and its enriched cartographic derivatives and place names as part of the production process.

Use cases & applications
UBT C / N111 - Second Floor
14:30
30min
Easily publish your QGIS projects on the web with QWC2
Sandro Mani

QWC2 (QGIS Web Client 2) is the official web application of QGIS, which allows you to publish your projects with the same rendering, thanks to QGIS Server. The environment is composed of a modern responsive front-end written in JavaScript on top of ReactJS and OpenLayers, and several server-side Python/Flask micro-services that enhance the basic functionalities of QWC2 and QGIS Server.

QWC2 is modular and extensible, and provides both an off-the-shelf web application and a development framework: you can start simple and easy with the demo application, and then customize your application at will, based on your needs and development capabilities.

This talk aims to introduce this application and to show how easy it is to publish your own QGIS projects on the web. An overview of the QWC2 architecture will also be given. It will also be an opportunity to discover the latest features developed in the past year and ideas for future improvements.

State of software
Lumbardhi
14:30
30min
GeoHealthCheck - QoS Monitor for Geospatial Web Services
Tom Kralidis, Just van den Broecke

Keeping (OGC) geospatial web services up and running is best accommodated by continuous monitoring: not only does downtime need to be guarded against, but also whether the services are functioning correctly and do not suffer from performance or other Quality of Service (QoS) issues. GeoHealthCheck (GHC) is an Open Source Python application for monitoring the uptime and availability of OGC Web Services. In this talk we will explain GHC basics, how it works, and how you can use and even extend GHC (plugins).

There is an abundance of standard (HTTP) monitoring tools that can guard the general status and uptime of web services. But OGC web services often have their own error ("Exception") reporting that is not caught by generic HTTP uptime checkers. For example, an OGC Web Map Service (WMS) may provide an Exception as a valid XML response or as an error message written "in-image", or an error may render a blank image. A generic uptime checker may conclude that the service is functioning, since those requests return an HTTP status "200".
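
A minimal sketch of the kind of semantic check this implies, assuming a placeholder WMS endpoint and layer name: a GetMap request may return HTTP 200 yet carry an XML service exception instead of an image (this illustrates the idea, not GHC's actual Probe code).

    import requests

    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": "example_layer", "CRS": "EPSG:4326",
        "BBOX": "-90,-180,90,180", "WIDTH": "256", "HEIGHT": "256",
        "FORMAT": "image/png",
    }
    r = requests.get("https://example.org/wms", params=params)

    healthy = (
        r.status_code == 200
        and r.headers.get("Content-Type", "").startswith("image/")
        and b"ServiceException" not in r.content[:2048]
    )
    print("OK" if healthy else "Exception or non-image despite HTTP 200")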

Other OGC services may have specific QoS issues that are not directly obvious. A successful and valid "OWS GetCapabilities" response does not guarantee that the individual services are functioning correctly. For example, an OGC Web Feature Service (WFS) based on a dynamic database may return zero Features in a GetFeature response, caused by issues in an underlying database. Even standard HTTP checkers supporting "keywords" may not detect all failure cases. And many OGC services expose multiple "layers" or feature types: how to check them all?

What is needed is a form of semantic checking and reporting specific to OGC services!

GeoHealthCheck (GHC) is an Open Source (MIT) web-based framework through which OGC-based web services can be monitored. GHC is written in Python (with Flask) under the umbrella of the GeoPython GitHub Organization. It is currently an OSGeo Community Project.

GHC consists of a web UI through which OGC service endpoint URLs and their checks can be managed and monitoring results can be inspected, plus a monitoring engine that executes scheduled "health-checks" on OGC service endpoints. A database stores the results, allowing for various forms of reporting.

GHC is extensible: a plugin system is available for "Probes" to support an expanding number of OGC-specific requests and checks. Work is in progress to provide a GHC API for various integrations.

Info, sources, demo: https://geohealthcheck.org

State of software
UBT F / N212 - Floor 3
14:30
30min
Kart: Practical Data Versioning for rasters, vectors, tables, and point clouds
Robert Coup

We’re drowning in data, but the geospatial world lags badly behind in versioning tools compared to our software counterparts. Kart (https://kartproject.org) is solving this with a practical open tool for versioning datasets, enabling you to work more efficiently and collaborate better.

We will introduce you to Kart and demonstrate some of the key features, including our QGIS plugin. And we'll highlight what’s coming next on our roadmap.

Since 2022 we have added support for Raster and Point Cloud datasets, and we'll be showing how we build on Kart's versioning and spatial filtering techniques to efficiently navigate, access, and use large and small datasets. For rasters and point-cloud datasets, we'll show how you can get the benefits of Kart without having to duplicate data that is already hosted in S3 in a useful format.

Kart allows you to quickly and easily manage history, branches, data schemas, and synchronisation for large & small datasets between different working copy formats, operating systems, and software ecosystems.

Modern version control unlocks efficient collaboration, both within teams and across organisations: everyone stays on the same page, and changes can be reviewed and traced easily, ultimately making better use of your time.

State of software
Outdoor Stage
14:30
30min
The Survey of Vectortile techniques: Static vs Dynamic
IGUCHI Kanahiro

The vector tile ecosystem has brought big changes to web mapping, especially in terms of client-side map rendering. These days, the costs of producing and streaming tiles have been dramatically reduced by techniques such as tippecanoe and PMTiles. However, an important problem remains unsolved: dynamic tiles. The techniques that are mature and widely used are for static tiles, and static tiles are not good at streaming frequently updated data, yet we sometimes need to serve such data dynamically. In this talk, I will survey existing techniques for dynamic tiles and propose a solution.
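
One widely used technique for dynamic tiles, shown here as a minimal sketch, is rendering Mapbox Vector Tiles per request with PostGIS; the table, column and connection details are placeholders, and this is one possible approach rather than the talk's specific proposal. The table is assumed to hold geometries in Web Mercator (EPSG:3857).

    import psycopg2

    SQL = """
    WITH bounds AS (
        SELECT ST_TileEnvelope(%(z)s, %(x)s, %(y)s) AS geom
    ),
    mvtgeom AS (
        SELECT ST_AsMVTGeom(t.geom, b.geom) AS geom, t.name
        FROM roads t, bounds b
        WHERE t.geom && b.geom
    )
    SELECT ST_AsMVT(mvtgeom.*, 'roads') FROM mvtgeom;
    """

    def tile(z: int, x: int, y: int) -> bytes:
        """Render one z/x/y vector tile on the fly; edits are visible immediately."""
        with psycopg2.connect("dbname=gis") as conn, conn.cursor() as cur:
            cur.execute(SQL, {"z": z, "x": x, "y": y})
            return bytes(cur.fetchone()[0])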

Use cases & applications
Mirusha
14:30
30min
The state of OpenStreetMap buildings: completeness assessment using remote sensing data
Laurens Oostwegel, Danijel Schorlemmer

OpenStreetMap (OSM) is the largest crowd-sourced mapping effort to date, with an infrastructure network that is considered near-complete. The mapping activities started as in any crowd-sourced information platform: the community expanded OSM anywhere there was a collective interest. Initial efforts were found around universities or hometowns of mappers. Events such as natural disasters can also trigger a major update. The recent earthquakes in Turkey and Syria led to a massive contribution by the Humanitarian OSM Team (HOT) of more than 1.7 million buildings in the region in less than a month after the event. This type of activity results in a map of non-uniform completeness, with some areas having all building footprints mapped, while other areas remain incomplete or even untouched. Currently, with 550 million footprints, OSM contains between a quarter and half of the total building footprints in the world, if we estimate that there are around 1-2 billion buildings worldwide.

A global view on the local completeness of buildings in OSM did not yet exist. Unlike other efforts that only look at a subset of OSM building data (Biljecki & Ang 2020; Orden et al., 2020; Zhou et al., 2020), we have used the Global Human Settlement Layer (GHSL) to estimate the completeness of the entire dataset. The remote sensing dataset is distributed onto a grid of approximately 100x100 meter tiles. In each tile of the grid, the built-up area of GHSL is compared to the total area of OSM building footprints. The computed ratio is measured against a completeness threshold that is calibrated using areas that were manually assessed.
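
In sketch form, the per-tile test can be expressed as follows; the arrays and the threshold value are illustrative, not the calibrated numbers from the study.

    import numpy as np

    # Built-up area from GHSL and summed OSM footprint area per ~100x100 m tile (m^2).
    ghsl_built = np.array([8000.0, 5000.0, 0.0, 6500.0])
    osm_built = np.array([7500.0, 1000.0, 0.0, 6400.0])

    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(ghsl_built > 0, osm_built / ghsl_built, np.nan)

    THRESHOLD = 0.8  # calibration against manually assessed areas; value assumed
    print(ratio, ratio >= THRESHOLD)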

Using information derived from remote sensing datasets can be problematic: GHSL does not measure building footprints alone; it includes any human-built structures, including infrastructure and industrial areas. In addition, due to sub-optimal input data or failing algorithms, the dataset is not of the same quality as the crowd-sourced data in OSM in areas that are complete. Even with these limitations, a comprehensive global completeness assessment has been created. The assessment should not be used as ground truth, but rather as a reflection of the OSM building dataset as it is and as a guideline for future priorities. Statistics on regional completeness can be created, and the quality of GHSL could be assessed in countries that are considered complete, such as France or the Netherlands.

Community & Foundation
UBT C / N110 - Second Floor
14:30
30min
Tracking Climate Change in Africa with open data
Peter Hoefsloot

Climate change is affecting our daily lives. For many years we have been interested in how this will influence agriculture and livelihoods on the African continent. In this talk we will show a tracking methodology based on open data and open source software. The main data source is satellite imagery from METEOSAT (MSG) as well as rainfall estimates by NOAA, used to show trends over the last 15 years. We will share links to free data and scripts and list all software used in a step-by-step guide.

Open Data
Drini
15:00
15:00
5min
Adaptation of QGIS tools in high school geography education
Jakub Trojan

Geographic Information Systems (GIS) have been around for more than 60 years and have become a significant part of many scientific disciplines with a spatial component. In the last decades, educators have been trying to figure out how to adopt GIS tools for their own fields of study and their classrooms (Milson et al., 2012). Since then, several studies of their efforts have been carried out. Thanks to the emergence of open source software and open data, new opportunities for their visions have unfolded (Petráš, 2015). QGIS in particular has lately been getting more attention in environments where teachers do not have access to sufficient funding.

Educators, backed up by years of research, believe that by collecting, displaying and analyzing spatial data, students can solve local problems and foster and drive their learning of geographic phenomena. After using GIS, students are expected to gain digital skills and ways of thinking that can be essential for their future careers, and to be motivated to pursue a career in science and engineering (Bednarz, 2004).

Implementation of GIS software into high school geography classes is, however, a lengthy process that requires a lot of patience and confidence. A teacher may come across four major obstacles: 1) lack of hardware, software or data, 2) lack of teacher training and materials, 3) lack of support for innovations, and 4) lack of time to learn and teach GIS (Kerski, 2003). The biggest issue has come to be the insufficient pre-service and in-service teacher training in geoinformatics and its application. A recent systematic study (Bernhäuserová et al., 2022) has concluded that the majority of the limits were related to teachers and resources.

In our study, we have tried to create strategies that can lead to the successful adaptation of QGIS tools in high school geography education. To reach this goal and answer further questions, we designed ten lectures that focus on the basics of QGIS, drawing inspiration from several official QGIS cookbooks and manuals. In each lesson, we applied a set of the most essential tools. For our study, we chose the qualitative method of design-based research (DBR), which focuses on designing study materials, testing them in classes and coming up with a theory (methodic) that can innovate learning environments (Bakker, 2018). To pilot our ready-to-use lectures and data, we partnered with a 4-year South Moravian high school based in Brno, Czechia, which offered us two classes of second and final-year students. The research lasted three months, during which we taught 12 lessons. Older students tried out lectures 1 to 7, except 6 (1 and 2 at home), and younger students tried lectures 1 to 3 and 8 to 9. After every class, students had to fill out a short questionnaire reflecting on their feelings and experience. They had to do a set of exercises for each lecture as homework and turn it in along with the finished maps. At the end of each trial, the groups were tested on their knowledge. Based on the observations carried out in each class, three categories according to the students' experience were drawn up: those that had no problem following the lecturer's instructions, those that often faced problems, and those that worked individually. Students were asked to identify with one of them and then asked to participate in a voluntary interview, in which their experience would be discussed.

During both trials, students had to bring their own computers, which, for some, caused several issues, from failed installations to technical complications during each lecture. The large number of students in each class (approx. 30) also showed that the lecturer cannot assist every student in such conditions. Students chose different approaches and strategies. Most of them wanted to finish the task and faced no problems. A much smaller number focused on understanding and worked individually. Only a few played with the program and found interest in it. In each group, only one student had previous experience with QGIS. However, most of the students understood every lecture and found its content enjoyable, and in the test, they proved to have learned the basics of the program. If it were up to them, they would implement GIS in the geography curriculum, change the tempo of the lectures (to progress more slowly) and divide the classes into smaller groups, which would benefit both parties. The older students were less motivated to participate; they were used to more passive classes and did not have enough free time to focus on anything except their graduation exam. Younger students were easier to motivate; more of them were interested in geography and had more time for homework. Both groups produced unique maps, which display their gradually gained cartography skills and knowledge. They advise anyone interested in learning QGIS to have enough patience, gather good learning materials (referring to the ones we made) and work on a computer they know very well.

Academic Track
UBT E / N209 - Floor 3
15:00
30min
Building heights: From open data to open maps
Yunzhi Lin

In the US, less than 20% of OpenStreetMap (OSM) buildings have a height tag (less than 10% globally). Providing buildings with height tags helps several use cases, including 3D map visualization. At Meta, we have begun using open mapping data to estimate building heights and provide them back to the community. At the end of 2022, we used data from city GIS departments to estimate millions of heights and release them to the public through the Daylight Map Distribution (https://daylightmap.org/2022/12/02/building-heights.html). In 2023, we are using publicly available USGS/3DEP aerial lidar and releasing the results to the public through the Overture Maps Foundation, processing millions of square kilometers. This talk will cover the challenges, algorithm, QA process, and accuracy metrics from this effort. It is our hope that over the course of the year, we can estimate and publish heights for the majority of the buildings in the US and begin work on non-US open data sources as well.

Open Data
UBT C / N110 - Second Floor
15:00
30min
Investigating war crimes, animal trafficking, and more with open source geospatial data (encore)
Logan Williams

At Bellingcat, a non-profit investigative organization in the Netherlands, we research war crimes, find tiger smugglers, monitor environmental degradation and track extremist hate. To do this, we use "open sources", including public databases, social media posts, and a wide range of geospatial data and tools. The use of these new online sources has dramatically changed investigative journalism and humanitarian accountability research in the past five years, and there remains tremendous potential for further development, especially in the geospatial realm.

In this talk, Bellingcat data scientist Logan Williams will present case studies from our research to illustrate how invaluable open source geospatial tools and data are for "open source" investigative research. Some of the most useful tools for investigators are designed for very different purposes, from academic meteorology to outdoor recreation. Additionally, some of Bellingcat's own FOSS geospatial tools, based on OpenStreetMap and Copernicus satellite data, will be showcased. Finally, the talk will discuss opportunities for deepening the connections between the open source geospatial community and the open source investigation community.

By popular demand this is an encore of the talk held on 28.06 @ 15:00

Use cases & applications
Lumbardhi
15:00
30min
Kobo Toolbox Automation with Geonode for Risk Management
Walter Shilman

Based on the implementation of a set of forms in Kobo Toolbox, an information flow was created for the Fire Management Commission of the Argentine Republic to integrate fire reports from the field (online/offline) in a simple way, together with their different stages of evolution. The automated ingestion into GeoNode, acting as geospatial data manager, allows integration with weather forecast data, near-real-time information, fire incidences, hot-spot detection and predictive fire indexes.
The integration is done with the Airflow tool, which guarantees integration and monitoring of the information flows, simplifying the process during incidents.
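
As a rough sketch of what such an Airflow flow can look like (the endpoints, tokens and the GeoNode upload step are placeholders, not the project's actual configuration):

    import pendulum
    import requests
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def pull_kobo(**_):
        # KoboToolbox exposes form submissions through its REST API (asset UID assumed).
        r = requests.get(
            "https://kf.kobotoolbox.org/api/v2/assets/<asset_uid>/data/",
            headers={"Authorization": "Token <kobo-api-token>"},
        )
        r.raise_for_status()
        return r.json()["results"]

    def push_geonode(ti, **_):
        reports = ti.xcom_pull(task_ids="pull_kobo")
        # Here the reports would be written to PostGIS / uploaded via GeoNode's API.
        print(f"{len(reports)} fire reports ready for GeoNode")

    with DAG(
        dag_id="fire_reports_ingestion",
        schedule_interval="*/15 * * * *",  # poll every 15 minutes
        start_date=pendulum.datetime(2023, 1, 1, tz="UTC"),
        catchup=False,
    ) as dag:
        pull = PythonOperator(task_id="pull_kobo", python_callable=pull_kobo)
        push = PythonOperator(task_id="push_geonode", python_callable=push_geonode)
        pull >> push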

Use cases & applications
UBT C / N111 - Second Floor
15:00
30min
Mastering Security with GeoServer, GeoFence, and OpenID
Andrea Aime, Alessio Fabiani

The presentation will provide a comprehensive introduction to GeoServer's own authentication and authorization subsystems. The authentication part will cover the various supported authentication protocols (e.g. basic/digest authentication, CAS, OAuth2) and identity providers (such as local config files, database tables, and LDAP servers). It will also cover the recent improvements implemented with the OpenID integrations and the refreshed Keycloak integration.

It will explain how to combine various authentication mechanisms in a single comprehensive authentication tool, as well as provide examples of custom authentication plugins for GeoServer, integrating it in a home-grown security architecture. We’ll then move on to authorization, describing the GeoServer pluggable authorization mechanism, and comparing it with an external proxy-based solution. We will explain the default service and data security system, reviewing its benefits and limitations.

Finally, we’ll explore the advanced authorization provider, GeoFence. The different levels of integration with GeoServer will be presented, from the simple and seamless direct integration to the more sophisticated external setup. We’ll close by exploring GeoFence’s powerful authorization rules using:

  • The current user and its roles.
  • The OGC services, workspace, layer, and layer group.
  • CQL read and write filters.
  • Attribute selection.
  • Cropping raster and vector data to areas of interest.
State of software
UBT C / N109 - Second Floor
15:00
30min
OpenMapTiles - vector tiles from OpenStreetMap & Natural Earth Data
Tomáš Pohanka

OpenMapTiles is an open-source set of tools for processing OpenStreetMap data into zoomable and web-compatible vector tiles for use as highly detailed base maps. These vector tiles are ready to use in MapLibre, Mapbox GL, Leaflet, OpenLayers, and QGIS, as well as in mobile applications.

The Dockerized OpenMapTiles tools and the OpenMapTiles schema are continuously being upgraded by the community (simplification, performance, robustness). The presentation will demonstrate the latest changes in OpenMapTiles. The last release of OpenMapTiles greatly enhanced cartography and map styling possibilities, especially the enrichment of Points of Interest and improvements to the land use and land cover layers. The new version of Natural Earth brought updated data to the upper zoom levels, and a new OSM OpenMapTiles style shows all features in well-known colors for vector tiles. OpenMapTiles is also used for generating vector tiles from government open data provided by Swisstopo.

State of software
UBT F / N212 - Floor 3
15:00
30min
Spatial Analysis with the CARTO Analytics Toolbox
Víctor Olaya

The CARTO Analytics Toolbox (AT) is a collection of spatial functions that add spatial capabilities to Data Warehouses. At the moment, BigQuery, Snowflake, Redshift and PostgreSQL versions are available.

This talk will show some of the main functions of the AT, and discuss some examples of spatial data analysis performed in different DWs. Special emphasis will be put on the functionality related to spatial indexes, particularly H3 and Quadbin.

The Analytics Toolbox functions are also the building blocks for other tools, both from CARTO and outside of CARTO, which will be briefly introduced as well.
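
For readers new to spatial indexes, a minimal sketch of H3 binning with the Python h3 package; this mirrors conceptually what the AT's in-warehouse H3 functions do, but it is not CARTO's API (h3 v3 naming shown; v4 renames geo_to_h3 to latlng_to_cell).

    from collections import Counter
    import h3

    # (lat, lng) sample points; real analyses run over warehouse tables.
    points = [(42.6629, 21.1655), (42.6631, 21.1660), (42.6800, 21.1500)]

    # Assign each point to an H3 cell at resolution 9 and count points per cell.
    counts = Counter(h3.geo_to_h3(lat, lng, 9) for lat, lng in points)
    print(counts)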

State of software
Outdoor Stage
15:05
15:05
5min
Teaching Geographic Information Science concepts with QGIS and the Living Textbook – towards a sustainable and inclusive Distance Education
Andre da Silva Mano

In recent years, the need for distance education solutions has been a point of attention for the Faculty ITC of the University of Twente (The Netherlands). Starting in 2017, a fully online program spread over nine months offered an alternative path to start an MSc in Geo-Information Science and Earth Observation. As using proprietary software is more difficult in distance courses, the focus shifted towards open-source alternatives. The experience and lessons learned came to their full potential when, in 2020, many students could not travel due to the travel restrictions imposed by the COVID pandemic. In response, ITC offered the fully online course Principles and Applications of Geographic Information Systems and Earth Observation as the first quartile of what is normally a fully on-campus MSc program. The course was developed around four fundamental principles: (1) the course was exercise-led; (2) every concept taught should be demonstrated and operationalized; (3) the number of different software tools should be minimized; (4) the software tools should be inclusive and encourage technological independence. Two open-source tools were selected: the Living Textbook, a digital textbook developed and maintained by us [1], and QGIS to operationalize the concepts. For synchronous communication and interaction, BigBlueButton conferences were integrated into the Learning Management System environment and organized according to time zones, to serve a student population spread across eight time zones.

After running the course, we evaluated the impact of the new set-up on students (satisfaction and performance) and staff (attitude towards open source tools and open courseware). Additionally, we evaluated the impact of the course in strengthening the wider Open Science initiative. Results show that for students, both satisfaction levels and attainment of the course's learning outcomes were high. For the teachers, the feedback was generally positive, highlighting the importance of using flexible and inclusive tools. The courseware developed for the course is now offered to the Open Science community as open courseware [2]. It is the basis for having the Faculty recognized as a QGIS Certified Organization, thus strengthening the relationship between academia and FOSS4GIS, particularly QGIS.

Internally, this experience brought essential insights into successful online course design. These include, but are not limited to, (A) consistency: the tools and support materials should remain the same throughout the course; and (B) accessibility: the tools used should not have any accessibility barriers, especially when it comes to licenses, but also when it comes to imposing operating system platforms or assuming file format preferences. Important results include a change in the teaching staff's attitude towards a more aware and confident use of FOSS4GIS, which resulted in a faculty-wide paradigm shift where FOSS4GIS is now the primary choice for teaching. Finally, on a larger plane, ITC's commitment to adopt and contribute to the development of Open Source Software is an essential element of its Open Science agenda.

[1] https://www.itc.nl/about-itc/organization/resources-facilities/living-textbook/
[2] https://principles-and-applications-of-rs-and-gis.readthedocs.io/en/latest/

Academic Track
UBT E / N209 - Floor 3
15:10
15:10
5min
Mobile mapping solutions for the update and management of traffic signs in a road cadastre free open-source GIS architecture
Federica Gaspari

Digitization and updating of road network databases represent a crucial topic for the good management of critical infrastructures by public administrations. Similarly to other European countries such as Cyprus (Christou et al., 2021), since 2001 Italian road-owning agencies have been required by the Ministry of Infrastructure and Transport to build and maintain a road cadastre, i.e., a mapping inventory of their road networks. Such an architecture should include georeferenced information about streets as well as all ancillary elements regulated by road regulations, ranging from safety and protection assets to traffic signs. In particular, due to the high frequency of new signal installations and substitutions, traffic signs require a well-structured, flexible and efficient workflow for collecting and manipulating georeferenced data.

In agreement with the official national requirements, in 2019 the Province of Piacenza adopted and implemented a digital cadastre with GIS and WebGIS functionalities built on top of free and open-source software, using PostgreSQL as the Database Management System and QGIS for the manipulation of geodata. Such a software infrastructure ensures flexibility of use as well as the possibility to expand its functionalities with other easy-to-use open-source applications within the architecture (Gonzalez Alba et al., 2019; Gharbi & Haddadi, 2020). In this framework, this work illustrates a case study of a flexible and low-cost mapping methodology for documenting the current state of traffic signs. Indeed, mobile applications can replace the old procedure of documenting element installation on paper, which implied the risk of transcription errors as well as loss or deterioration of the original survey document.

Before defining the required steps of the mobile mapping, it was crucial to understand how traffic signs are modelled inside the adopted DB model. Such elements are implemented through a one-to-many relationship between an entity representing the sign holder (parent table) and one for the signs themselves (child table). In this way, it is possible to collect and manage information about each sign individually (both main and supplementary ones), always linked to its support pole.

Together with the road cadastre responsible, an analysis was conducted to understand the specific needs for the application and the types of users involved in the in-situ survey process. This phase resulted in the choice of two possible open-source solutions to be tested and compared in terms of compatibility, usability by users with different technical backgrounds, integration with the existing infrastructure and possibility of customisation: QField, because of its native compatibility with QGIS libraries, and ODK Collect, thanks to its simplified graphical user interface (GUI) that resembles commonly used data collection forms without a visible GIS GUI.

Differences between the two applications were evaluated across the entire workflow. For instance, QField directly inherits the original QGIS attribute table, while ODK Collect requires the definition of each attribute of the form. Peculiarities in implementing 1-n relationships and widget formats were identified too, aiming to understand the reproducibility of both procedures. Once the form design was finalised for both applications, a guided field survey was conducted in order to train new users and to test the usability of the mobile mapping solutions. For this purpose, a series of test sites was chosen, identifying roads to be surveyed with different features or conditions. A diverse sample of test users was involved in the data collection activity, ranging from people with no previous experience in geospatial technologies to GIS technicians.

Finally, the data collected with both applications were reviewed in a QGIS environment in a validation phase aimed at identifying differences between the datasets, their completeness, their positional accuracy and their coherence with a ground truth represented by photos of the corresponding traffic signs taken in the field with mobile devices. First, the validation consisted of checking whether the mapped elements were located within buffers (of 5, 25 and 50 meters) calculated along the surveyed streets, and then evaluating the coherence between the street code recorded for each element and the code of the roads in whose buffer the sign is included. A similar approach was adopted for comparing the municipality value associated with each sign to the administrative boundaries within which it falls. A semantic validation of the traffic sign types documented with the mobile mapping was conducted by comparing values with what was depicted in photos taken in the field. The entire validation routine was automated as much as possible with Python scripts using the PyQGIS library. All the validation scripts, together with a sample dataset, will be included in a GitHub repository in order to make them openly reusable and adaptable to other specific project needs. The synchronization of the collected data with the original main database was evaluated too, comparing different approaches involving plugins or automatic scripts.
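
A minimal sketch of the buffer-based coherence check, using GeoPandas rather than the authors' PyQGIS scripts; file, layer and column names are assumptions.

    import geopandas as gpd

    roads = gpd.read_file("roads.gpkg").to_crs(epsg=32632)            # metric CRS
    signs = gpd.read_file("signs_collected.gpkg").to_crs(epsg=32632)

    for distance in (5, 25, 50):
        buffered = roads.copy()
        buffered["geometry"] = roads.buffer(distance)
        # Keep signs that fall within a road buffer...
        joined = gpd.sjoin(signs, buffered, predicate="within", how="inner")
        # ...and check that the street code recorded on field matches the buffered road.
        coherent = joined[joined["street_code_left"] == joined["street_code_right"]]
        print(f"{distance} m buffer: {len(coherent)}/{len(signs)} signs coherent")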

In order to evaluate user experiences with the different mobile applications, a LimeSurvey feedback form was provided to the users who tested the tools in the field. The form was designed to collect insights on the different steps of the workflow (form design, data collection and post-processing), tracking and evaluating possible differences between users with different backgrounds and no previous knowledge of geospatial concepts. This highlighted potentials and issues linked to the adoption of QField or ODK Collect for traffic sign mapping.

This work aims to present a case study on the adoption of a mobile mapping solution in the field of public administration, understanding the potentials and limitations of these possible approaches, also in terms of introducing new users to FOSS4G applications. For transparency, the entire workflow is being documented in a dedicated GitHub repository with informative guides, a QGIS demo project, ODK form definition files and all the code adopted for validation and synchronization purposes.

Academic Track
UBT E / N209 - Floor 3
15:15
15:15
5min
COMTiles: a case study of a cloud optimized tile archive format for deploying planet-scale tilesets in the cloud
Markus Tremmel

Motivation

The state-of-the-art container formats for managing map tiles are the Mapbox MBTiles specification and the OGC GeoPackage standard. Since both formats are based on an SQLite database, they are mainly designed for block-oriented, POSIX-conformant file system access. This design approach makes these file formats inefficient to use in a cloud native environment, especially in combination with large tilesets. To deploy an MBTiles database in the cloud, the tiles must be extracted and either uploaded individually to an object storage or imported into a cloud database and accessed by an additional dedicated tile server. The main disadvantages of both options are the complex deployment workflow and the expensive hosting costs. The Cloud Optimized GeoTIFF (COG) format already solves this problem for providing large satellite data in the cloud, creating a new category of so-called cloud optimized data formats. Based on the concepts of this type of format, geospatial data can be deployed as a single file on a cheap and scalable cloud object storage like AWS S3 and accessed directly from a browser, without the need for a dedicated backend. COMTiles adapts and extends this approach to provide a streamable and read-optimized single-file archive format for storing raster and vector tilesets at planet scale in the cloud.

Approach

The basic concept of the COMTiles format is to create an additional streamable index which stores the offset and size of the actual map tiles in the archive as so-called index entries. In combination with a metadata document, the index can be used to define a request for a specific map tile in an archive stored on a cloud object storage, based on HTTP range requests. The metadata are based on the OGC "Two Dimensional Tile Matrix Set" specification, which enables the usage of different tile coordinate systems. To minimize the amount of data transferred and to optimize decoding performance, a combination of two different approaches for the index layout is used. As lower zoom levels are accessed more frequently and the number of tiles is manageable up to a certain zoom level (0 to 7 for a planet-scale tileset), all index entries for these levels are stored in a root pyramid and retrieved at once when the map is initially loaded. To minimize its size, the root pyramid is compressed with a modified version of the RLE V1 encoding of the ORC file format. For lazy loading portions of the index at higher zoom levels, index fragments are used. To enable random access to the index without additional requests, the index entries are bitpacked per fragment with a uniform size. Since the data are only lightly compressed, the index entries can also be stream-decoded and processed before the full fragment is loaded. To further minimize the number of HTTP requests, the queries for the index fragments and tiles can be batched, as they are both ordered on a space-filling curve like the Hilbert curve.
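
The core access pattern this enables can be sketched in a few lines: once an index entry (offset, size) for a tile is known, the tile is fetched from object storage with a single HTTP range request. The URL and offsets below are placeholders; a real COMTiles client first decodes the metadata and index.

    import requests

    def fetch_tile(archive_url: str, offset: int, size: int) -> bytes:
        """Fetch one tile's byte range from a COMTiles archive on object storage."""
        headers = {"Range": f"bytes={offset}-{offset + size - 1}"}
        r = requests.get(archive_url, headers=headers)
        r.raise_for_status()  # expect 206 Partial Content
        return r.content

    tile = fetch_tile("https://example.com/planet.comt", offset=1024, size=4096)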

Results

One advantage that became obvious during the evaluation of COMTiles is the simplified workflow of deploying large tilesets. As only a single file must be uploaded to a cloud storage and no dedicated tile backend has to be set up, COMTiles can also be deployed by non-GIS experts in a quick and easy way. During evaluation, the main hypothesis could be confirmed: COMTiles can be hosted on a cloud storage at only a fraction of the costs compared to using a dedicated tile backend or an individual tile deployment. To determine the actual hosting costs, a planet-scale OSM tileset 90 gigabytes in size was deployed on a Cloudflare R2 storage and accessed with 35 million tile requests. With the pricing plans of Cloudflare at the time of writing, a cost of only $1.35 per month was incurred for the specified deployment. In this context, the tile batching approach turned out to be an additional effective way of reducing the number of tile requests and therefore the costs. For example, when displaying a map in fullscreen mode, the number of requests could be reduced by up to 80% on an HD display and up to 90% on a UHD display. In terms of user experience, test users rated the additional latency for the index requests as negligible, especially when an additional CDN was used. Testing COMTiles against PMTiles, another cloud-optimized tile archive solution, was performed using two different map navigation patterns to measure the differences in the number of requests, data size transferred, and decoding performance.