A Mapathon is a coordinated mapping event, especially connected to the OpenStreetMap project. It is often held indoors ("armchair mapping") but can also be an outdoor or combined activity. People come together in a place, or online, to improve OpenStreetMap data, usually working together on an emergency project provided by the HOT Tasking Manager.
A Mapathon is a great opportunity for volunteers to digitally connect and create urgently needed data for communities around the world, so that local and international decision-makers can use these maps and data to better respond to crises affecting these areas. Up-to-date maps are important for the success of many humanitarian initiatives around the world in responding to disasters as soon as possible, helping to save lives.
This Mapathon will be held in Florence on 23 August 2022 and is co-organized by HOT, OSGeo, the OSM Foundation, and YouthMappers.
Join the Humanitarian OpenStreetMap Team staff and volunteers to support humanitarian mapping activities around the world. The session will be an organized mapping activity, focusing on an active project which requires mapping support from the worldwide OpenStreetMap community. The mapathon is open to all, including people without previous OpenStreetMap experience. It is suggested that you join with a laptop; however, it is also possible to contribute with other mobile tools.
YouthMappers is a global movement of university students addressing development challenges and community resilience using public geospatial tools and data. Meet the student leaders effecting change in their communities across the globe and follow the work of a student-led project in Sierra Leone supporting rural electrification for expanded electricity access. Hear the voices of YouthMappers members who have not only built their mapping skills through YouthMappers but have grown as global citizens. This short documentary tells their stories and will be screened for the first time in Florence on 22 August at 3:30pm in Auditorium A of the University complex in Viale Morgagni.
- 3:30 pm setup / preparations to begin
- 3:45 pm About YouthMappers (Solis, Zeballos, Stokes)
- 4:00 pm Film Showing
- 4:30 pm Panelists of YouthMappers
- 5:00 pm Launch of the Book on SDGs
- 5:15 pm adjourn
- 5:30 pm all take down is finished / group is departed
Opening session with institutional greetings.
Increasingly ubiquitous open spatial technologies offer the opportunity for new actors to participate in creating knowledge about the places where they live and work, and where they navigate the shocks and stresses of an uncertain world. This presentation offers insights about university student engagement in open mapping through the experiences of YouthMappers around the world. This inclusive international network of youth-led, faculty-mentored, and technology-enabled chapters on more than 300 campuses in 65+ countries works together to organize, collaborate, and implement mapping actions that respond to needs around the globe. Specific examples will illuminate how students are creating and using spatial information that is collected with open software tools and made publicly available through open platforms, and in turn inform us about both the power and the limits of open spatial technologies in the hands of students working to build a more resilient, equitable, and sustainable future.
Nowadays, we observe a rise in publicly accessible Earth Observation (EO) data. Along with it, there are more and more EO data providers, each potentially having a different data access policy. This difference is visible at various levels: in data discovery (CSW, more or less customized OpenSearch, etc.), in product access (object storage, downloads, direct file system access, etc.), in storage structure, and in authentication mechanisms (OAuth, JWT, basic auth, ...). All these different technologies add a knowledge overhead for a user (end user or application developer) wishing to take advantage of these data. EODAG was designed to solve this problem.
EODAG (Earth Observation Data Access Gateway) is a command line tool and a Python package for searching and downloading remotely sensed images while offering a unified API for data access regardless of the data provider.
It gives you an easy way to access products from more than 10 providers, with more than 50 different product types (Sentinel 1, Sentinel 2, Sentinel 3, Landsat, etc.) that can be searched and downloaded.
EODAG has the following primary features: search and download Earth Observation products from different providers with a unified API. It is both a Command Line Tool and a Python Package. It supports STAC API and Static STAC catalogs. It can run as a STAC API REST server to give access to a provider’s catalog. New providers can be added with a configuration file or by extending EODAG with plugins.
The EODAG ecosystem includes EODAG-Labextension, a JupyterLab extension for searching and browsing remotely sensed imagery directly from JupyterLab. It also includes EODAG-Cube, which provides direct access to the data and streams it into an xarray Dataset.
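As a rough illustration of the unified-gateway idea described above, the sketch below shows how per-provider protocol differences can be hidden behind plugins exposing a single search() call. This is plain Python invented for this abstract, not EODAG's actual API or class names:

```python
# Illustrative sketch (NOT the real EODAG API): a minimal unified gateway
# where each provider plugin adapts its own protocol to a common interface.

class Provider:
    """Base plugin: subclasses adapt a specific access protocol."""
    name = "base"

    def search(self, product_type):
        raise NotImplementedError

class StacProvider(Provider):
    name = "stac_demo"

    def search(self, product_type):
        # A real plugin would issue a STAC API request here.
        return [{"id": f"{product_type}_001", "provider": self.name}]

class OpenSearchProvider(Provider):
    name = "opensearch_demo"

    def search(self, product_type):
        # A real plugin would build an OpenSearch query here.
        return [{"id": f"{product_type}_A", "provider": self.name}]

class Gateway:
    """One entry point, whatever the backend protocol is."""

    def __init__(self):
        self.providers = {}

    def register(self, provider):
        self.providers[provider.name] = provider

    def search(self, product_type, provider=None):
        targets = [self.providers[provider]] if provider else self.providers.values()
        results = []
        for p in targets:
            results.extend(p.search(product_type))
        return results

gw = Gateway()
gw.register(StacProvider())
gw.register(OpenSearchProvider())
products = gw.search("S2_MSI_L1C")  # same call across both providers
```

The real EODAG applies this pattern at a much larger scale, with per-provider plugins for search, authentication, and download, and new providers added through configuration files.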
Welcome to the Open Source Geospatial Foundation, proud host of FOSS4G and advocate for free and open source geospatial software everywhere. This is a call out to open source software developers: please join OSGeo and help us help you!
Join OSGeo today:
Even just listing your project on the osgeo.org website is a great first step. Help us promote your technology so users can discover and enjoy your software.
The OSGeo “community program” gives project teams a chance to join the foundation, with an emphasis on supporting innovation and new projects. The foundation provides some direct support and assistance, along with endorsement and recognition from our board.
For established projects please join our “incubation program” to be recognized for excellence and as a full OSGeo committee.
Unlike other foundations, OSGeo does not require that you give up or transfer any Intellectual Property; we simply ask that you be spatial, open-source, and open to participation.
This presentation gives clear instructions on how to join OSGeo, and representatives from recent successful projects will be on hand to answer your questions.
The talk will focus on how Map Kibera has empowered different counties in Kenya to map their projects using OpenStreetMap, Kobo Collect, and ODK. During the exercise, Map Kibera collaborated with the World Bank to produce large maps that are used during participatory budgeting meetings, allowing members of the community to decide which projects they want implemented. Map Kibera's approach is to empower the youth and selected county officials with new mapping skills and methodologies. Members of the community are then able to make informed decisions by seeing what they already have and what might be missing. Map Kibera also helped the four selected counties to develop a website that shows the status and the budget allocation for each and every project mapped. This helps promote transparency and accountability within the counties. Before the mapping happened, people would be asked to propose the projects they wanted, but this was hard without knowing what already existed. Now they can say, for example: "We have a hospital; we want a school."
Maplandscape is a stack of open-source geospatial applications designed to enable mapping of agricultural landscapes and farm systems. It supports large-team, in-the-field mobile data collection, provides tools for data syncing and management, and enables easy visualisation and querying of spatial information for decision making and reporting. The workflow has been developed and deployed in Tonga through a collaboration between universities in Australia and the South Pacific, and Tonga's Ministry of Agriculture, Food, Forests and Fisheries (MAFF). Maplandscape is currently being used in Tonga to map crops, livestock, agroforestry systems, and farm management practices; over 11,000 farms across four island groups have been mapped so far. This information has been used to inform agricultural planning and resource allocation, disaster response, land utilisation assessment, tracking of land use changes, and monitoring of the condition of key commercial crops.
The Maplandscape workflow is based on QField for in-the-field data collection and QFieldCloud for data syncing, storage, and user authentication and management. Using QField mobile GIS, key agricultural landscape features (e.g. crop parcels, paddocks, fallow land) can be spatially mapped, and rich attribute information can be captured through various widgets and flexible and complex form logic, with the support of reference geospatial layers. Three cloud-based applications have been developed that build on top of the QFieldCloud API to provide geospatial data visualisation and analytics tools. These applications provide differing and complementary functionalities, and facilitate quick analysis, publishing, and reporting of data collected using QField and stored using QFieldCloud. The apps are built using Shiny, Leaflet, ggplot, and DataTables, and are deployed using ShinyProxy and Docker containers in swarm mode. The applications use the QFieldCloud API to authenticate users and retrieve data, and an R package has been developed to handle this interaction within Shiny apps. The goal of these applications is to speed up the process by which data collected using QField informs agricultural monitoring, planning, and decision-making activities. A summary of these geospatial data visualisation and analytics applications is provided below.
maplandscape-view is a web app that allows users to query and view spatial data on interactive maps and data tables, and to generate reports comprising cartographic outputs, charts, and summary tables directly from QFieldCloud data. maplandscape is a web app that allows users to specify and apply custom geospatial data querying, analysis, and visualisation tasks to their QFieldCloud data in a web browser with a simple UI. This application enables users without a GIS background to extract information from, and analyse, rich datasets collected in the field using QField; for example, agricultural officials in Tonga used this application to generate village- and district-level crop area summary tables for their annual report. Finally, maplandscape-admin provides tools for users to manage their QFieldCloud projects and accounts.
Mining industry professionals are in constant need of simple and efficient software solutions for drillhole visualization, management, and editing. Although several open source solutions partially fulfill this need, none has propagated into common professional use.
In partnership with a team of mining industry leaders including Orano, Evolution Mining, Sandfire Resources, Kenex, the University of Western Australia, NordGold, GoldSpot, and the CEA, Oslandia formed a consortium in 2021 to meet this demand.
Oslandia has since then been developing a high performance drillhole data visualization QGIS plugin that combines 3D, cross-section, map, and log views into a fully synchronous system. Moreover, users are able to connect OpenLog directly to their existing drillhole databases such as Acquire, Datashed or Geotic.
This talk will succinctly present the functionalities of OpenLog as well as its primary use cases, contribution to QGIS 3D data visualization technology, development roadmap, and future prospects.
The SpatioTemporal Asset Catalog (STAC) specification is a common language for describing geospatial information that is flexible enough to extend across domains and use cases. In this talk, we walk through best practices for building STAC catalogs and using STAC extensions, using real world examples. These best practices are informed by documentation, conversations with STAC contributors, and discussions within the wider community. We survey the ecosystem of open-source STAC software, which includes libraries and tools written in Python, Node.js, and more. We show examples of reading, modifying, and writing STAC catalogs with a selection of software, including PySTAC and stactools, and we show which metadata to include in your STAC objects to ensure interoperability with powerful tools like xarray and pandas. Whether you are new to the STAC ecosystem or an experienced contributor, this talk will provide you with the context and tools you need to build your best STAC!
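For readers new to STAC, a minimal Item carrying the interoperability-critical metadata might look like the following. Field names follow the STAC 1.0.0 core specification; the id, footprint, datetime, and asset URL are invented example values:

```python
import json

# A minimal STAC Item assembled by hand. The field names are those required
# by the STAC Item spec; all concrete values below are made-up examples.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-scene-20220823",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[11.1, 43.7], [11.4, 43.7], [11.4, 43.9],
                         [11.1, 43.9], [11.1, 43.7]]],
    },
    "bbox": [11.1, 43.7, 11.4, 43.9],
    "properties": {
        # "datetime" is the one property the core spec requires.
        "datetime": "2022-08-23T10:00:00Z",
    },
    "links": [],
    "assets": {
        "visual": {
            "href": "https://example.com/scene.tif",
            # standard media type for a Cloud-Optimized GeoTIFF asset
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            "roles": ["visual"],
        }
    },
}

# The keys every valid Item must carry:
required = {"type", "stac_version", "id", "geometry", "bbox",
            "properties", "links", "assets"}
assert required <= item.keys()
doc = json.dumps(item, indent=2)  # ready to write out as item.json
```

Populating `geometry`, `bbox`, and a precise `datetime` is what lets downstream tools like xarray and pandas slice catalogs spatially and temporally without opening the assets themselves.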
The European Centre for Medium-Range Weather Forecasts (ECMWF) is an intergovernmental organisation that produces global numerical weather predictions and other data for its Member and Cooperating States and the broader community. It hosts one of the largest meteorological data archives in the world. ECMWF supports the open data community by providing data through its Public Datasets program and Open Charts. Additionally, the Centre has a long history of and extensive expertise in developing and providing software to process and visualize meteorological data.
In concert with these efforts, we developed SkinnyWMS – a lightweight Web Map Service for meteorological data. SkinnyWMS is currently used in the Copernicus Climate Data Store (CDS) and Germany’s Meteorological Service (DWD) Geoportal. It provides out-of-the-box interactive visualisation for a large set of meteorological parameters. It offers built-in support for data stored in GRIB (WMO standard) and NetCDF (OGC standard) formats, which are commonly used in meteorology, climatology, and oceanography.
SkinnyWMS is written in Python and is based on ECMWF's existing free and open source software ecCodes and Magics. It is free and open source software available under the Apache License 2.0. SkinnyWMS' code is hosted on GitHub, and it is available as an Anaconda package, from PyPI, and as a Docker image from Docker Hub.
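To illustrate the protocol a WMS front end like this speaks, the snippet below builds a standard WMS 1.3.0 GetMap request URL. The endpoint and layer name are placeholders chosen for the example; the query parameters themselves are those defined by the WMS 1.3.0 specification:

```python
from urllib.parse import urlencode

# Placeholder endpoint and layer name; any WMS 1.3.0 server accepts the
# same GetMap parameters.
base_url = "https://example.com/wms"
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "2m_temperature",  # layer name is an assumption
    "styles": "",
    "crs": "EPSG:4326",
    # WMS 1.3.0 with EPSG:4326 uses lat/lon axis order: miny,minx,maxy,maxx
    "bbox": "30,-30,75,45",
    "width": 800,
    "height": 600,
    "format": "image/png",
}
getmap_url = f"{base_url}?{urlencode(params)}"
```

Note the axis-order quirk: in WMS 1.3.0 the BBOX follows the CRS axis order (latitude first for EPSG:4326), a common source of blank maps when clients assume the 1.1.1 lon/lat convention.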
The GeoNetwork-opensource project is a catalog application facilitating the discovery of resources within any local, regional, national or global "Spatial Data Infrastructure" (SDI). GeoNetwork is an established technology - recognized as an OSGeo Project and a member of the FOSS4G community for over a decade.
The GeoNetwork team would love to share what we have been up to in 2022!
The GeoNetwork team is excited to talk about the different projects that have contributed new features to the software during the last twelve months. Our rich ecosystem of schema plugins continues to improve, with national teams pouring fixes, improvements, and new features into the core application.
We will also talk a bit about the health and happiness of the GeoNetwork opensource team, the progress of our main branches (3.12.x and 4.0.x), and the release schedule.
Attend this presentation for the latest from the GeoNetwork community and this vibrant technology platform.
- COG: Cloud-Optimized GeoTiff ( https://www.cogeo.org )
- COPC: Cloud-Optimized Point Clouds ( https://copc.io )
- FlatGeobuf ( https://flatgeobuf.org )
- GeoParquet ( https://github.com/opengeospatial/geoparquet )
- STAC: SpatioTemporal Asset Catalog ( https://stacspec.org )
- Zarr ( https://zarr.readthedocs.io )
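A common thread of the formats listed above is a layout that lets clients read a small index first and then fetch only the byte ranges they need, typically via HTTP range requests. The toy example below, invented for illustration and using an in-memory blob in place of HTTP, sketches that access pattern:

```python
import json

# Toy "cloud-optimized" layout: a 4-byte header length, a JSON tile index,
# then the concatenated tile payloads. Real formats (COG, COPC, Zarr, ...)
# differ in detail but share the "index first, then ranged reads" idea.

def make_file():
    """Build the toy blob: index of (offset, length) + tile bytes."""
    tiles = {"0-0": b"AAAA", "0-1": b"BBBB", "1-0": b"CCCC"}
    body = b""
    index = {}
    for key, data in tiles.items():
        index[key] = (len(body), len(data))  # offset, length within body
        body += data
    header = json.dumps(index).encode()
    return len(header).to_bytes(4, "big") + header + body

def read_range(blob, offset, length):
    """Stand-in for an HTTP GET with a Range: bytes=... header."""
    return blob[offset:offset + length]

blob = make_file()
hdr_len = int.from_bytes(read_range(blob, 0, 4), "big")      # tiny read #1
index = json.loads(read_range(blob, 4, hdr_len))             # tiny read #2
off, ln = index["0-1"]
tile = read_range(blob, 4 + hdr_len + off, ln)               # only one tile
```

Two small reads locate the data; a third fetches exactly the tile requested, so a client never downloads the whole file.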
As a small but scaling new space company, Satellite Vu relies heavily on open source tooling for our image production pipeline, as well as storing our image assets and conducting experiments using thermal data sources. The company prides itself on being early adopters of emerging technologies, particularly as standardization of satellite imagery access and reproducible science are at the core of what we stand for.
In this talk, we’ll give an overview of the main projects we lean on for data engineering, data science, and thermal science, as well as outline the vision for Satellite Vu’s evolving role within the open source community. Specific tools whose use we’ll comment on include:
- STAC, and the related stac-fastapi, for storing and serving image collections
- rioxarray and stackstac for scaling our use of both internal and external cloud native imagery datastores
- The Pangeo stack for running experiments and scaling data processing
- pygeoapi, as a vector data server
I’ll introduce the Satellite Vu public STAC, and talk through how we’re using FOSS4G tools to shorten the development time of new products as well as prepare for the first satellite launch in Q1 2023.
Large geospatial data sets generated by modern remote sensing and environmental modelling provide new opportunities for analysts, scientists, and researchers. However, the size of these data sets can present challenges due to the computation and resource management required for analytics and processing. Current solutions for processing such data sets largely focus on horizontal scaling approaches on, for example, distributed systems such as the cloud, without fully exploiting the opportunities offered by modern computing architecture. Furthermore, the variety of formats and types of geospatial data often results in complex processing workflows composed of multiple tools for reading and writing, transformation, processing, and resource management. We present an introduction, overview, and demonstration of the open-source Geostack framework (gitlab.com/geostack/library). This has been developed to help simplify many common operations, provide economy of code, and transparently take advantage of modern CPU/GPU hardware. We have aimed to provide three main routes to simplify and accelerate geospatial processing: 1) a unified interface to read vector and raster data and interoperate between them, with no software dependencies for common geospatial data formats; 2) treatment of all data as objects independent of geospatial transforms, with transparent resource management through an underlying tile-based caching system, and reprojection and interpolation carried out where needed; 3) extensive use of OpenCL to provide computational acceleration and automatic processing vectorisation on GPUs and multi-core CPUs, as well as allowing user-defined scripts to be executed over these objects. The framework also includes many common geospatial operations as well as several base geospatial solvers (including moving fronts, flow networks, and particle modelling) accelerated using OpenCL.
Geostack is a C++ API with Python bindings; the code examples and demonstrations are presented in Python. The Python bindings are available through conda and are fully interoperable with common Python libraries including numpy, gdal, xarray, netcdf, geopandas, and sqlite, allowing users to adopt as much or as little of the Geostack functionality as required. We present demonstrations of several common geospatial tasks with benchmark comparisons to alternative workflows.
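As a sketch of the tile-based caching strategy mentioned above (plain Python invented for illustration, not Geostack's actual API), tiles can be computed lazily on first access and evicted least-recently-used when the in-memory budget is exceeded:

```python
from collections import OrderedDict

# Illustrative tile cache: lazy computation + LRU eviction. All names and
# the tile representation are invented for this sketch.
class TileCache:
    def __init__(self, compute_tile, max_tiles=4):
        self.compute_tile = compute_tile  # callable (tx, ty) -> tile data
        self.max_tiles = max_tiles
        self.tiles = OrderedDict()        # insertion order tracks recency
        self.computed = 0                 # how many tiles were materialized

    def get(self, tx, ty):
        key = (tx, ty)
        if key in self.tiles:
            self.tiles.move_to_end(key)   # mark as most recently used
            return self.tiles[key]
        tile = self.compute_tile(tx, ty)  # lazy: computed on first access
        self.computed += 1
        self.tiles[key] = tile
        if len(self.tiles) > self.max_tiles:
            self.tiles.popitem(last=False)  # evict least recently used
        return tile

# A trivial stand-in "raster": each 2x2 tile filled with tx + ty.
cache = TileCache(lambda tx, ty: [[tx + ty] * 2] * 2, max_tiles=2)
cache.get(0, 0)
cache.get(0, 0)   # second access served from cache, not recomputed
cache.get(1, 0)
cache.get(2, 0)   # cache full: evicts tile (0, 0)
```

The same structure lets a framework hold only a bounded working set in memory while transparently regenerating (or re-reading) evicted tiles on demand.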
Acquiring and labeling geospatial data for training machine learning models is a time-consuming and expensive process. It is made even more difficult by the lack of specialized open-source tools for dealing with the idiosyncrasies of geospatial data. At Azavea, we have encountered both of these problems before. In this talk, we will present a solution that incorporates our geospatial annotation platform, GroundWork (https://groundwork.azavea.com), with our open-source deep learning framework, Raster Vision (https://rastervision.io), to provide a human-in-the-loop active learning workflow. This workflow allows labelers to immediately see the effect of their created labels on the model’s performance, thus speeding up the labeling-training-labeling cycle and making the connection between the AI and human GIS data labelers easy and seamless.
This talk will extend the hands-on experience introduced in last year's "Human-in-the-loop Machine Learning with GroundWork and STAC" FOSS4G workshop. We will present an enhanced active-learning workflow that allows labelers to train a model and see predictions on the fly as they create labels in GroundWork. The model training and predictions will be handled by Raster Vision. This workflow will give the labelers a clear view of the model's current strengths and weaknesses at all times, and thus allow them to direct their labeling efforts more efficiently. Newly created labels will propagate back to the AI model in real time, and an asynchronous job will continue to refine the model and predictions. This loop is backed by the open-source Raster Foundry (https://rasterfoundry.azavea.com) and Franklin (https://azavea.github.io/franklin) APIs, and is compliant with the STAC (https://stacspec.org) and OGC API - Features (https://www.ogc.org/standards/ogcapi-features) open standards.
OSGeo is an international organization, and its members are people from all over the world.
OSGeo cares about and loves the community. When you move from "other" G software to FOSS4G, you become part of the community.
This community has a range of roles, like users of geospatial software, conference organizers, geospatial software developers, committee members, and many more.
We are a complex organization.
As the second objective of this talk I would like to explain our organization.
The preface of the main objective:
These last two years have been hard times for everyone: we have been locked down, working remotely and unable to interact in person with our colleagues, and some members will probably not be able to meet face to face during FOSS4G 2022 in Florence, Italy.
The main objective:
I would like to share the "autographs & good wishes" from the OSGeo community members that I've met over the seven years I've been participating in the organization.
GeoNode is a web-based spatial content management system built entirely on open source tools, whose purpose is to promote the sharing of data and their management in a simple environment where even users who are not GIS experts can view, edit, manage, and share spatial data, maps, prints, and attached documents.
This presentation provides a summary of new features added to GeoNode in the year leading up to its latest releases, together with a glimpse of what we have planned for next year and beyond, straight from the core developers.
The purpose of this presentation is to introduce attendees to GeoNode's current capabilities and to some practical use cases of particular interest, in order to also highlight the possibilities for customization and integration. Finally, we will provide a summary of new features added up to the latest releases of GeoNode, together with a glimpse of what we have planned for next year and beyond, straight from the core developers.
When using QGIS as a user interface for a PostgreSQL- and PostGIS-based registry database, user experience plays a major role. The data schemas of a registry database can hold a complex set of relations and database objects which can be hard to set up within QGIS. This was the case with a waste soil transportation registry that we developed for the City of Tampere, Finland. The main goal of the registry is to optimize soil transportation from construction sites by communicating and making visible which soil categories are available and needed, where, and when. The registry enables significant savings in transportation costs as well as substantial reductions in climate emissions.
When the relational data model gets complex, you can deploy different strategies for setting up the workflows within QGIS. One possible solution is to use editable views and triggers to enable user-friendly workflows in QGIS. This was the case in our soil registry project. The data model, as it was, would have forced the user to create multiple new features in the database tables when they just wanted to add a single soil transfer from one construction site to another. This was seen as too tedious when repeated constantly. The solution was to create a database view that the user could edit in QGIS, together with database triggers that create the right database features, with the proper data, in the correct database tables.
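The pattern can be demonstrated in miniature with SQLite's INSTEAD OF triggers (the project itself uses PostgreSQL/PostGIS, and all table, view, and column names below are invented for illustration): a single insert into a flat, user-friendly view fans out into the underlying normalized tables.

```python
import sqlite3

# Self-contained demo of the editable-view pattern. PostgreSQL offers the
# same mechanism via INSTEAD OF triggers (or rules) on views.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE transfer (
    id INTEGER PRIMARY KEY,
    from_site INTEGER REFERENCES site(id),
    to_site   INTEGER REFERENCES site(id),
    soil_category TEXT,
    volume_m3 REAL
);

-- What the GIS user sees and edits: one flat, friendly row per transfer.
CREATE VIEW transfer_entry AS
SELECT t.id, f.name AS from_name, s.name AS to_name,
       t.soil_category, t.volume_m3
FROM transfer t
JOIN site f ON f.id = t.from_site
JOIN site s ON s.id = t.to_site;

-- The trigger turns one edit on the view into the right table inserts.
CREATE TRIGGER transfer_entry_ins INSTEAD OF INSERT ON transfer_entry
BEGIN
    INSERT OR IGNORE INTO site (name) VALUES (NEW.from_name);
    INSERT OR IGNORE INTO site (name) VALUES (NEW.to_name);
    INSERT INTO transfer (from_site, to_site, soil_category, volume_m3)
    VALUES ((SELECT id FROM site WHERE name = NEW.from_name),
            (SELECT id FROM site WHERE name = NEW.to_name),
            NEW.soil_category, NEW.volume_m3);
END;
""")

# One user-friendly insert instead of three manual ones:
con.execute("INSERT INTO transfer_entry (from_name, to_name, soil_category, "
            "volume_m3) VALUES ('Site A', 'Site B', 'clean fill', 120.0)")
rows = con.execute(
    "SELECT from_name, to_name, volume_m3 FROM transfer_entry").fetchall()
```

From the editing client's point of view the view behaves like a plain table, while the trigger keeps the normalized schema consistent behind the scenes.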
This presentation seeks to show how the strategy was deployed and what challenges we met during the development.
In the words of Eric Elliott: "All software development is composition: the act of breaking a complex problem down into smaller parts, and then composing those smaller solutions together to form your application."
Inspired by Eric’s work, we have begun to modularize all the things, combining mapping libraries (OpenLayers, MapLibre, et al.) with other open source libraries (Tabulator, Chart.js) for data visualizations.
esbuild has revolutionized the way we compile scripts, and dynamic imports, long touted as the future, are finally available to us with a little help from Skypack.
In addition to tried and tested server side rendering we now have powerful vanilla Web APIs to build application views on the fly and with little reflow, and repaint.
This is a talk for the opinionated JS enthusiast.
Is your government utilizing OpenStreetMap data in their workflows? How can crowdsourcing be leveraged to improve the completeness, freshness, and breadth of government geospatial data? As the world’s largest crowdsourced geospatial database, OpenStreetMap is poised to serve this need. Two years ago, OpenStreetMap US formed a Government Working Group to seek out mutually beneficial relationships between the public and open data communities. As part of this effort, OSM community members and representatives from federal agencies have been investigating solutions for feature collection. This collaboration has led to the development of Public Domain Map, which connects OpenStreetMap and government datasets. Through the Public Domain Map workflow, OpenStreetMap and government open geospatial data becomes more complete, current and readily usable by government agencies and the millions of users relying on both datasets. In this session, I will share the journey of Public Domain Map, how the project is bringing together US federal agencies and open source contributors to meet this goal, and how you can be part of it.
To make the platform possible and meet the objectives set, it was necessary to work with different areas of government involved in urban planning and systems, promoting the opening of data to generate a process that is sustainable over time for the automatic and dynamic interpretation of the Urban Code that governs the constructive and urban behavior of the city of Buenos Aires, which is the basis of this technological solution.
The platform is based on the “Digital Twin” concept, that is, a virtual and digital replica of a city's urban plan. The objective is to test any initiative on this virtual model before its real implementation, in order to reduce costs and risks.
The tool was developed entirely with open source resources and technologies so that other organizations can study, modify and improve its design through the availability of its source code.
Regarding its architecture, the platform works from information extracted from datasets covering the urban fabric and parcels. Then, a set of processing algorithms working with systematized rules, generated from the text of the urban code regulations, processes this information, allowing the generation of 3D graphics for each of the city's parcels. The volumetry of each parcel makes it possible to determine the buildability, the maximum allowed height, and the allowed construction alternatives for each parcel, which are integrated into the platform's viewer.
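As a toy illustration of the kind of rule such a system ends up encoding (the formula, its simplicity, and every value below are hypothetical; the real platform derives far richer regulations from the Urban Code text), a parcel's buildable envelope could be computed from its area, an allowed occupancy ratio, and a maximum height:

```python
# Hypothetical sketch of a rule-based volumetry computation. All parameter
# names and numbers are invented examples, not values from the Urban Code.
def buildable_volume(parcel_area_m2, max_height_m, occupancy_ratio):
    """Maximum buildable envelope: occupiable footprint times allowed height."""
    footprint = parcel_area_m2 * occupancy_ratio
    return footprint * max_height_m

# Example systematized rule: a zone allowing 60% occupancy up to 22.8 m,
# applied to a 500 m2 parcel.
volume = buildable_volume(parcel_area_m2=500.0, max_height_m=22.8,
                          occupancy_ratio=0.6)
```

Evaluating such rules per parcel is what produces the per-parcel 3D volumes that the viewer then renders.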
In the past seven years, the Brazilian Army Geographic Service has been putting effort into migrating its entire geospatial production chain to open source. The first step was the use of PostgreSQL + PostGIS as the primary data storage solution, then the use of open standards such as OGC WMS and WFS for data sharing, then the use of QGIS as the main software for data acquisition, and finally, the development of DSGTools, a QGIS plugin with several tools for data quality control and cartographic finishing.
The Geoinformation Production Management System (GPMS) is the latest addition to our open-source stack. It has two main goals: to automatically manage the distribution of jobs among the staff, and to standardize the workflows, layers, styles, tools, processes, and parameters of each job.
For the first goal, the manager can create a profile for each team member, setting which parts of the workflow they are qualified to execute. When a user asks for a job, the system matches their qualifications with the available jobs in the current project and assigns them the highest-priority job. This, combined with the visualization of live production status in QGIS, helps the manager improve decision-making on resource allocation.
For the second goal, the manager can configure each workflow in detail, setting which database the job will be executed on, which layers the user needs to access, which styles are available for each layer, which QGIS processes the user should run with pre-set parameters, and which resources the user has access to, such as imagery and DEMs. GPMS also handles permission control in the PostgreSQL database based on the job the user is currently executing, and allows spatial filtering, so the user can only work in a spatially defined subset of the data. In this way, when users receive a job, they have everything they need to complete it.
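A minimal sketch of the assignment rule described above (the data structures and names are invented for this example, not GPMS's actual code): among the unassigned jobs the user is qualified for, hand out the one with the highest priority.

```python
# Illustrative job-assignment rule. "priority" uses lower = more urgent;
# all field names below are invented for the sketch.
def assign_job(user_skills, jobs):
    """Return the id of the best matching job and mark it assigned,
    or None if no available job matches the user's qualifications."""
    candidates = [j for j in jobs
                  if not j["assigned"] and j["required_skill"] in user_skills]
    if not candidates:
        return None
    job = min(candidates, key=lambda j: j["priority"])  # most urgent first
    job["assigned"] = True
    return job["id"]

jobs = [
    {"id": 1, "priority": 2, "required_skill": "acquisition", "assigned": False},
    {"id": 2, "priority": 1, "required_skill": "quality_control", "assigned": False},
    {"id": 3, "priority": 1, "required_skill": "acquisition", "assigned": False},
]
first = assign_job({"acquisition"}, jobs)   # job 3: qualified and priority 1
second = assign_job({"acquisition"}, jobs)  # job 1: job 3 is now taken
```

In the real system the same matching step would also attach the job's workflow configuration (layers, styles, processes, spatial filter) before handing it to the QGIS client.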
All job information, such as user, start timestamp, end timestamp, and job parameters, is stored in the system, allowing the automatic generation of metadata compatible with Brazilian standards and visualizations of the project's current state.
As the Brazilian Army Geographic Service has five Centers of Geoinformation, the use of a production management system has helped with the standardization of procedures, since a standard configuration can be defined and replicated between centers to use the same consistent workflow, with the same database schema, the same layers per job, the same QGIS styles, processes, parameters, and so on.
The GPMS is available on GitHub at https://github.com/1cgeo/sap as a Node web service, and requires a QGIS plugin for the client (https://github.com/1cgeo/Ferramentas_Producao) and a QGIS plugin for the manager (https://github.com/1cgeo/Ferramentas_Gerencia).
OSGeo's Google Summer of Code initiative has been an inspiring and motivating platform for new contributors to join OSGeo projects, community projects, guest projects, and incubating projects. In 2022, OSGeo is participating in the Google Summer of Code for the 16th year, which is in itself a great achievement. With this talk, the OSGeo GSoC administrators will try to convey the importance of GSoC for the students and the participating projects. The admins will focus on the development of projects through GSoC and encourage projects to be part of the upcoming GSoC.
Over the years, OSGeo's Google Summer of Code initiative has grown into an initiative full of contributions to geospatial software development. In the last 16 years, many OSGeo projects, comprising incubating projects, community projects, and guest projects, have progressed thanks to the contributions of student developers. Some of these contributors continue to participate as contributors to the projects, and some went on to take on mentoring and organizing responsibilities. This is a true sense of FOSS4G in terms of the individual and collective growth of the developers and the OSGeo community. In this talk, the OSGeo GSoC admin team will recognize the efforts of all the mentors and students involved so far and present the state of GSoC 2022. The admins will also present possibilities for new projects to be part of GSoC with OSGeo as an umbrella organization.
This presentation provides a summary of new features added to deegree in the latest releases of deegree webservices and deegree OGC API together with a glimpse of what we have planned for next year and beyond.
Initiated in 2002, the OSGeo project deegree has developed over the last 20 years into an important building block for Spatial Data Infrastructures (SDIs). With the implementation of the INSPIRE directive fully underway, stable and mature software solutions based on OGC standards such as GML, WFS and WMS are required. One of the goals of the deegree project is to provide implementations of those standards.
In this talk we will focus on the recent improvements available in deegree webservices and on our updated roadmap for full Java 11 support coming with version 3.5. We will show how support for OGC API - Features (Part 1: Core and Part 2) has been implemented and can be used with existing configurations.
Finally, we will outline the direction of the project and the future developments currently planned.
The vision of “Cloud-Native Geospatial” is a new paradigm of performing efficient computing and data access in the cloud in an interoperable way, in order to achieve scalable and repeatable analysis of geospatial data. The last few years have seen major developments in open standards and open software that are helping make this vision possible, supporting fully interoperable end-to-end workflows on remote sensing data, from data discovery to the publishing of derived products.
This talk will present the current state of the SpatioTemporal Asset Catalog (STAC) specifications (stac-spec and stac-api-spec), updates in the published STAC extensions, and the latest community developments around Analysis Ready Data (ARD). We will cover the landscape of currently recommended cloud-optimized file formats for raster, vector, and point-cloud data (COG, Zarr, GeoParquet, COPC). Finally, we will provide recommendations for open-source client software that takes advantage of the emerging geospatial clouds.
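A minimal sketch of the STAC Item structure covered by stac-spec may help ground the discussion; only core fields are used here, and the identifier, bounding box and asset URL are invented for the example:

```python
def make_stac_item(item_id, bbox, datetime_utc, asset_href):
    """Build a minimal STAC Item as a plain dict (core stac-spec fields only)."""
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": item_id,
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],  # ring closes on itself
            ]],
        },
        "bbox": list(bbox),
        "properties": {"datetime": datetime_utc},
        "assets": {
            "data": {
                "href": asset_href,
                "type": "image/tiff; application=geotiff; profile=cloud-optimized",
            }
        },
        "links": [],
    }

item = make_stac_item(
    "S2_example_scene", (11.0, 43.5, 11.5, 44.0),
    "2022-08-24T10:00:00Z", "https://example.com/scene.tif",
)
```

In practice a library such as pystac builds and validates these structures, but the JSON shape above is what a STAC API ultimately serves and searches.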
Climate and vegetation indicators created from Earth observation data provide timely information to analysts and decision makers implementing disaster risk reduction and climate risk mitigation programs. The United Nations World Food Programme’s (WFP) Climate and Earth Observation unit (ClEO) works with a number of Earth observation datasets to measure and monitor climate risks across all of the regions where we work, including 90+ countries globally.
The end users of this information include government institutions such as the meteorological and disaster management agencies, implementers of humanitarian assistance programs, as well as WFP field staff working on programs which build climate resilience through the development of community assets and livelihood support.
To enable the creation and dissemination of monitoring indicators, WFP is in the process of deploying an instance of Open Data Cube with nearly global coverage. Leveraging the power of data cubes to measure key climate and vegetation indicators over space and time, WFP’s Open Data Cube instance will provide free and open access to a wide range of analysis-ready data products. Utilization of this data requires user-facing applications with easy to use and intuitive interfaces. One of the tools developed by WFP to provide more direct access to climate and Earth observation data is PRISM – an open-source software solution which greatly simplifies the integration of geospatial data from various systems. PRISM has been developed to easily integrate data from Open Data Cube deployments using OGC standards – providing a quick tool to display time-series raster data in an interactive dashboard.
During this talk, WFP will present a brief overview of the use cases we address with Earth observation data, the role of our Open Data Cube instance in the organization, and the development of tools and processes to disseminate data for visualization using OGC standards – including the PRISM platform.
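The OGC-standard request a dashboard like PRISM issues to display one slice of a time-series raster can be sketched as a WMS GetMap URL carrying a TIME parameter; the endpoint and layer name below are made up for illustration:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, time, size=(512, 512), fmt="image/png"):
    """Compose a WMS 1.3.0 GetMap URL with a TIME parameter, selecting
    one time step of a time-series raster layer."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
        "TIME": time,  # ISO 8601 instant picks one slice of the series
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url(
    "https://example.org/wms", "rainfall_dekadal",
    (-40.0, -20.0, 40.0, 55.0), "2022-06-01",
)
```

A time-enabled viewer simply varies the TIME value to animate or step through the series.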
This talk will describe some of the tools and tricks used by the team at Sparkgeo to gather, clean, and represent global resource data (minerals, wood, and water) for use in video game development. One of the resources collected by the team used the STAC package for the Joint Research Centre - Global Surface Water data product to deliver water occurrence as part of the end product.
This talk will describe how to use a STAC package such as the JRC one to access and transform cloud datasets before moving on to additional datasets such as limestone, gold, and forest cover. These other datasets required a different geospatial approach to ensure that the resources were appropriately represented in the video game. Nevertheless, each dataset had to be gathered from the internet via an Extract-Transform-Load (ETL) task.
The use of open-source geoprocessing tools, data science methods, data delivery formats, and community standards helped ensure real-world data is used in video games.
As a part of its AI4Earth initiative, Microsoft has created a Planetary Computer (PC) for hosting and processing open geospatial data. In addition to publishing a wide range of datasets, including Sentinel-2, MODIS, and more, the PC provides a powerful API and compute system based on open-source geospatial tools and using STAC metadata for data query, discovery, and access. In this talk, we present the latest in open geospatial data access, discovery, processing, and visualization using a variety of datasets from the Planetary Computer. We demonstrate use of the odc-stac package, which leverages the power of the OpenDataCube computing platform without the need for a database backend, and how odc-stac can load, mosaic, and transform geospatial assets into xarray datasets. We dive into other data interoperability tasks, including scaling processing with Dask and leveraging a variety of cloud-native formats. Along the way, we provide recommendations for data providers and curators on how to ensure their data can be used in a rich, interoperable way by the latest in geospatial processing tools.
All of us involved in the creation and publication of large amounts of geodata are familiar with the complexities of data management. In the case of geodata created with GRASS GIS, we asked ourselves how they could be made accessible to GeoServer without duplication. To overcome the previous limitation of GRASS GIS having its own data format, we connected the tribes and let Java and C/Python communicate with each other. So the challenge was to be able to efficiently read the GRASS GIS database directly with GeoServer. And why is that? Because this directly links the analytical capabilities of GRASS GIS with the exceptional geo service & publishing capabilities of GeoServer.
Our approach is to use the existing GDAL-GRASS bridge and add it as a new extension to GeoServer. To this we add two new GRASS GIS addons (r.geoserver.style + r.geoserver.publish) to easily publish the data from a GRASS GIS session as an OGC service. The new GeoServer GRASS raster datastore makes it possible to use GRASS raster data directly in a GeoServer instance. In this way it is now very easy to publish GRASS data as a web service via GeoServer without having to export the data from GRASS GIS to GeoTIFF or COG files. This works both for classic raster data and for time series, which can e.g. be inspected as WMS Time.
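For context, the classic publishing route that the new GRASS datastore makes unnecessary is a GeoServer REST call registering an exported GeoTIFF as a coverage store. The sketch below only builds that request without sending it; the server URL, workspace and credentials are placeholders:

```python
import base64
import urllib.request

def publish_geotiff_request(geoserver_url, workspace, store, tif_path,
                            user, password):
    """Build (but do not send) the GeoServer REST call that registers an
    existing GeoTIFF on disk as a coverage store -- the export-based
    workflow the GRASS raster datastore is meant to avoid."""
    url = (f"{geoserver_url}/rest/workspaces/{workspace}"
           f"/coveragestores/{store}/external.geotiff")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=f"file://{tif_path}".encode(),  # body points at the file to register
        method="PUT",
        headers={"Content-Type": "text/plain",
                 "Authorization": f"Basic {token}"},
    )

req = publish_geotiff_request(
    "http://localhost:8080/geoserver", "grass", "elevation",
    "/data/elevation.tif", "admin", "geoserver",
)
```

With the GRASS datastore, this export-and-register step disappears: the layer is read straight from the GRASS database.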
Within large construction companies GIS is widely used to store, analyse and visualise spatial data. GIS is just one of the components of an information infrastructure and has to be an integral part that can provide data and information to other departments and subcontractors as well. Van Oord as a marine contractor deals with spatial data on a daily basis. Existing disciplines like the design team and Survey team already have their way of dealing with spatial data to fulfil their task. What is it that the organisation needs additionally to work with GIS?
Van Oord has chosen to use FOSS4G software as a GIS backbone. This choice has its benefits but also poses challenges: most competitors in the business use proprietary software, clients do not necessarily follow standards, and the IT department has to support our requirements.
We are very proud of our GIS backbone and definitely see the benefits of using FOSS4G. Come and listen to the solution we have chosen, the changes we are making in the business, and the challenges we face.
Several ambitious initiatives, such as Destination Earth of the European Union, are harnessing large amounts of Earth Observation and other data to develop so-called digital twins of the Earth. The combination of large and diverse data assets, cloud and HPC computing, as well as sophisticated models and AI algorithms, now makes it possible to generate predictive EO scenarios. One key aspect of Digital Twins is the possibility to employ openness as one of the key principles for all of their functions. Data policy, data access, software development, processing and information management are important considerations for engaging and integrating users of all kinds. Due to their high potential for science and society, the European Space Agency, with its unique geospatial data holdings, is fully engaged in the development of Digital Twins of the Earth.
EarthDaily Analytics is building a powerful new constellation that will collect scientific-grade, 5 meter resolution imagery of the planet in a unique combination of 22 spectral bands using 3 different camera types, covering a broad spectral range from visible to thermal wavelengths. The mission will be launched in 2023 and will allow us to see the Earth’s global land mass each day in a wholly new way with more spectral bands, higher revisit, and at a higher resolution than ever before. It will allow us to monitor, detect changes, alert, and predict what is happening anywhere on the planet to help with some of the world’s most pressing challenges in agriculture, Environmental, Social and Governance (ESG), and disaster prevention and recovery.
This mission has been made possible by a near-perfect convergence of three major technology breakthroughs in the last 10 years: 1) lower cost satellite launch and manufacturing, 2) advancements in computer vision and machine learning to support automation of petabyte-scale processing, and 3) cloud compute power and storage necessary to drive the processing and calibration of trillions of pixels each day. Together these three emerging technologies are key to driving next generation geospatial insights, but bringing them together requires a software solution capable of handling the complexity of raw satellite data with automation driven by machine learning, and cloud-based Big Geo Data pipelines for cost-effective scale and latency.
At EarthDaily Analytics, our software solution has been made possible by leveraging many open source software packages to form the backbone for our satellite processing, calibration and quality services called the EarthPipeline. Together with open source packages and custom machine learning and computer vision approaches, we are working on delivering true scientific satellite image products that can be applied directly to algorithms without the need for very costly (and dreaded) end user data normalization and correction procedures. This talk will focus on how EarthDaily Analytics uses open source packages and machine learning to create normalized scientific quality data, and will also provide some example applications of how the data can be used.
MapStore is an open source product for creating, saving and sharing maps, dashboards, charts and geostories in a simple and intuitive way, directly online in your browser. You can use MapStore as a product to deploy simple geoportals with the standard functionalities it provides, but you can also use it as a framework to develop sophisticated web GIS portals by reusing and extending its core building blocks. MapStore is also integrated inside geOrchestra as well as the GeoNode open source project.
The presentation will focus on the use of MapStore as a web GIS framework to create a modular, reproducible, simple yet powerful dashboard to visualize crime data (and, in fact, many other types of location-based data), leveraging GeoServer and PostGIS advanced functionalities. We will describe the main steps for creating such an infrastructure with the MapStore components and framework, then cover how the existing dashboard template can be configured to work with your own data sources (eventually touching on the processing steps needed for the data itself).
We will eventually discuss further improvements and new features to evolve the current capabilities to capture new and emerging requirements.
The goal of this presentation is twofold: on one side, we address developers to show them advanced usage of MapStore for developing compelling applications; on the other, we address power users and system administrators willing to deploy the crime-mapping dashboard to make their own data available without writing code.
OSGeoLive is a self-contained bootable DVD, USB thumb drive or virtual machine based on Lubuntu that allows you to try a wide variety of open source geospatial software without installing anything. It is composed entirely of free software, allowing it to be freely distributed, duplicated and passed around. It provides pre-configured applications for a range of geospatial use cases, including storage, publishing, viewing, analysis and manipulation of data. It also contains sample datasets and documentation. OSGeoLive is an OSGeo project used in several workshops at FOSS4G events around the world.
The OSGeoLive project has consistently and sustainably been attracting contributions from ~ 50 projects for over a decade. Why has it been successful? What has attracted hundreds of diverse people to contribute to this project? How are technology changes affecting OSGeoLive, and by extension, the greater OSGeo ecosystem? Where is OSGeoLive heading and what are the challenges and opportunities for the future? How is the project steering committee operating? In this presentation we will cover the current roadmap, opportunities and challenges, and why people are using OSGeoLive.
A few years after the democratization of advanced semantic segmentation techniques in the 2D context (imagery), more and more initiatives are exploiting such algorithms with 3D datasets. The context appears favorable: public and private initiatives for massive 3D dataset collection are arising, hence a huge amount of 3D point cloud data will become available in the near future. As an example, the French national mapping agency (IGN) is currently targeting full LIDAR coverage of the country within the next five years.
Semantic segmentation of 3D point clouds is a really challenging task. While this data format can represent a scene with a high level of detail, the unordered and unstructured nature of the data makes the standard convolutional neural network approach ineffective. However, other deep learning algorithms exist to cope with these characteristics. Depending on the desired accuracy and the availability of labelled data, some "softer" machine learning approaches may also complete the toolbox.
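One way to see why unordered point clouds need special handling is the voxel-grid downsampling that typically precedes any 3D learning step; the toy sketch below stands in for what dedicated point cloud libraries do far more efficiently:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Group an unordered point cloud into cubic voxels and return one
    centroid per occupied voxel -- a common preprocessing step that
    imposes some structure before feeding points to a 3D model."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    centroids = []
    for pts in buckets.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

# Two nearby points fall in the same voxel; the far one keeps its own.
cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.1), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(cloud, voxel_size=1.0)
```

Unlike pixels on a raster grid, the original points have no neighbourhood ordering at all, which is exactly what makes standard convolutions inapplicable without such preprocessing or point-native architectures.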
Leveraging georeferenced data in such a context may be an interesting avenue for improving algorithm performance. In any case, these fairly innovative solutions can be applied to several geographical use cases, e.g. cartography with street views and Building Information Modelling (BIM).
This presentation will provide some insights into these 3D semantic segmentation topics:
- the 3D semantic segmentation state of the art will be briefly surveyed;
- the BIM use case will be detailed through the presentation of an ongoing R&D project carried out by Bimadata.io, Oslandia and the LIRIS lab (CNRS);
- the geo3dfeatures project (https://gitlab.com/Oslandia/geo3dfeatures) will be showcased to illustrate what the seminal component of a 3D point cloud segmentation software could look like.
This talk will provide you with a tour of the latest features in the library, including daring live demonstrations. We will present our recent and ongoing work on adding new features and making the library more fun to work with.
Whether you're a developer or decision maker, come to this talk to learn about the current status of OpenLayers. We’ll provide you with a glimpse into the future of the library and leave you motivated to get mapping with OpenLayers.
PostGIS represents the de facto standard for managing your spatial data, while QField is the state-of-the-art application for managing it in the field.
Since the QField 2.0 release in spring 2022, the seamless data synchronisation experience is complete: QFieldCloud closes the loop between your company's fieldworkers and the GIS analysts.
In this talk, we'll share features, best practices and pro-tips for managing your projects, remote teams, and permissions in a professional setting.
QField is an open-source app developed for efficient fieldwork in real-time in urban areas, with a 5G connection or with offline data. The mobile GIS app combines a minimal design with sophisticated technology to conveniently bring data from the field to the office. Seamless QGIS integration, GPS-centred, offline functionality, synchronisation capabilities, desktop configurable: “QField” is designed for fieldwork – simple but uncompromising. Link: https://qfield.org
QFieldCloud is a spatial cloud service integrated into “QField” that allows remote provisioning and synchronisation of geodata and projects. Although “QFieldCloud” is still in an advanced beta stage, it is already being used by many groups to improve their workflows significantly. Link: https://qfield.cloud
Never before have we had such a rich collection of satellite imagery available to both companies and the general public. Between missions such as Landsat 8 and the Sentinels, the explosion of cubesats, the free availability of worldwide data from the European Copernicus program, and imagery from drones, a veritable flood of data is made available for everyday usage.
Managing, locating and displaying such a large volume of satellite images can be challenging. Join this presentation to learn how GeoServer can help with that job, with real-world examples, including:
- Indexing and locating images using the OpenSearch for EO and STAC protocols.
- Managing large volumes of satellite images, in an efficient and cost effective way, using Cloud Optimized GeoTIFFs.
- Visualize mosaics of images, creating composites with the right set of views (filtering), in the desired stacking order (color on top, most recent on top, least cloudy on top: your choice).
- Perform both small and large extractions of imagery using the WCS and WPS protocols.
- Generate and view time based animations of the above mosaics, in a period of interest.
- Perform band algebra operations using Jiffle.
Attend this talk to get a good update on the latest GeoServer capabilities in the Earth Observation field.
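The band algebra that Jiffle expresses per pixel can be sketched in plain Python; the classic NDVI formula on tiny hypothetical bands serves as the example (GeoServer evaluates the equivalent Jiffle script over real rasters):

```python
def band_algebra(nir, red):
    """Per-pixel NDVI, the classic band-algebra example:
    (NIR - RED) / (NIR + RED), with a zero-denominator guard."""
    out = []
    for nir_row, red_row in zip(nir, red):
        row = []
        for n, r in zip(nir_row, red_row):
            row.append((n - r) / (n + r) if (n + r) != 0 else 0.0)
        out.append(row)
    return out

# One-row toy "rasters": a vegetated pixel and a bare one.
ndvi = band_algebra(nir=[[0.8, 0.6]], red=[[0.2, 0.6]])
```

In Jiffle the same operation is a one-line expression applied server-side, so only the derived product travels over the wire.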
GeoPandas is one of the core packages in the Python ecosystem to work with geospatial vector data. By combining the power of several open source geo tools (GEOS/Shapely, GDAL/fiona, PROJ/pyproj) and extending the pandas data analysis library to work with geographic objects, it is designed to make working with geospatial data in Python easier. GeoPandas enables you to easily do operations in Python that would otherwise require desktop applications like QGIS or a spatial database such as PostGIS.
This talk will give an overview of recent developments in the GeoPandas community, both in the project itself and in the broader ecosystem of packages on which GeoPandas depends or that extend it. We will highlight some changes and new features in recent GeoPandas versions, such as the new interactive explore() visualisation method, improvements in joining based on proximity, better IO options for PostGIS and Apache Parquet and Feather files, and others. But some of the important improvements coming to GeoPandas are happening in other packages. The Shapely 2.0 release is nearing completion, and will provide fast vectorized versions of all its geospatial functionalities. This will help to substantially improve the performance of GeoPandas. In the area of reading and writing traditional GIS files using GDAL, the pyogrio package is being developed to provide a speed-up on that front. Another new project is dask-geopandas, which is merging the geospatial capabilities of GeoPandas with the scalability of Dask. This way, we can achieve parallel and distributed geospatial operations.
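The proximity join mentioned above can be illustrated with a brute-force sketch in plain Python; GeoPandas performs this with a spatial index rather than the O(n·m) scan below, and the coordinates are invented:

```python
import math

def nearest_join(left, right):
    """For each feature in `left`, attach the nearest feature from `right`
    -- the idea behind a proximity join, done here by exhaustive search."""
    result = []
    for name, (x, y) in left.items():
        best = min(right.items(),
                   key=lambda kv: math.hypot(kv[1][0] - x, kv[1][1] - y))
        result.append((name, best[0]))
    return result

cities = {"Firenze": (11.25, 43.77), "Pisa": (10.40, 43.72)}
stations = {"SMN": (11.24, 43.79), "PSA": (10.39, 43.68)}
pairs = nearest_join(cities, stations)
```

In GeoPandas the equivalent one-liner joins two GeoDataFrames on nearest geometry, optionally reporting the distance alongside the matched attributes.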
Object detection, classification and semantic segmentation are ubiquitous and fundamental tasks in extracting, interpreting and understanding the information acquired by satellite imagery. Applications for locating and classifying man-made objects, such as buildings, roads, aeroplanes, and cars typically require Very High Resolution (VHR) imagery, with spatial resolution ranging approximately from 0.3 m to 5 m. However, such VHR imagery is generally proprietary and commercially available at a high cost. This prevents its uptake from the wider community, in particular when analysis at large scale is desired. HIECTOR (HIErarchical deteCTOR) tackles the problem of efficiently scaling object detection in satellite imagery to large areas by leveraging the sparsity of such objects over the considered area-of-interest (AOI). This talk presents a hierarchical method for detection of man-made objects, using multiple satellite image sources with different Ground Sample Distance (GSD). The detection is carried out in a hierarchical fashion, starting at the lowest resolution and proceeding to the highest. Detections at each stage of the pyramid are used to request imagery and apply the detection at the next higher resolution, therefore reducing the amount of data required and processed. We evaluate HIECTOR for the task of building detection for a middle-eastern country, estimating oriented bounding boxes around each object of interest.
For the detection of buildings, HIECTOR is demonstrated using the following data sources: Sentinel-2 imagery with 10 m GSD, Airbus SPOT imagery pan-sharpened to 1.5 m pixel size and Airbus Pleiades imagery pan-sharpened to 0.5 m pixel size. Sentinel-2 imagery is openly available, making its use very cost-efficient. The Single-Stage Rotation-Decoupled Detector (SSRDD) algorithm is used. Given that single buildings are not discernible at 10 m GSD, a bounding box does not describe a single building but rather a cluster of buildings. The estimated bounding boxes at 10 m are joined and the resulting polygon area is used to further request SPOT imagery at the pan-sharpened pixel size of 1.5 m. In the case of SPOT imagery, given the higher spatial resolution, one bounding box is estimated for each building. As a final step, predictions are improved in areas with low confidence by requesting Airbus Pleiades imagery at the pan-sharpened 0.5 m pixel size. Ablation studies show that HIECTOR achieves a mean Average Precision (mAP) score of 0.383 and a 20-fold reduction in costs compared to using only VHR imagery at the highest resolution, which achieves a mAP of 0.452.
Code will be released under the MIT license. We will also release the models trained on Sentinel, SPOT and Pleiades imagery. In addition, manually labelled building footprints over Dakar will be open-sourced to allow users to evaluate the generalisation of the models over different geographical areas. HIECTOR uses the Sentinel Hub service to request the commercial imagery sources over the polygons determined at each level of the pyramid, making it possible to request, access and process only specific sub-parts of the AOI.
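The coarse-to-fine strategy at the heart of HIECTOR can be sketched with a toy tile hierarchy: run detection everywhere at the coarsest resolution, then recurse only into tiles that produced detections. The quadtree, tile names, resolutions and detector below are invented; the real pipeline runs SSRDD on actual imagery:

```python
def hierarchical_detect(tiles, detect, resolutions):
    """Coarse-to-fine detection: request every root tile at the coarsest
    resolution, then refine only tiles with detections -- so finer (and
    costlier) imagery is fetched only where objects plausibly exist."""
    requested = []

    def refine(tile, level):
        requested.append((tile, resolutions[level]))
        if detect(tile, resolutions[level]) and level + 1 < len(resolutions):
            for child in tiles.get(tile, []):
                refine(child, level + 1)

    for root in tiles["roots"]:
        refine(root, 0)
    return requested

# Hypothetical quadtree: tile "A" contains buildings, "B" is empty desert.
tiles = {"roots": ["A", "B"], "A": ["A0", "A1"], "B": ["B0", "B1"]}
has_buildings = {"A", "A0", "A1"}
requested = hierarchical_detect(
    tiles, lambda t, r: t in has_buildings, resolutions=[10.0, 1.5, 0.5])
```

The cost saving reported in the talk comes exactly from the pruning visible here: the children of the empty tile are never requested at the finer resolutions.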
What's new in MapLibre version 2? We'll explore all the latest features of the library, including its new 3D capabilities. We'll also show you how to get started with MapLibre in whichever framework you choose to work in. This talk will be useful for beginners through to experienced web mappers interested in MapLibre.
For those getting started with web mapping or learning new frameworks, we will show you how to use MapLibre in your own application. We will introduce practical code examples in React, Vue.js, Angular or Svelte, creating a simple map component.
The presentation will provide a comprehensive introduction to GeoServer's own authentication and authorization subsystems.
The authentication part will cover the various supported authentication protocols (e.g. basic/digest authentication, CAS, OAuth2) and identity providers (such as local config files, database tables and LDAP servers).
It will explain how to combine various authentication mechanisms in a single comprehensive authentication tool, as well as providing examples of custom authentication plugins for GeoServer, integrating it in a home-grown security architecture.
We’ll then move on to authorization, describing the GeoServer pluggable authorization mechanism and comparing it with proxy-based solutions. We will explain the default service and data security system, reviewing its benefits and limitations.
Finally we’ll explore the advanced authorization provider, GeoFence. The different levels of integration with GeoServer will be presented, from the simple and seamless direct integration to the more sophisticated external setup. We will then examine GeoFence’s complex authorization rules using:
- The current user and its roles.
- The OGC services, workspaces, layers, layer groups.
- CQL read and write filters.
- Attribute selection.
- Cropping raster and vector data to areas of interest.
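A drastically simplified sketch of rule-based authorization in the spirit of GeoFence may help: each rule constrains role, workspace and layer, the first matching rule wins, and the default is to deny. The rule shape and matching order here are illustrative, not GeoFence's actual data model:

```python
def authorize(rules, role, workspace, layer):
    """First-match-wins authorization: each rule is (role, workspace,
    layer, grant), where '*' matches anything; default is DENY."""
    for r_role, r_ws, r_layer, grant in rules:
        if (r_role in ("*", role)
                and r_ws in ("*", workspace)
                and r_layer in ("*", layer)):
            return grant
    return "DENY"

rules = [
    ("EDITOR", "topp", "*", "ALLOW"),       # editors see everything in topp
    ("*", "topp", "secret_layer", "DENY"),  # nobody else sees this layer
    ("*", "*", "*", "DENY"),                # explicit catch-all
]
decision = authorize(rules, "EDITOR", "topp", "states")
```

Real GeoFence rules additionally carry the OGC service, CQL read/write filters, attribute selection and areas of interest listed above, but the evaluation principle is the same ordered, first-match one.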
Over the past few years we have learned some valuable lessons as a scientific community: we are stronger together; we must uplift one another in order to achieve scientific progress; and the best solutions are found when we work collaboratively. These lessons will not spread unless we address both the hesitancy around sharing data and results and the lack of applied experience with the tools that make open collaboration easier. Transform to Open Science (TOPS) is a new NASA Science Mission Directorate initiative designed to spark a cultural shift towards collaborative, inclusive science.
NASA defines open science as a collaborative culture enabled by technology that empowers the open sharing of data, information, and knowledge within the scientific community and the wider public to accelerate scientific research and understanding. A system based on open science aims to make the scientific process as transparent (or open) as possible by making all elements of a claimed discovery readily accessible, which enables results to be repeated and validated.
Out of this open science concept, an evolving paradigm called open-source science is emerging. Open-source science accelerates discovery by conducting science openly from project initiation through implementation. The result is the inclusion of a wider, more diverse community in the scientific process as close to the start of research activities as possible. This increased level of commitment to conducting the full research process openly and without restriction enhances transparency and reproducibility, which engenders trust in the scientific process. It also represents a cultural shift that encourages collaboration and participation among practitioners of diverse backgrounds, including scientific discipline, gender, ethnicity, and expertise.
Within NASA’s Open-source Science Initiative, TOPS provides the visibility, advocacy, and community resources to support and enable the shift to open-source science; NASA is designating 2023 as the Year of Open Science, a global community initiative to spark change and inspire open science engagement through events and activities that will shift the current paradigm. TOPS seeks to enable science that is accessible, inclusive and reproducible via four activities: providing high-level visibility to open science activities at NASA and beyond; developing an open science curriculum through a collaborative, community-driven model that will be fully and openly available virtually and at conferences; incentivizing open science activities via mechanisms such as partnerships, badging programs, and awards; and instigating a transformation of NASA’s culture to better recognize and reward open science activities.
TOPS advocates for open science as it builds trust, advances understanding, and ultimately leads to new knowledge production and important new discoveries. To enable open science, TOPS is developing resources and activities that will support and enable the scientific community to move forward. At FOSS4G we hope to share NASA's vision for open-source science, and invite this community to join us in the 2023 Year of Open Science as we challenge the old norms of scientific research and create a community which designs its scientific endeavors to be open from the start.
The National Land Survey of Finland has made a strategic decision to pursue increased use of open source solutions in its activities. We’ve been an active user of FOSS4G solutions for more than a decade. Further, we created an open source mapping framework Oskari, which has an active user and developer community.
In autumn 2020, the National Land Survey of Finland made a major decision to build our new topographic data production system on open source components, such as QGIS and PostgreSQL/PostGIS. The decision has raised a few eyebrows and a lot of interest among other national mapping agencies as well as other institutions using geospatial software. This talk will discuss:
- on what grounds we made such a decision
- how we are progressing with the implementation
- how we are looking at engaging in collaboration with open source communities
- and most importantly, why every public sector organization should consider the benefits that can be gained by using and investing in open solutions.
SMASH, the smart mobile app for surveyor’s happiness, is a slick app dedicated to digital field mapping.
The open source Flutter app for Android and iOS (and, upon request, Linux, macOS and Windows) is packed with features, for example: GeoPackage and PostGIS editing support, a Kalman filter on GPS logs, geo-fences, native GeoTIFF and shapefile visualization support, and SLD styling for vector datasets.
SMASH’s web counterpart is the Survey Server, a web application that allows groups of surveyors to centralize data collection. Users can synchronize the data from the app, but also download dedicated forms and projects, as well as basemaps and datasets. The server is built upon the same technology as the mobile app and visualizes the data with the same look and feel. Server-side versioning of notes has been introduced to enhance synchronization of data by teams. A Redmine plugin is being developed by community members to create a geo-ticketing system.
This presentation gives an insight into the state of the art of the SMASH ecosystem and its current roadmap.
In this talk we are going to present how the Danish Agency for Data Supply and Efficiency (SDFE) transitioned from a purely proprietary system to an open source system based on the SpatioTemporal Asset Catalog (STAC) API and Cloud Optimized GeoTIFFs (COGs) for serving its open data collection of 5 million oblique aerial images. The new system is built partly from existing open source components and partly from newly built open source components. It uses significantly fewer resources and lets third-party users access the data in a standardized way.
An important part of the process has been to develop and propose a community STAC extension for perspective imagery. This extends the STAC base metadata with parameters which are needed to do photogrammetric calculations and measurements using the images. The potential of this extension is that it enables the community to build generic perspective imagery clients in which the user can do advanced photogrammetric measurements.
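The kind of photogrammetric calculation such metadata enables can be sketched with a pinhole camera model. This toy version assumes an un-rotated, nadir-looking frame camera and invented parameter values; real calculations also use the orientation angles and calibration parameters carried by the metadata:

```python
def project_to_image(ground_xyz, camera_xyz, focal_mm, pixel_size_mm,
                     image_size_px):
    """Project a ground point (m) into a simplified nadir-looking frame
    camera image using the pinhole model: scale = focal / flying height."""
    gx, gy, gz = ground_xyz
    cx, cy, cz = camera_xyz
    height = cz - gz                      # flying height above the point (m)
    scale = focal_mm / (height * 1000.0)  # image mm per ground mm
    # ground offset (m) -> image plane (mm) -> pixels from image centre
    col = image_size_px[0] / 2 + (gx - cx) * 1000.0 * scale / pixel_size_mm
    row = image_size_px[1] / 2 - (gy - cy) * 1000.0 * scale / pixel_size_mm
    return col, row

# A point 100 m east of the camera, seen from 1000 m up with a 100 mm lens:
col, row = project_to_image((100.0, 0.0, 0.0), (0.0, 0.0, 1000.0),
                            focal_mm=100.0, pixel_size_mm=0.01,
                            image_size_px=(10000, 7500))
```

A generic perspective-imagery client would invert this relationship to measure ground coordinates from pixels picked in one or more oblique images.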
To ensure support for existing clients and to lower the barrier to entry, the system also supports clients without COG reading abilities. Using open source components we have built "CogTiler", a high-performance tile server which serves JPEG tiles directly from the COGs. Most of the time this is accomplished without decompressing the JPEG data.
SDFE required that all code written for this project be open source and easily available to anyone. Therefore, all the code is available on GitHub.
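Since the system exposes a standard STAC API, any client can query it with a plain Item Search request. A minimal sketch of building such a request body in Python (the collection id, bbox and date range are illustrative, not the real service values):

```python
import json

def stac_search_body(collection, bbox, datetime_range, limit=10):
    """Build a STAC API Item Search request body (POST /search).

    bbox is [min_lon, min_lat, max_lon, max_lat] in WGS84.
    """
    return {
        "collections": [collection],
        "bbox": bbox,
        "datetime": datetime_range,
        "limit": limit,
    }

# Request oblique images over a small area (hypothetical collection id and values)
body = stac_search_body(
    "oblique-2019",
    [12.55, 55.66, 12.60, 55.70],
    "2019-01-01T00:00:00Z/2019-12-31T23:59:59Z",
)
payload = json.dumps(body)
```

The same payload shape works against any STAC API implementation, which is precisely the point of serving the imagery through a standardized interface.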
This talk presents the current state of MovingPandas and related movement data analysis tools. MovingPandas has been growing steadily since its first publication in 2018 (with more than 24 contributors to date). Building on GeoPandas and GeoViews, MovingPandas provides movement data analysis tools that support efficient exploratory data analysis through interactive (visual) analysis. Early functionality and demos were focused on dealing with GPS tracking data (including vehicle and animal tracks). This talk presents recent developments towards supporting other track data, including examples from sports tracking (movement in real space, extracted from video footage) and eye or mouse tracking (movement in virtual space). Among many other details, this includes support for local coordinate systems, integration of context beyond geographic base maps, as well as trajectory generalization, segmentation, and distance measures. Finally, we revisit the origins of MovingPandas in the QGIS plugin Trajectools, and review the steps necessary to bring MovingPandas' trajectory analysis tools to QGIS.
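Much of the exploratory analysis MovingPandas supports starts from a simple primitive: deriving speed from consecutive GPS fixes. A stdlib-only sketch of that underlying computation (not MovingPandas' actual API, whose trajectories wrap GeoPandas GeoDataFrames):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371008.8  # mean Earth radius in metres
    dlon, dlat = radians(lon2 - lon1), radians(lat2 - lat1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def speeds(fixes):
    """fixes: list of (t_seconds, lon, lat); returns speed in m/s per segment."""
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(x0, y0, x1, y1) / (t1 - t0))
    return out

# A toy track near Florence: three fixes, one minute apart
track = [(0, 11.25, 43.77), (60, 11.26, 43.77), (120, 11.26, 43.78)]
print([round(s, 1) for s in speeds(track)])
```

For movement in virtual space (eye or mouse tracking), the same logic applies with planar distance in place of the haversine, which is one reason support for local coordinate systems matters.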
deck.gl is one of the most advanced open-source libraries for data visualization. In this session we will discuss how its WebGL-powered engine can be used to perform visual exploratory data analysis of large datasets. This library is quickly becoming one of the most used in the FOSS4G world due to its open governance model and compatibility with other mapping libraries like MapLibre GL JS.
We will present different examples ranging from simple layer visualizations to thematic and choropleth maps to advanced interactive 3D visualizations including animations.
Finally we will focus on specific use cases for large data visualization, from datasets with hundreds of thousands of features with data formats like GeoJSON to datasets with billions of features using advanced tiling schemes.
The presentation aims to provide attendees with enough information to master GeoServer styling documents and most GeoServer extensions in order to generate appealing, informative, readable maps that can be quickly rendered on screen. Examples will be provided from GeoSolutions training material, as well as from the OSM data directory we shared with the community.
Several topics will be covered, providing examples in CSS and SLD, including:
- Mastering common symbolization, filtering, multi-scale styling.
- Using GeoServer extensions to build common hatch patterns, line styling beyond the basics, cased lines, controlling symbols along a line and the way they repeat.
- Leveraging TTF symbol fonts and SVGs to generate good looking point thematic maps.
- Using the full power of GeoServer's label layout tools to build pleasant, informative maps on point, polygon and line layers, including adding road plates around labels and leveraging the labelling subsystem's conflict resolution engine to avoid overlaps in stand-alone point symbology.
- Dynamically transforming data during rendering to get more informative maps without the need to pre-process a large number of views.
- Generating styles with external tools.
A demonstration of how to deploy a mobile data capture platform in an enterprise setting. In this example an environmental survey has been developed for a large organisation with a team of surveyors. Consideration is given to authentication, version control and synchronisation to an enterprise spatial database for wider consumption.
QGIS is used to define base mapping, context layers and the data capture layers for the mobile application. In this project it is demonstrated how QGIS can be used to define survey forms to reduce input error. In this case a tree survey layer has been defined that adheres to the Individual Tree Standard from the UK’s Forest Research, the Open University and Treework Environmental Practice.
The QGIS project is then deployed to a dockerised Mergin service. Authenticated access is then granted to users of Lutra Consulting’s Input Android application. Field collected data can be created and synchronised with the Mergin service. Version control & merging allows multiple users to make asynchronous changes with the ability to rollback if required.
The Mergin service internal database is synchronised with an enterprise PostGIS database to allow other users to access and edit data via desktop. Media captured in the survey, such as images, is extracted through Mergin's API and made available together with the survey data through a web GIS interface.
Since entering into force in May 2007, Directive 2007/2/EC of the European Parliament and of the Council establishing an Infrastructure for Spatial Information in Europe (the INSPIRE directive) has played a very important role in building spatial information infrastructures, both pan-European and at national level. Technical specifications and guidelines describe the key components of data content and of the implementation of web services. However, each country decides individually how it will technically implement these requirements and what architecture and software it will use. The roadmap of directive implementation reached its last milestone on 21 October 2020: all spatial data sets had to be provided to the INSPIRE geoportal (https://inspire-geoportal.ec.europa.eu/). Now is the best time to share the experience of how Lithuania started its INSPIRE implementation path using commercial software but successfully ended up using only FOSS4G.
Implementation of the INSPIRE directive in Lithuania is centralized, and the state enterprise GIS-Centras is responsible for the technical work. The directive was implemented in two stages. The first stage started back in 2012 and was dedicated to data sets from Annex I and Annex II (only orthophoto imagery). Back then, INSPIRE implementation was a new and little-known technical challenge, so the decision to call a tender and use commercial software, for which there were not many viable alternatives at the time, seemed quite logical.
The second stage started in 2018 and ended in 2021, covering data sets from Annex III and part of Annex II. Instead of just calling a tender to implement the requirements with the commercial software we already had, we decided to implement everything ourselves and replace the commercial software with FOSS4G. At the end of the project we had not only prepared and published all the datasets and services, but also built a team of in-house specialists able to work with FOSS4G. Even the results from the first stage were quickly migrated to FOSS4G in order to unify the implementation architecture.
We have set three goals for the transition from commercial software to FOSS4G:
1. Create a system that we can administer and develop ourselves;
2. Create infrastructure that is cost-effective in the long run;
3. Automate the workflow as much as possible.
In this presentation we will share our experience of the transition from commercial software to FOSS4G, both from a technological and a company/team perspective. We will present the technological architecture we use to implement the INSPIRE requirements: it consists of PostGIS, GeoServer, QGIS, hale studio, GDAL and GeoNetwork. Finally, we will explain what we have learned as a company and as GIS specialists during the transition process.
MapStore is an open source product for creating, saving and sharing maps, dashboards, charts and geostories in a simple and intuitive way, directly online in your browser. MapStore is cross-browser and mobile-ready, and allows users to:
- Search and load geospatial content served using widely used protocols (WMS, WFS, WMTS, TMS, CSW) and formats (GML, Shapefile, GeoJSON, KML/KMZ etc.)
- Manage maps (create, modify, share, delete, search), charts, dashboard and stories directly online
- Manage users, groups and their permissions over the various resources MapStore can manage
- Edit data online via WFS-T with advanced filtering capabilities
- Deeply customize the look&feel to follow strict corporate guidelines
- Manage different application contexts through an advanced wizard to have customized WebGIS MapStore viewers for different use cases (custom plugins set, map and theme)
You can use MapStore as a product to deploy simple geoportals by using the standard functionalities it provides but you can also use MapStore as a framework to develop sophisticated WebGIS portals by reusing and extending its core building blocks.
MapStore is built on top of React and Redux, and its core does not explicitly depend on any mapping engine: it supports OpenLayers, Leaflet and Cesium, and additional mapping engines could also be supported (MapBox GL support is in the works) to avoid any tight dependency on a single engine.
The presentation will give the audience an extensive overview of the MapStore functionalities for the creation of mapping portals, covering both recent work and what is planned for future releases. Finally, a range of MapStore case studies will be presented to demonstrate what our clients (like the City of Genova, the City of Florence, Halliburton, Austrocontrol and more) and partners are achieving with it.
"Hello again, my name is actinia. Still new to OSGeo and an OSGeo Community Project since 2019, you might have heard about me already. In short, I am a REST API on top of GRASS GIS that allows location, mapset and geodata management and visualization, as well as execution of the many GRASS GIS modules and addons. Processing with other tools like GDAL and snappy is supported as well. I can be installed in a cloud environment, helping to prepare, analyse and provide large amounts of geoinformation. Besides these facts about me, there is also a lot to tell about what happened last year! Alongside vector upload, citable DOIs, QGIS and Python client implementations and more, I can now be a SpatioTemporal Asset Catalog myself with the actinia-stac-plugin, and am able to use data registered in a STAC for processing and to register the resulting data after processing. With the ongoing development of the openeo-grassgis-driver, you can use this new functionality either in my native language or via the openEO API. To learn about the details, come on over!"
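The process chains actinia executes are plain JSON documents listing GRASS GIS modules with their parameters. A minimal sketch of such a chain, assuming the process-chain format described in actinia's documentation (the module is a real GRASS GIS module; the raster names are illustrative):

```python
import json

# Sketch of an actinia process chain running r.slope.aspect on a DEM.
# The chain would be POSTed to an actinia processing endpoint; here we
# only build and serialize the document.
chain = {
    "version": "1",
    "list": [
        {
            "id": "slope_1",
            "module": "r.slope.aspect",
            "inputs": [{"param": "elevation", "value": "dem_10m"}],
            "outputs": [{"param": "slope", "value": "slope_10m"}],
        }
    ],
}
payload = json.dumps(chain)
```

Because the chain is just data, the same document can be produced by the QGIS or Python clients mentioned above, or translated from an openEO process graph by the openeo-grassgis-driver.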
Artificial Intelligence (AI) has made an impact in almost every field and has become an incredibly powerful technology. However, while some players in big tech and academia have access to a highly skilful workforce that can create sophisticated AI solutions, many companies, governments, public organisations, and other societal stakeholders are lacking sufficient AI expertise and know-how.
Many AI and Machine Learning (ML) frameworks aim to simplify and democratise AI development, though they typically focus on those with software engineering skills. Some of these solutions, those that provide no-code tools, get the closest to the ideal of enabling "any person without prior training". Collectively, these could represent a major breakthrough, as it has been proven time and time again that many businesses and organisations still struggle to implement AI to its full potential and scale.
Visual, often drag-and-drop, no-code AI tools can make AI less intimidating and more comprehensible to non-technical profiles and those who lack the time and resources to build such systems from the ground up. No-code AI frameworks are expected to require minimal technical knowledge to develop practical AI solutions at scale. This is an emerging field, as was previously the case with no-code web development, which started with Dreamweaver and MS FrontPage, the first WYSIWYG (what you see is what you get) solutions, both launched in 1997.
The European Commission is supporting the establishment of an AI-on-demand platform that will provide easy and simple access to AI tools that are made in Europe and are ‘trustworthy’. The platform will gather all the AI resources (algorithms and tools), and make them available to the potential users, businesses, and public administration, with the necessary services to facilitate their integration.
Within the context of the EU’s commitment to trustworthy AI, we are exploring the landscape of AI solutions for non-experts, including no-code, low-code, AutoML and similar approaches, and evaluating them in an experimental setting via prototyping. Our findings are expected to inform the development of relevant European digital initiatives. For instance, we will consider how these emerging AI solutions for non-experts could be integrated and used in the context of digital infrastructures such as EuroGEOSS or the European Data Spaces.
Further democratization of AI will happen when domain experts without prior AI expertise are enabled to tap into high-quality data to solve complex problems on their own, with technical details being abstracted away, such as algorithm selection, model training, software frameworks, hardware dependencies, and platform aspects. We expect that open-source AI technologies for non-experts represent a step towards such a future.
During the conference, we will present the initial results of our geoAI activity, including a high-level architecture and a stack expected to be composed of various open-source software tools and open standards for digital infrastructures, data engineering and AI development. Our intention is to gather feedback from the audience and establish possible future collaborations in this space.
Many of the definitions and criteria available, explored and investigated for OpenStreetMap (OSM) data quality are tied to reference datasets, which are usually authoritative or controlled datasets. Several studies have used these measures successfully for OSM, but this also makes the quality of OSM dependent on the development of those datasets. This dependency is exacerbated when such datasets are unavailable or ephemeral. The unavailability and temporal unreliability of controlled datasets are among the reasons OSM data quality is still a concern. OSM is an ever-evolving digital space that has a complex interconnectedness with physical space. To understand this interconnectedness we have to go back to the fundamentals and understand the genealogy of OSM through different qualitative and quantitative lenses. In this talk, I would like to present a research vision on how we can apply these quali-quantitative lenses to conceptualize the data quality of OSM and reduce the dependency on controlled or authoritative datasets. I will shed light on the layered conceptualization of OSM data quality with respect to the different case study areas and projects that I am currently exploring. Layered conceptualization is based on the hypothesis that OSM data should be intertwined with region- and context-specific considerations. Understanding this intertwining will result in a better understanding of OSM data ethics and how it relates to the data quality criteria already existing for OSM and other geo datasets. The aim is not to expunge the current data quality initiatives but to acknowledge and understand the exceptionality of OSM and its data quality. This talk will be a progress report on the OSM Utopia project.
Point cloud data are an important component of geospatial data workflows, but software and formats to manage it often have compromises that work against efficient storage and processing of data. While commonly seen characterizing topographic information in LiDAR applications, point cloud data are an important driver of change detection applications in SAR workflows and provide important raw data to bring the physical world to the augmented one through handset capture on devices like the iPhone 12+. COPC.io is an open specification by Hobu, Inc. for organizing point cloud data in LAZ that allows it to be streamable over HTTP, selectable for resolution or spatial window, and adaptable to existing point cloud workflows in a backward compatible way. We will discuss the design choices and evolution of COPC, demonstrate its use in PDAL and QGIS scenarios, and show how COPC can be used in the cloud for management of massive point cloud collections.
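The "streamable over HTTP" property comes from ordinary HTTP range requests: a client fetches only the byte ranges it needs from a COPC file instead of downloading it whole. A stdlib sketch with a hypothetical URL (the byte range shown is illustrative, roughly where a reader would look for the LAS header and COPC info):

```python
from urllib.request import Request

# COPC keeps its index and data in a LAZ file laid out so that a client
# can ask a plain web server for just the bytes it needs.
# The URL is hypothetical; the Range header syntax is standard HTTP.
url = "https://example.com/collections/autzen.copc.laz"
req = Request(url, headers={"Range": "bytes=0-548"})  # e.g. header region
print(req.get_header("Range"))
```

A COPC-aware reader like PDAL issues a handful of such requests to walk the octree index and then pulls only the nodes intersecting the requested resolution or spatial window.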
OpenStreetMap (OSM) Galaxy is a project that the HOT Tech Team launched in mid-April 2021 to optimise and improve availability and accessibility of OSM Data outputs for different user groups within the ecosystem. Through this project, we strive to address all the OSM data needs under one umbrella and ensure OSM data is available, accessible and ready to use for all kinds of users. We are trying to solve the high dependency on different data sources and uncontrolled platforms while focusing on fast queries and process optimisation by accessing data from HOT administered and controlled environment.
As a one-liner, the vision for OSM Galaxy is to provide a single platform to address all OSM Data Needs.
In the OSM context, a data need is a broad term covering a variety of topics:
- Raw data exports
- Analysing completeness of data
- Checking the data quality in your neighbourhood
- Understanding your contribution to a mapathon, to name a few
Through this project we strive to:
- Bring together all the data needs under one umbrella
- Ensure OSM data is available, accessible and ready to use for all kinds of users
Although in use for about 10 years in various Esri products and services, the LERC (Limited Error Raster Compression) raster compression algorithm has only just recently made its way into the free and open-source GIS scene by its inclusion in GDAL (3.3).
LERC can perform both lossless and lossy raster data compression. To achieve its impressive compression ratios and speed, LERC employs two basic tricks:
- The raster is processed and compressed in small two-dimensional blocks, taking advantage of spatial autocorrelation (neighboring values usually being more alike than others).
- The raster values are quantized (absolute values are replaced by differences between neighbors) and bit-stuffed to minimize the number of bits required to store them; this is especially useful for high bit-depth data.
For lossy compression LERC will follow a user-configurable maximum error threshold (the "limited error" in its name). Want to compress your DEM and allow up to one centimeter of error? No problemo!
LERC is patented by Esri but thanks to the choice of the permissive Apache License it is freely usable by anyone.
The talk will try to explain the algorithm at a basic level, understandable by non-experts, and show its performance with some examples.
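The effect of the second trick can be illustrated with a toy sketch (this is not the real LERC codec, just the delta-plus-bit-counting idea):

```python
# Toy sketch: store deltas between neighbours and count the bits
# needed for the block, as LERC's quantize-and-bitstuff step does.
def bits_needed(values):
    deltas = [b - a for a, b in zip(values, values[1:])]
    widest = max(abs(d) for d in deltas)
    bits = widest.bit_length() + 1  # +1 for the sign bit
    return deltas, bits

# A smooth DEM row: large absolute values, tiny differences.
row = [1523, 1524, 1526, 1525, 1527, 1527]
deltas, bits = bits_needed(row)
print(deltas, bits)  # small deltas fit in a few bits instead of 16+
```

On spatially autocorrelated data like elevation, the deltas are tiny even when the absolute values are large, so each value can be stored in a handful of bits.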
More and more geospatial operations are happening in the cloud, and often these processes are serverless. Whether your focus is on transportation logistics, agriculture, climate change analytics, or real estate, it is now possible to do geospatial computing in the cloud for both small and large projects.
With serverless options for data storage, compute, and even the desktop, we can now host our entire infrastructure completely serverless. Let's review the current state of geospatial serverless cloud infrastructure by exploring a stack consisting of OGC publishing, QGIS Desktop for cartography and client-side processing, the TiTiler tile server, a STAC data catalog, and a combination of data storage options.
For OGC publishing, we'll review MapServer and Koop using Lambda plus API Gateway. Similarly, TiTiler can also be built with Lambda and API Gateway. QGIS Desktop can now be run in the cloud through AWS WorkSpaces or AppStream. For data storage, we can use a combination of S3, Aurora PostGIS, and Redshift.
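The Lambda-behind-API-Gateway pattern used by several of these components boils down to a small handler function. A minimal, illustrative sketch that parses a /{z}/{x}/{y} tile request from an API Gateway proxy event (a real tiler such as TiTiler would render and return image bytes rather than JSON):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler behind API Gateway: read the tile
    coordinates from the request path and return a JSON placeholder."""
    p = event.get("pathParameters") or {}
    z, x, y = int(p["z"]), int(p["x"]), int(p["y"])
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"tile": [z, x, y]}),
    }

# Simulate the event API Gateway would deliver for GET /12/2185/1478
resp = handler({"pathParameters": {"z": "12", "x": "2185", "y": "1478"}}, None)
print(resp["statusCode"])
```

Because the handler only runs per request, there is no server to keep warm or patch, which is the core appeal of the serverless stack described above.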
GeoWebCache is a popular open source tile cache server written in Java. GeoWebCache can be used standalone, working off a remote WMS or local tile layers such as MBTiles. However, it's also integrated inside GeoServer, allowing simple and quick configuration, as well as transparent caching of WMS requests that happen to match a cached tile. This presentation will provide information on the latest developments in the project, including:
- Performance and scalability improvements
- OGC TileMatrixSet built-in definitions
- Storage of tiles in more blob stores (Swift)
- MBTiles support
- Integration of tile caching in OGC API - Tiles (with GeoServer integration)
- Serving vector tiles and integration with the Mapbox ecosystem (e.g., style editing with Maputnik)
- Continued codebase QA efforts (code clean up, dependency upgrades and the like)
Attend this talk for a cheerful update on what is happening with this project, whether you are an expert user, a developer, or simply curious about what it can do for you.
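Under the hood, a tile cache addresses tiles by (zoom, column, row) within a TileMatrixSet, and mapping an index back to its bounding box is plain arithmetic. A sketch for the common Web Mercator gridset (GeoWebCache supports many gridsets beyond this one):

```python
from math import pi

def tile_bbox_3857(z, x, y):
    """Bounding box (minx, miny, maxx, maxy) in EPSG:3857 metres for a
    tile in the standard Web Mercator quad-tree scheme."""
    half = pi * 6378137.0            # half the world width in metres
    size = 2 * half / (2 ** z)       # tile width/height at this zoom
    minx = -half + x * size
    maxy = half - y * size
    return (minx, maxy - size, minx + size, maxy)

print(tile_bbox_3857(0, 0, 0))  # the whole Web Mercator world
```

Matching incoming WMS bounding boxes against these computed extents is how the GeoServer integration can transparently serve a cached tile for a plain WMS request.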
The PostgreSQL extension for Hydrographic Applications (PgHydro) project represents the first intelligence of this kind implemented as an extension to a spatial database management system for use in water resources management, using the subcatchment network model and the logical elements present in the Pfafstetter basin coding system. PgHydro is implemented as an add-on to a spatial database management system, consisting of a series of tables, queries, functions and views that can be used individually to assist water resources decision making. These objects are the hydrography core of an intelligence system developed using free and open source software that can be used by anyone who deals with water resources management. To this end, this new conceptual model was implemented in the object-relational spatial database management system PostgreSQL/PostGIS, respecting the integrity constraints related to the geometry of the mapped objects. These user-defined constraints respect the logical objects based on the Pfafstetter basin coding system and the integrity constraints linked to the spatial relationships between objects, which follow the ISO SQL/MM specifications. The main advantage of using the pghydro extension is the possibility of processing large datasets and complex queries using a simpler hydrography model and the tools and languages already available in spatial database management systems, which work as a framework for the future development of new extensions related to water resources.
The pghydro functionality can be run using a GUI developed as a QGIS plugin called PgHydro Tools. After the physical implementation of the pgHydro schema in the spatial database management system, the construction of the Pfafstetter hydrography dataset is started using the hydrography objects that make up pgHydro Tools. The construction of this base is divided into seven stages: 1) creation of the spatial database and of the pghydro extension; 2) insertion of the drainage lines and drainage areas into the spatial database; 3) verification of the consistency of the drainage network geometries and topologies; 4) verification of the consistency of the drainage area geometries and topologies; 5) verification of the consistency of the topology between the drainage network and the drainage areas; 6) Pfafstetter basin coding and other information; and finally 7) export of the final Pfafstetter hydrography dataset. Optional steps are the systematization of river names and multi-user edit management.
The pghydro project is officially and widely used by the National Water and Sanitation Agency of Brazil as a reference for the Water Resources Management of Brazil.
The new Road Traffic Act of Finland (effective since 2020) requires the road management authorities to provide information about existing road signs, along with other similar infrastructure such as traffic lights and road paintings, to Traficom, the Finnish Transport Infrastructure Agency. The data is stored in Digiroad, the national database of open street and road data, also hosted by Traficom. FOSS4G software can play an important role in helping the different actors in the public sector and elsewhere fulfill this legal obligation, as well as in providing tools for infrastructure management and maintenance more generally.
In this talk, we present results from our recent project related to this effort. In the project, we developed a traffic sign inventory process using the QField mobile data collection app and studied its suitability for the task. A pre-existing conceptual data model for road signs from Traficom was used as the basis for the physical PostGIS database implementation. In addition, the data collection workflow was designed to make data collection as efficient as possible. This included configuring the data input forms and the traffic sign visualizations in the related QGIS project file, as well as other aspects of the usability of the app, such as further development of the geocoding functionality. The process was then tested in the field and improved upon in cooperation with employees from a few different-sized municipalities around Finland. The finished project report, along with the files needed to set up the data collection project, is freely available in the project's GitHub repository: https://github.com/finnishtransportagency/digiroad-QField
The UK national walking organisation, Ramblers, are working to improve the public rights of way network, and in particular improve access to it for people who are less advantaged, and may not have access to vehicles. The research project described in this talk undertook an analysis of the national paths network using publicly available data supplied by hundreds of individual local authorities across the UK. This was done by setting up a series of models in the QGIS Graphical Modeler to generate six key indicators aggregated to census area level, including distance to nearest continuous path from each small area unit of population, length of available path within a series of buffers, and access to paths of specific types – for example those passing through protected or designated areas. The talk will look at some of the challenges of the project, including scaling the modeller to work with millions of path features and tens of thousands of point locations, and building processes to combine path segments and then disaggregate them to an appropriate level.
The main goal of the project was to inform and support specific policy proposals, but it is also intended that the QGIS models should be passed on to Ramblers and used in the longer term, to monitor the impact of changes to the paths network and of population patterns over time, and also to support analysis of how additions to the network, for example by the inclusion of historic paths which are not yet official rights of way, could improve access. The intention is that these models could be run on smaller areas, and on hypothetical paths networks, to help build a case for extensions and rationalisation of the paths network at both national and local levels. Use of the Graphical Modeler rather than scripts or database processing will make it easier for Ramblers staff to run the models themselves in the future using inputs of their choice.
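One of the key indicators described above, distance to the nearest path, ultimately reduces to point-to-segment distance. A planar, stdlib-only sketch of that core computation (the real models run inside the QGIS Graphical Modeler on projected coordinates and millions of features):

```python
from math import hypot

def point_segment_dist(p, a, b):
    """Planar distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_path_dist(p, segments):
    """Distance from a population centroid to the nearest path segment."""
    return min(point_segment_dist(p, a, b) for a, b in segments)

paths = [((0, 0), (100, 0)), ((0, 50), (0, 150))]
print(nearest_path_dist((30, 10), paths))  # 10.0: just north of the first path
```

Aggregating such per-centroid distances to census area level, and repeating the run with hypothetical new path segments added, is what lets the models compare access before and after proposed network changes.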
From CAD to GIS: where are we?
CAD and GIS are often considered contradictory. However, we are regularly asked to help perform a transition from CAD to GIS.
We would like to share with you an overview of situations where this transition succeeded. For this, we will go through issues and solutions we provided.
We will talk about the differences and similarities between those two worlds.
We will review the existing file formats.
We will make a list of available open source tools that can facilitate this transition: GDAL, QGIS and its plugins.
We will show that almost anything needed can be realized within QGIS today.
Finally, we will introduce a verification tool that can validate data in order to facilitate their integration: QompliGIS.
We will not only talk about theory: for each case, we will also give a concrete example of the work we have done over the past several years.
When websites and web tools are properly designed and coded, people with disabilities can use them. However, currently many sites and tools are developed with accessibility barriers that make them difficult or impossible for some people to use [W3C - Introduction to Web Accessibility].
The European accessibility requirements present a new and welcome challenge for OSGeo applications. Accessibility (a11y) goes far beyond ease-of-use, with strict guidance on making web applications easily accessible to screen readers and assistive technologies.
This talk provides an overview of implementing the accessibility guidelines WCAG 2.1, WAI-ARIA, and EN 301 549 (Harmonised European Standard). GeoNetwork is used as a case study here, with “hands-on” illustrations of successful guideline implementations and their technical and organisational challenges.
Making OSGeo Software accessible is a rewarding task that requires broad community engagement and support. Attend this talk to learn more about meeting the a11y requirements and the impact they will have on your organisation and your software solution.
Geofolio is a project aiming to make environmental data understandable and accessible for everyone. Geodata is notoriously difficult to use for non-experts. It often gets hidden away in confusing data portals, and at least some GIS expertise is a common prerequisite to find and download data, extract your area of interest and to do some simple analysis. Geofolio lowers these adoption thresholds by letting users draw an area of interest, and then a factsheet with text summaries, maps, and charts is generated automatically from various open access geodatasets. The factsheets contain information on various environmental themes such as topography, land use, hydrology, climate, and agriculture. Users can download the source data, thereby providing an easy step up for further investigation and learning using open source GIS applications such as QGIS.
Geofolio makes extensive use of open source software for geospatial applications. The front-end and factsheets use the Leaflet mapping library, and the back-end and processing framework depends on GDAL and Shapely. Geodata is stored for analysis and visualization using PostGIS and Cloud-Optimized GeoTIFF files. The actual data processing takes place "on-the-fly" using GDAL-enabled AWS Lambda functions.
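When a user draws an area of interest, the back-end needs basic measures such as its area to drive the analysis. A stdlib-only shoelace-formula sketch for a ring in projected coordinates (Geofolio itself relies on Shapely and GDAL for this kind of geometry work):

```python
def polygon_area(coords):
    """Unsigned area of a simple polygon ring given as [(x, y), ...]
    in projected (metre) coordinates, via the shoelace formula."""
    s = 0.0
    for (x0, y0), (x1, y1) in zip(coords, coords[1:] + coords[:1]):
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# A 1 km x 1 km square area of interest
square = [(0, 0), (1000, 0), (1000, 1000), (0, 1000)]
print(polygon_area(square))  # 1000000.0 square metres, i.e. 1 km²
```

In practice the drawn polygon arrives as WGS84 coordinates and must be reprojected to an equal-area projection before measuring, which is exactly the kind of step the factsheet pipeline hides from non-expert users.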
STAC, the SpatioTemporal Asset Catalog, is now a popular option for providers wishing to create accessible catalogs of spatiotemporal asset data for end users. STAC aims to create a standardized and performant way for providers to expose their spatiotemporal asset data, and for users to ingest that data.
A 'spatiotemporal asset' is any file that represents information about the earth captured in a certain space and time.
Although development of STAC started in 2017, the ecosystem long lacked a way to use STAC data in desktop software. Recently, through a collaboration between Kartoza and Microsoft, a plugin for QGIS (a desktop GIS application) called “STAC API Browser” was developed to bridge the gap between QGIS users and STAC data.
Using “STAC API Browser”, users can now access, download, analyze and use the vast amounts of imagery data offered by various STAC providers, such as the Microsoft Planetary Computer.
The aim of this talk is to introduce the “STAC API Browser” plugin, give a guide on how to use it inside QGIS, showcase the cool things that the plugin supports, and explain how users and developers can collaborate on the plugin project. On top of that, we will also look at how to use the QGIS temporal controller with STAC data added through the plugin.
Assigning semantic labels to points within a point cloud aids in both visual interpretation of the data and as a preprocessing step for other forms of analysis, like building footprint extraction, hydrological modeling, and biomass estimation. Our talk will focus primarily on earth observation data, and airborne lidar data sources in particular, where labels are commonly aligned with the classes specified in the ASPRS LAS specification (e.g., ground, vegetation, and building), but we are also beginning to explore the extension of these same methods to data generated by commodity, consumer-grade devices like iPhones. For many years, hand-tuned models have been developed for this segmentation task, building on reasonable assumptions about the data: for example, ground points should include the lowest-elevation returns within a local window, and building segments should typically be planar. Within the past decade, we have seen a surge in AI/ML-powered models that are in many cases able to outperform the prior methods, learning novel features and adapting to the intrinsic variability of the data. We will provide an overview of the open source ecosystem powering this trend, from benchmark datasets like US3D and DALES to machine learning frameworks (i.e., PyTorch and TensorFlow) and key libraries such as PDAL, Open3D, and PyG.
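The classic hand-tuned heuristic mentioned above, taking the lowest return within a local window as a ground candidate, can be sketched in a few lines (a deliberately simplified stand-in for the real ground filters found in libraries like PDAL):

```python
def ground_candidates(points, cell=10.0):
    """points: [(x, y, z), ...]; return the lowest point per grid cell
    as ground candidates (toy version of a local-minimum ground filter)."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)
    return list(lowest.values())

# Three returns in one 10 m cell, one in the neighbouring cell
cloud = [(1, 1, 102.0), (2, 3, 98.5), (4, 2, 110.2),
         (15, 1, 99.0)]
print(ground_candidates(cloud))
```

Learned models replace the fixed window-and-threshold logic with features inferred from labeled data, which is why they adapt better to steep terrain, vegetation, and the noisier clouds produced by consumer devices.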
Over the last 35 years, IPLA has collected data on soils, both soil samples and cartographic data, to create a database (physical and digital) that represents, for all stakeholders, public and private, the knowledge of the composition and state of Piedmontese soils.
Over the last 25 years the SIP (Pedological Information System) has been modified several times, to meet new data-collection needs but above all to keep it up to date with new FOSS technologies.
In fact, in the last 5 years we have moved from disconnected proprietary systems (FoxPro DBs and Esri personal geodatabases) to a single integrated system built with FOSS tools: PostgreSQL/PostGIS and QGIS.
An information system has therefore been created that allows technicians, on the one hand, to collect field data by entering it into the database through a web interface and, on the other, to see the cartographic themes in QGIS in real time, based on the data just inserted and taking advantage of the geographical component of the database.
Survey points can also be entered either alphanumerically, using internal DB functions, or geographically in QGIS, with real-time updating of the points based on the coordinates entered with either tool.
New implementations for field surveying (QField) are also under development, for surveying points and visualizing them in the DB in real time.
The pedological observations are characterized by different levels of information, among which more than 5000 soil profiles constitute the basic and fundamental data, subdivided into field data (descriptions and photos) and analytical data (physical-chemical laboratory determinations). The elaboration of these data provides the main structure of the Pedological Information System, that is, the description of more than 1200 Soil Types, used to build up the characterization of 7000 Geographical Units.
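The real-time synchronisation between entered coordinates and point geometry described above is the kind of task typically handled by a PostGIS trigger. The sketch below assembles such a trigger as SQL from Python; the table and column names (survey_point, lon, lat) and the SRID are hypothetical, not taken from the SIP schema.

```python
# Sketch of a PostGIS trigger that keeps a point geometry in sync with
# alphanumeric coordinate columns, as described above. Table and column
# names are hypothetical illustrations.
sync_trigger_sql = """
CREATE OR REPLACE FUNCTION sync_geom() RETURNS trigger AS $$
BEGIN
    -- rebuild the geometry whenever the coordinate columns change
    NEW.geom := ST_SetSRID(ST_MakePoint(NEW.lon, NEW.lat), 4326);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER survey_point_sync
    BEFORE INSERT OR UPDATE ON survey_point
    FOR EACH ROW EXECUTE FUNCTION sync_geom();
"""
print(sync_trigger_sql)
```

With such a trigger in place, points entered alphanumerically through the web interface appear immediately in QGIS, and vice versa, since both tools read the same geometry column.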
Oskari (www.oskari.org) is used around the world to provide map applications with integrations to spatial and statistical data and service APIs. Oskari can be utilized as a Web GIS with a regular browser or via embedded maps controllable with a simple API. The embedded maps are created in an easy-to-use WYSIWYG tool, enabling users to add a map component to their websites/services without any programming skills. The additional API can be used to integrate with existing APIs and services for richer functionality.
This presentation will cover the basics of Oskari and new features introduced during 2021-2022. The focus will be on functionalities for end-users and administrators, such as: new styling tools, map layer analytics and diagnostics tool, metadata-supported automation of statistical data visualisation, and enhanced support for theming and mobile use. There will be a separate presentation about technical developments in Oskari focusing on developer experience. You can try the features of vanilla Oskari in our demo environment (demo.oskari.org), which runs the newest stable version without any customisation.
[ʒeokɔmɛ̃]: is it the latest buzzword in France or a broad movement towards more openness in the geospatial realm?
The French National Geographical Institute (IGNF) recently started communicating its vision for the development of geo-commons. This marks a strategic change in the way the institute approaches geo-data and geo-software production.
In this presentation, we first try to give a definition of what a "geocommon" is. Then we review the initiatives currently being deployed by various actors in France to transform this word into a reality.
We study the roots of these actions and their links to opendata, opensource and opengov movements.
We also try to provide a mindmap of the actors involved and how they interact: administrations, data-oriented communities and software-oriented communities.
Finally, we anticipate the impact on free and open source software for geomatics, and how it could affect technologies and communities in this area.
Keeping (OGC) geospatial web services up and running is best accommodated by continuous monitoring: not only must downtime be guarded against, but also whether the services are functioning correctly and are free of performance and other Quality of Service (QoS) issues.
GeoHealthCheck (GHC) is an Open Source Python application for monitoring uptime and availability of OGC Web Services.
In this talk we will explain GHC basics, how it works, how you can use and even extend GHC (plugins).
There is an abundance of standard (HTTP) monitoring tools that can guard the general status and uptime of web services.
But OGC web services often have their own error ("Exception") reporting that is not caught by generic HTTP uptime checkers. For example, an OGC Web Map Service (WMS) may provide an Exception as a valid XML response or as an error message written "in-image", or an error may render as a blank image.
A generic uptime checker may then assume the service is functioning, since such requests still return an HTTP status "200".
Other OGC services may have specific QoS issues that are not directly obvious. A successful and valid "OWS GetCapabilities" response may not guarantee that individual services are functioning correctly. For example, an OGC Web Feature Service (WFS) backed by a dynamic database may return zero Features in a GetFeature response because of issues in the underlying database. Even standard HTTP checkers supporting "keywords" may not detect all failure cases in OGC web services. Many OGC services have multiple "layers" or feature types: how to check them all?
What is needed is a form of semantic checking and reporting specific to OGC services!
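To make the idea concrete, the sketch below shows the spirit of such a semantic check: flag a WMS GetMap response as unhealthy when the HTTP status is 200 but the body is actually an XML exception report, or the "image" is suspiciously empty. The heuristics are deliberately simplified for illustration; GeoHealthCheck's actual Probes are far more thorough.

```python
def wms_getmap_healthy(status_code: int, content_type: str, body: bytes) -> bool:
    """Semantic health check for a WMS GetMap response (simplified sketch)."""
    if status_code != 200:
        return False
    # WMS frequently reports errors as valid XML with HTTP 200
    if "xml" in content_type.lower():
        return not (b"ServiceException" in body or b"ExceptionReport" in body)
    # an image that is suspiciously tiny is likely blank or broken
    if content_type.lower().startswith("image/"):
        return len(body) > 100
    # anything else is not the map we asked for
    return False

# A 200 response carrying an exception report is NOT healthy
assert not wms_getmap_healthy(
    200, "application/vnd.ogc.se_xml",
    b"<ServiceExceptionReport>Layer not defined</ServiceExceptionReport>")
```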
GeoHealthCheck (GHC) is an Open Source (MIT) web-based framework through which OGC-based web services can be monitored. GHC is written in
Python (with Flask) under the umbrella of the GeoPython GitHub Organization. It is currently an OSGeo Community Project.
GHC consists of two parts: (1) a web-UI app (using Flask) through which OGC service endpoint URLs and their checks can be managed and monitoring results can be visualised, and (2) a monitoring engine that executes scheduled "health-checks" on the OGC service endpoints. Both parts share a common database (via SQLAlchemy, usually SQLite or PostgreSQL).
The database also stores all historic results, allowing for various forms of reporting.
GHC is extensible: at the time of writing, a plugin system is being developed for "Probes", in order to support an expanding number of use cases for OGC-specific requests and checks. Work is in progress to provide a GHC API for various integrations.
- Website: http://geohealthcheck.org
- Sources: https://github.com/geopython/GeoHealthCheck
- Demo: http://geohealthcheck.osgeo.org
In 2012, the FAIMS project developed FAIMS Mobile, an open-source platform for minting Android applications for offline, human-mediated data collection on multiple tablets. Originally intended for archaeology, the platform saw cross-disciplinary adoption in fields such as oral tradition, linguistics, ecology and geochemistry. Mobile GIS (provided by Nutiteq, Estonia) was built into the core software from the start, providing the most essential geospatial functionality, from managing and rendering user-owned raster and vector data to manual data creation, editing, retrieval, and rendering. Automated data collection via onboard and Bluetooth sensors was also implemented to support unique identifier generation and printing, and other key tasks for field sample tracking. Navigation and spatial query facilities also existed. The simplified interface isolated end-users from administrators, with only the latter needing geospatial skills and domain knowledge, a division that facilitated data entry by unskilled volunteers. Many of the geospatial functions, however, required programming to customize. Given this barrier to entry, only clients with access to a programmer could create customisations for geospatially tailored field data entry; others had to run existing customisations published on GitHub. Despite this bottleneck, FAIMS 2.6 clients created a variety of spatial data collection workflows, from simple offline shape mapping to manual map data digitisation.
In 2022, the FAIMS project is rebuilding the FAIMS Mobile platform to equip it with a graphical user interface for customisation, to allow cross-platform deployment, and to implement ‘round trip’ data transfer to and from existing desktop tools. We hope to retain a robust geospatial data creation capability but aim to strip away functionality that saw little use over the previous 10 years, taking a 'just-enough-GIS' approach. As the architecture of FAIMS Mobile changes from SQLite with the SpatiaLite extension to CouchDB/PouchDB supporting GeoJSON, technical elaboration pointed to OpenLayers as the most appropriate and complete library for geospatial data collection and management. This paper will examine the challenges and considerations of ‘just enough GIS’ implemented with OpenLayers in a comprehensive mobile data capture application.
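With the move to CouchDB/PouchDB and GeoJSON, each captured field record can be stored as a document carrying a GeoJSON Feature. A minimal sketch of that shape (the attribute names are hypothetical, not FAIMS field names):

```python
import json

def record_to_feature(lon, lat, attributes):
    """Wrap a captured field record as a GeoJSON Feature, the shape a
    CouchDB/PouchDB document would carry. Attribute names are hypothetical."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": attributes,
    }

feature = record_to_feature(151.2, -33.9,
                            {"sample_id": "S-001", "recorder": "volunteer"})
print(json.dumps(feature))
```

Because the document is already valid GeoJSON, a library like OpenLayers can render it directly without a conversion layer, which is part of the appeal of the new architecture.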
High-resolution aerial photos combined with accurate map data represent a perfect dataset for training artificial intelligence models. The ‘KartAi’ project is a public-sector innovation project aimed at developing AI methods that detect buildings missing from the cadastre or the building map dataset, and thereafter involving the property owner/citizen in a digital dialogue to validate or crowdsource more detailed data. The foundation for this is high-quality datasets for training and validating the different AI models. High-resolution aerial photos are collected in large parts of Norway on a regular basis – often yearly – in a collaboration between national and municipal authorities. As a result, there exists a vast amount of extremely detailed image data combined with building map data and cadastre data. However, training the AI models has uncovered that minor errors and ‘skewed’ photos and/or vector data affect the results of roof-top/building segmentation. The KartAi project has therefore produced fine-tuned, accurate training datasets in several geographical areas, optimized for training on detecting and segmenting buildings.
In several large-scale experiments, a multitude of existing models, newer models and our own models have been trained and validated. Additionally, we have included LiDAR height data to enhance the precision of segmenting between, for example, roofs and terraces. Training the models on the existing data yields good results; however, when fine-tuning with the highly accurate data, the models show impressive results.
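Segmentation quality in experiments like these is commonly reported as intersection-over-union (IoU) between the predicted and reference building masks. A minimal sketch of that metric, over flat binary masks:

```python
def iou(pred, truth):
    """Intersection-over-union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly

# 3 pixels overlap out of 5 covered by either mask
print(iou([1, 1, 0, 1, 1, 0], [1, 1, 1, 0, 1, 0]))  # → 0.6
```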
Spatial AI projects like KartAi depend on large volumes of good training data. Our experience shows that even more accurate datasets improve the models further. The project has therefore made efforts that have resulted in the public release of the training datasets, as well as all of the result data for the different models and approaches that have been developed. This is an effort towards developing a more open living lab for spatial AI in Norway. Our hope is that sharing the knowledge and data created can give other AI projects easier access to high-resolution, high-accuracy data – to train models in the open living lab – and to apply the models internationally where data is scarcer.
Vehicle Routing Problems (VRP) are a class of combinatorial optimization problems whose purpose is to design optimal routes for a fleet of vehicles serving a set of customers while satisfying certain constraints. They are NP-hard and have remained challenging ever since they were formulated.
pg_scheduleserv offers an API-based solution for solving scheduling problems without the need to be an expert in algorithms or databases. It is a RESTful API written in Go, built on top of vrpRouting. It uses VROOM as the schedule optimization engine to increase operational efficiency in location-based processes.
pg_scheduleserv lets you create jobs, shipments, or vehicles in the database by posting JSON objects through HTTP requests. It enables you to obtain the optimized schedule either as a JSON object or in iCal format so that the generated results can be directly used in your calendar applications.
pg_scheduleserv is easy to use and set up. By providing an API-based solution, it can be integrated into existing applications. In this talk, we present pg_scheduleserv along with a simple demo application to demonstrate its efficiency and simplicity of usage.
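The JSON-in, JSON-out workflow described above might look like the following sketch. Note that the endpoint path, port, and field names below are illustrative assumptions for the demo, not the documented pg_scheduleserv API.

```python
import json

# Sketch of creating a job via an HTTP POST, as described above.
# The field names and endpoint are illustrative assumptions only.
job = {
    "location": {"latitude": 43.77, "longitude": 11.25},  # Firenze
    "duration": 900,                                      # service time, seconds
    "time_windows": [["2022-08-23T09:00:00Z", "2022-08-23T17:00:00Z"]],
}
request_body = json.dumps(job)
# e.g. requests.post("http://localhost:9100/jobs", data=request_body,
#                    headers={"Content-Type": "application/json"})
print(request_body)
```

The optimized schedule comes back the same way, as JSON, or as iCal for direct use in calendar applications.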
This talk will introduce a new Spatiotemporal Asset Catalog (STAC) extension for geospatial video assets. The extension is designed to standardize the metadata for all types of overhead geospatial video assets, including those collected by satellite, UAV, or airborne sensors, while accommodating situations in which the sensor moves throughout the video. The talk will include a brief overview of the STAC ecosystem (elements and extensions), and explain the Video extension’s schema. In addition, there will be a complete end-to-end demonstration including data preprocessing and STAC item creation (entirely using FOSS tools), and a FOSS method for displaying STAC Video extension-enabled items on an interactive map. The audience need not prepare in any way for this introductory presentation, although some background in STAC and geospatial video might be beneficial. Otherwise, this talk has a broad appeal for data professionals through to frontend developers who are keen to add some motion to their maps!
QFieldCloud's unique technology allows your team to focus on what's important, making sure you efficiently get the best field data possible.
Thanks to the tight integration with the leading GIS fieldwork app QField, your team will be able to start surveying and digitising data in no time.
Discover what QFieldCloud has to offer and how, thanks to seamless integration with your SDI, it can help make your teams' fieldwork sessions pleasant and efficient. And if you want to roll out your own customized version, nothing will stop you, QFieldCloud is open source!
QFieldCloud is a SaaS (software as a service) solution built by OPENGIS.ch that allows your team to seamlessly integrate field data to your SDI.
QFieldCloud is written in Python using the Django web framework, which encourages rapid development and clean, pragmatic design.
QField is the mobile data collection app for QGIS with more than 120K active monthly users and 500K downloads. Discover how the seamless synchronisation with QFieldCloud can help make your teams' fieldwork sessions pleasant and efficient.
Oskari is used world wide to provide web based map applications that are built on top of existing spatial data infrastructures. Oskari offers building blocks for creating and customizing your own geoportals and allows embedding maps to other sites that can be controlled with a simple API. In addition to showing data from spatial services, Oskari offers hooks for things like using your own search backend and fetching/presenting statistical data.
This presentation will go through the improvements to existing functionalities and new features introduced in Oskari during the last year. The focus will be on functionalities from a developer's perspective, like:
- Improvements for working with vector features
- API improvements for embedded maps
- Rewrite of service capabilities parsing and handling
- Planned developments for better theming support and mobile-device friendliness for the geoportal
You can try some of the functionalities Oskari offers out-of-the-box on our sample application: https://demo.oskari.org.
Taro Matsuzawa has been working for Georepublic for 10 years now. He has had experience in many open source communities since his student days, but had not yet joined the OSGeo community until he joined Georepublic. He is now a FOSS4G specialist and has presented at FOSS4G conferences in Japan.
For ten years he has been a committer on OpenMapTiles and several other open source projects, and has contributed to more than 20 projects for Japanese companies and municipalities. In this talk, he would like to share some of the insights he has gained from his work and the OSS community.
I will mainly talk about the basics and applications of Game Tile technology, a small-scale map solution using pgRouting, and Python3 support for the ckanext-spatial plugin.
A knowledge graph is a network that interconnects concepts, objects, or events according to domain specific relationships and terminology. Spatial knowledge graphs model locations and how they are spatially related to each other according to semantic properties and are useful for helping to automate the integration of geographic data across silos. Information systems used to make decisions often have different pictures of the geographies (i.e. people, places, and infrastructures) they respectively cover. Within a single area, different programs collect and store different geographic data in siloed systems at different times, leading to discrepancies and duplication of effort. This also results in decisions based on incomplete and out-of-date geographic data (e.g., spatial distribution of population and resources).
GeoPrism Registry is an open-source Common Geo-Registry (CGR) implementation that utilizes spatial knowledge graphs to provide a single source of truth for managing geographic data over time across multiple information systems and data sources. It is used to publish, access, and manage changes over time to hierarchies and geospatial data for geographic objects such as administrative divisions, infrastructure and other relevant physical features.
GeoPrism Registry uses geo-ontologies to define semantic properties and relationships that implement spatial knowledge graphs using a graph database. Changes to attribute values, relationships, and geographies are managed for different time periods. Historical views of data can be generated for any time period. The application has been released under the Lesser General Public License (LGPL) and was developed using only open-source components including OpenJDK, MapboxGL, PostgreSQL, OrientDB, Solr, GDAL, and GeoServer.
This talk will demonstrate how spatial knowledge graphs defined in GeoPrism Registry using FOSS4G tools can:
1. contextualize data from different sources in both time and space,
2. use geographic objects as the common link between data sources,
3. facilitate trend analysis, and
4. aggregate data according to different hierarchies
Support for the development of GeoPrism Registry was provided by the Bill and Melinda Gates Foundation via the Digital Solutions for Malaria Elimination (DSME) Project and the DSME Community. The DSME project uses geo-enabled information systems to improve the efficiency and effectiveness of malaria surveillance, program planning, and intervention.
In the last few years, open source GIS has been developing relatively rapidly, with an increase in the number of open-source GIS software packages available for performing various specialty functions. With this increase came the problem of managing the dependencies of different software packages when installing them on the same machine, or getting them to work together to accomplish a task. How painful was the setup process the last time you needed to make a map, share it and write about how you made it? This is where the docker-based Open Source GIS Stack (OSGS) comes in.
OSGS is a rich, integrated, and opinionated GIS Stack with a focus on configurability and ease of use built from open source components. The primary objective of the OSGS stack is to provide simple and effective end-to-end solutions based on open source geospatial technologies. Some of the key services offered by the OSGS platform are Nginx and Hugo for web publishing using static web pages, File Browser for file management, QGIS-Server for publishing web maps, PostgreSQL and PostGIS for database management, and Metabase for visualizing your data. We’ll take a look at how easy and painless making, sharing and writing about maps can be.
The Open Source GIS Stack by Kartoza is maintained in the Kartoza OSGS repository https://github.com/kartoza/osgs.
The library is currently still quite small in scope, but it has proven very efficient and has helped us lower the complexity of various codebases. Great success!
If you are one of the poor souls having to reimplement for the zillionth time a GetCapabilities parser, or if you have given up trying to guess a data schema from a WFS service, this library is for you! Come join us for this lightning talk to learn more!
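To illustrate why such a library is welcome: even the "simple" task of listing layers from a WMS GetCapabilities document takes namespace-aware XML walking that everyone keeps rewriting. A minimal stdlib sketch over an inlined, abbreviated sample document:

```python
import xml.etree.ElementTree as ET

# The GetCapabilities parsing everyone reimplements: list the named
# layers from a (here inlined and abbreviated) WMS 1.3.0 document.
CAPS = """<WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
  <Capability>
    <Layer>
      <Layer><Name>roads</Name><Title>Roads</Title></Layer>
      <Layer><Name>rivers</Name><Title>Rivers</Title></Layer>
    </Layer>
  </Capability>
</WMS_Capabilities>"""

NS = {"wms": "http://www.opengis.net/wms"}
root = ET.fromstring(CAPS)
layers = [n.text for n in root.findall(".//wms:Layer/wms:Name", NS)]
print(layers)  # → ['roads', 'rivers']
```

Multiply this by every OGC service type, version quirk, and vendor extension, and the appeal of a single shared parsing library becomes obvious.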
The NexSIS project aims to create a digital rescue platform providing all civil protection actors in France with a complete set of cloud operational services. Open Source GIS solutions were chosen for this national project with strong technical requirements.
This has a direct impact on the way data will be exploited. Each Fire Department will have to adapt and create data that will be directly used by the NexSIS software (areas that require special equipment or specialized teams for example).
Currently in France, each fire department has a budget depending on the size/number of people in the department. This budget is used to buy new fire extinguishers and computers, but also to hire new people, etc. Most departments currently use many different proprietary software packages for all GIS aspects. Historically, each department has made its own choices on what software to use.
This talk will show how we are helping fire brigades make the switch to Open Source without losing any functionality and without any extra workload.
Using the power of both QGIS and PostgreSQL, we will show how these tools can be used to share and publish common workflows (qgis expressions, model builders) that are often used in fire emergencies, build a common atlas (report module), edit spatial data (forms, and constraints) and so forth.
The use of remote sensing data operating in different observation domains is an undeniable asset for the realization of quality land cover products.
Indeed, satellites make it possible to cover large areas of interest regularly and with consistent quality.
Satellite data can be of different but often complementary natures, which makes it possible to broaden the possible fields of application (water management, snow cover, crop yield, urbanization, etc.).
In addition to these new data, there are recent technological developments (or older ones now usable thanks to the evolution of computing capacities, such as neural networks), as well as new means of service provision and dissemination, that allow these applications to be carried out over longer periods of time (long time series computed more rapidly) and over larger areas at different scales, sometimes simultaneously (stationary, local, national, continental, global).
iota2, developed by CESBIO and CNES with the support of CS GROUP, is a response to the growing demand for an Open Source tool allowing the production of land cover maps at a national scale, generic enough to be adapted to the different objectives of users.
In addition, this project ensures the production of an annual land use map of metropolitan France (https://doi.org/10.3390/rs9010095) with a satisfactory level of quality, thus proving its operational capacity.
iota2 integrates several families of supervised algorithms for the production of land use maps: pixel-based supervised algorithms (e.g., Random Forests or Support Vector Machines) that can be parameterised by users through a simple configuration file. iota2 also offers the option of using a deep learning model.
In addition to the pixel-based approaches, contextual approaches are also proposed, with Autocontext and OBIA (Object-Based Image Analysis). Autocontext, based on Random Forests, takes into account the context of a pixel in a window around its position. The OBIA approach exploits an input segmentation to classify objects directly.
In addition to the supervised classification approaches, iota2 is also able to produce indicator maps (biophysical variables) either by supervised regression or by using user-provided processors, diversifying the possibilities of using iota2.
One major point of interest in iota2 is its ability to deal with a huge amount of data: for instance, the OSO product (https://theia.cnes.fr/atdistrib/rocket/#/collections/OSO/2327b748-a82c-5933-afb0-087bbfeff4cd) is generated using a stack of all available Sentinel-2 data over France, without any landscape discontinuity due to the Sentinel-2 grid. Another is its capability to produce a land cover map wherever Sentinel-2 data and a ground truth are available (e.g. https://agritrop.cirad.fr/597991/1/Rapport_Intercomparaison_iota2Moringa.pdf).
- Derksen, D., Inglada, J., & Michel, J. (2020). Geometry aware evaluation of handcrafted superpixel-based features and convolutional neural networks for land cover mapping using satellite imagery. Remote Sensing, 12(3), 513. http://dx.doi.org/10.3390/rs12030513
TerriaJS is an open-source framework for web-based geospatial catalogue explorers.
It uses Cesium and Leaflet to visualise 2D and 3D geospatial data, and it supports over 50 different Web APIs, file formats and open data portals.
TerriaJS is used across the globe to create next-generation Digital Twin Platforms for open geospatial data discovery, visualisation and sharing - it is used to drive
- National Map (Australian Gov)
- Digital Earth Australia Map
- Digital Earth Africa Map
- Pacific Map
- NSW Spatial Digital Twin (Australian State Gov)
- and many others
In this talk, I will give:
- Background information about TerriaJS and how it is used by the community
- Current state of the project for users, developers and wider community
- New features
- Future plans!
First there were DJs, then there were VJs. Clear the booth and make room for the MJ (map jockey).
Carto-OSC leverages a handful of open source libraries, data, and protocols in order to activate performance spaces – theaters, galleries, nightclubs – with projected, real time cartography. The Open Sound Control (OSC) protocol, itself a mature successor to MIDI (Musical Instrument Digital Interface), is coupled with a touch surface to control layer coloring and patterning, zoom/pitch/bearing effects, label typography, transits along road networks and great circles, and a variety of transitions purloined from the history of cinema and visual music.
This presentation describes the architecture of Carto-OSC, the impulses behind its creation, development, and future, and includes documentation of its use in performances of music, dance, and spoken word.
Mapping is a private operation. It is done with several different local tools and usually by one person at a time. Yet we are used to having realtime multiuser editing of spreadsheets, documents and presentations. MUDraw tries to define a protocol that enables multiuser editing of features on a map, and makes it available as a library for both Leaflet and MapLibre (enabling cross-library data editing), in order to make map editing a group activity. It relies on the client(s) and a server part written in Python/FastAPI that can be used independently from the infrastructure in which the communication takes place, and that can set up a persistence layer connected directly to GitHub or other storage facilities. The idea is to integrate this tool into uMap to make it more fun to use, but also, in a longer perspective, to make it part of the Public History Toolkit that OpenHistoryMap is developing.
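A protocol like the one described might represent each edit as a small JSON event that the FastAPI server broadcasts to connected clients. The sketch below is an illustrative guess at such a message shape, not the actual MUDraw protocol.

```python
import json
import uuid

# Sketch of a multiuser map-editing event, the kind of message a
# protocol like MUDraw might broadcast over a websocket. The message
# shape here is an illustrative guess, not the actual MUDraw protocol.
def feature_edit_event(user, action, feature):
    return {
        "event_id": str(uuid.uuid4()),  # lets clients de-duplicate replays
        "user": user,
        "action": action,               # "create" | "update" | "delete"
        "feature": feature,             # a GeoJSON Feature
    }

evt = feature_edit_event("alice", "create", {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [11.25, 43.77]},
    "properties": {"name": "Duomo"},
})
print(json.dumps(evt))
```

Because the payload is plain GeoJSON plus a thin envelope, the same event can be applied by a Leaflet client and a MapLibre client alike, which is what makes cross-library editing plausible.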
MAps for Planning, Monitoring and Evaluation (MAPME) is an initiative founded by Geo-geeks and FOSS enthusiasts from KfW Development Bank (KfW), French Development Agency (AFD) and MapTailor Geospatial Consultants.
Aid agencies such as KfW and AFD financially assist developing countries in fighting hunger, poverty, disease, illiteracy and environmental degradation around the world. Together with our partner countries we are key decision makers in the allocation of the so-called Official Development Assistance (ODA). KfW, for example, allocated 12.4 bn. EUR to assist developing countries achieving the Sustainable Development Goals (SDGs) in 2020.
Geodata and geospatial technologies help us take informed decisions to allocate funds responsibly and maximize public goods and benefits. Nevertheless, the uptake of open data and geospatial technologies within our institutions and decision-making processes is still relatively low. We think that one of the main reasons for this is a lack of openness in the way we deal with data-analytic questions in our institutions.
In response we founded MAPME, an open community and open-source initiative to upscale and democratize the usage of geodata and geo-spatial technologies within our own institutions as well as our partners. With this initiative we promote cultural change in our institutions by prototyping small FOSS and open-data pilot projects that illustrate the power and usefulness of these technologies to improve development aid projects. One of our outputs is the mapme.biodiversity package, which offers R-users the possibility to automatically download and process several important open-data sources for conservation science using a parallelization approach to deal with large AOIs or global conservation portfolios (https://github.com/mapme-initiative/mapme.biodiversity).
We will offer a talk where we share our approach to FOSS application and development, what we see as barriers in our institutional and IT contexts, and first success stories that leveraged the power of geospatial data for learning about our projects and making more informed decisions.
Migration of the current tools that maintain the Cadastre of Mexico City to Open Source and free licensing technologies.
The objective of the project was the development of a plugin in QGIS that optimizes the functionalities of the licensed software and allows the maintenance and management of cadastral data.
Stages of the development process:
a. Requirements analysis: based on the information provided by the client, the feasibility of integrating the database engine and the workflow-services API with the transactional systems of the municipality is evaluated.
b. Definition and implementation of PostGIS and QGIS: together with the client, for the integration of the tools and subsequent development.
c. Design of Python and QT tools: carry out the design and development of plugin components in QGIS.
d. Design of Postgres security interfaces: coordination with the permissions, automation routines with the plugin components.
e. Development of functional tools from Bentley to QGIS: development of the functionalities detailed in the analysis and design stages of the project, in order to replace the existing functionalities, accompanied by redesign.
f. Testing and implementation in a Quality Assurance (QA) environment.
g. Training and technology transfer.
h. Documentation of all stages of the project, in order to carry out the complete transfer to our client.
For over a decade, there has been an open source map publication platform in the Netherlands, known as Tailormap (formerly Flamingo). That project is maintained largely by one company, B3Partners. Currently, Tailormap is being overhauled. Nah, not overhauled, I’d say completely rebuilt. And this rebuild comes with a new approach to how we distribute this software project and how we make it accessible: to other developers to contribute, to organizations to roll it out independently, and to other companies to use in their customer solutions.
What we aim for in the long run is an online geospatial platform that is easy to use for all. For now, we publish an easy to install online GIS and mapviewing application, with features like mobile editing capabilities so that it can be used for maintenance purposes as well as for data dissemination. And here, at the FOSS4G in Firenze, we will celebrate this with our international launch presentation (and party in a not yet disclosed bar).
We’d like to invite you to join us in this journey. We’re extremely happy to have been able to start this development without outside funding, and we’re looking for partners to grow Tailormap together. We’re currently looking into the OSGeo Community project program, to see if we can join. We’ll be at the B2B meeting as well, but this presentation is where we’ll show the goodies.
The Carto 2 project is part of the Geo-IDE program of the Ministries of Ecology and Agriculture in France. The objective of Geo-IDE is to provide stakeholders in the ministries with common data and tools in the field of geographic information.
Since 2019, the Ministry of Ecology and Camptocamp have been collaborating to create a new module, Carto2, that allows data administrators to compose, publish and consult maps online, while other users can search, consult or download the published maps.
Carto2 also had to offer a more modern and ergonomic platform, as well as possibilities to evolve the module and its functionalities in order to better meet the needs and expectations of the decentralized services.
The project is developed using free solutions such as Geoserver, QGIS Server and Openlayers. This is an example of the development of a large departmental spatial data infrastructure used by about 150 services in France.
We will present the software architecture and the strategies put in place to develop Carto 2 and we will describe the technical challenges we have faced during the development.
In this talk we will look at how PostGIS and Uber's H3 index can be used for aggregating large amounts of data, in our case property insurance risk, in real-time. We will explore a number of different techniques, from the H3 PostGIS extension generating GeoJSON, to generating MVTs from the database, to pre-caching the H3 index and painting a vector tile layer client side. For our client-side layer we will use a React JS interface and MapLibre, and will also look at Deck.GL for more advanced use cases. We will discuss how the stack can be deployed using a serverless architecture running on AWS Lambda and Aurora Serverless Postgres.
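As a toy sketch of the cell-based aggregation idea, the following uses a square grid key as a stand-in for real (hexagonal) H3 cell ids; in a real deployment the grouping key would come from the H3 extension in SQL (e.g. the h3-pg `h3_lat_lng_to_cell` function, named here only as an assumption about the setup the talk describes):

```python
from collections import defaultdict

def cell_key(lat: float, lon: float, res: int) -> tuple:
    """Stand-in for an H3 cell id: snap the point to a square grid.
    Real H3 cells are hexagonal; only the grouping idea is the same."""
    size = 1.0 / (2 ** res)  # cell edge length in degrees
    return (int(lat // size), int(lon // size), res)

def aggregate_risk(points, res=3):
    """Sum a risk value per cell -- the same shape of result the
    database returns when grouping rows by H3 index."""
    totals = defaultdict(float)
    for lat, lon, risk in points:
        totals[cell_key(lat, lon, res)] += risk
    return dict(totals)
```

Nearby points collapse into the same cell, so the client only ever renders one aggregated value per cell instead of every raw point.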
This talk requires no prior knowledge; however, some experience with PostGIS and vector tiles will be useful. You will learn techniques that can be applied to any problem domain involving data volumes at which processing individual points would not be practical.
The applications of safe drone mobility are vast and diverse, such as pipeline inspection and large-scale crop monitoring. However, this innovation comes with safety and security risks. Drones have caused 91 disruptions to German air traffic in 2021 alone - in part, because not all no-fly zones are formally defined.
To solve this, the mFUND fAIRport project was started in 2020. In this project, wetransform, Deutsche Flugsicherung and Fraunhofer work together to create a comprehensive high-precision geodatabase for no-fly zones. This is achieved by merging existing datasets and with new information created through re-use of INSPIRE data, through crowdsourcing and through AI-based object detection from orthoimages.
The re-use of INSPIRE protected sites allows us to create cross-border data sets with no-fly zones. For the necessary transformation and the merging of the individual data sets, we used the open source ETL tool hale»studio. We encode the merged data sets in a wide range of formats, such as ED-269 JSON, GML and GeoPackage, and deliver them via various standardised APIs.
In this talk, we will introduce the use case, and the created data sets. We will also explain hale»studio’s declarative mapping and model transformation workflow, to show how it can improve the quality and usefulness of data to help solve problems at scale.
DistrictBuilder (districtbuilder.org) is a web-based, open-source tool for collaborative political boundary redistricting or redistribution.
In order to support creating legally valid districts, DistrictBuilder allows advocates and legislators to define districts using geometries as small as a single census block, which are very numerous – a medium-sized state will have hundreds of thousands of them. Users can create districts from any combination of geometries, and we need to be able to generate statistics and dissolve them into district geometries in near real-time.
By reformatting our data as TopoJSON, a file format and Node.js library for working with topological data, we are able to dissolve over half a million census blocks into legislative districts in only a few seconds!
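The core idea behind a topology-based dissolve can be sketched in a few lines: edges shared by two rings are interior and drop out, leaving only the outline. This is a simplification (TopoJSON actually stores shared arcs once and merges them), offered only to illustrate why the approach is fast:

```python
from collections import Counter

def dissolve(rings):
    """Dissolve adjacent rings into their outline by dropping shared
    edges. Each ring is a list of (x, y) vertex tuples; an edge that
    appears in exactly two rings is interior and is removed."""
    edges = Counter()
    for ring in rings:
        for a, b in zip(ring, ring[1:] + ring[:1]):
            edges[frozenset((a, b))] += 1
    # Boundary edges are those used by exactly one ring.
    return {e for e, n in edges.items() if n == 1}
```

Counting edges is linear in the number of vertices, which is why dissolving hundreds of thousands of blocks stays fast.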
I’ll discuss how we use TopoJSON in DistrictBuilder; the issues we encountered when using it at scale in production and how we were able to overcome them; and the other tools we considered instead of TopoJSON and how they compared in terms of performance.
I’ll also go over our strategy for displaying and calculating metrics in real-time in the browser, using typed arrays and web-workers in combination with Mapbox vector tiles to do real-time aggregation of statistics from hundreds of thousands of features.
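The per-district aggregation described above can be illustrated with Python's `array` module standing in for JavaScript typed arrays (the actual DistrictBuilder code runs in the browser; this is only a sketch of the data layout):

```python
from array import array

def district_totals(assignment, population, n_districts):
    """Aggregate a per-block statistic into per-district totals.
    assignment[i] is the district of census block i (0-based).
    Flat typed arrays keep the scan cache-friendly, mirroring the
    typed-array approach used client side."""
    totals = array('d', [0.0] * n_districts)
    for district, pop in zip(assignment, population):
        totals[district] += pop
    return list(totals)
```

One linear pass over the assignment array recomputes every district's statistics, which is what makes real-time updates feasible for hundreds of thousands of blocks.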
This talk by Thomas Gageik, Director Digital Business Solutions (DIGIT.B) at the European Commission will get you up to speed on the most recent actions to encourage free and open source in and around the Commission. This is an introduction to the session organised by the EC, and we will show you what has changed since the adoption of the reinvigorated open source strategy in October 2020.
Our topics include: what have we done to make it easier for the Commission to share software as open source, and how can we contribute to existing free/open source software tools. We will show you how the Commission is helping to strengthen the security of open source software, and how we are networking with other organisations to help open source to progress in public services across the EU.
The talk will also introduce you to the open source programme office. The EC OSPO, created in 2020, is here to help Commission projects with free/open source.
The new GRASS GIS version 8.2 is a special edition including all new features developed during Google Summer of Code 2021. One of the enhancements is the parallelization of several raster modules by means of OpenMP, an implementation of multithreading to speed up massive data processing. Another exciting new feature is the much improved Jupyter notebook support: a new Python package (grass.jupyter) is available which allows users to interactively visualise maps and time series, thanks to the integration with folium.
The graphical user interface in version 8.0 introduced faster and more streamlined startup without a need for a welcome screen. For even more convenience, version 8.2 adds an experimental single window layout with familiar look-and-feel.
Related to raster data, a new metadata class called semantic labels can now be added to raster maps. Examples of semantic labels are aerial or satellite spectral bands, dataset names in remote sensing products (ndvi, evi, lst, etc), or any custom names.
At community level, we have developed a student grant program and, thanks to the move to GitHub, we have welcomed numerous new contributors.
QGIS is one of the most widely used open-source GIS applications. It can display, edit, analyse and process different kinds of data, such as vector, raster, mesh and point clouds.
QGIS has some native functionality for working with OSM data, which it can handle either as a raster basemap or as vector layers. Depending on the amount of data to work with, the need to refresh the data (from the main OSM database) and the extent of the coverage, different plugins or technologies are available.
This presentation will give an overview of how OpenStreetMap data can be used within QGIS in different situations (geocoding, TMS/WMS, Overpass API, Docker, PostgreSQL…).
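To make the Overpass API route concrete, here is a minimal sketch of the kind of Overpass QL request that plugins like QuickOSM generate under the hood (the exact query a plugin emits will differ; tag, bbox and output settings here are illustrative):

```python
def overpass_query(key: str, value: str, bbox: tuple) -> str:
    """Build a minimal Overpass QL request for nodes, ways and
    relations carrying a given OSM tag inside a bounding box given
    as (south, west, north, east) in WGS84 degrees."""
    s, w, n, e = bbox
    return (
        "[out:json][timeout:25];\n"
        f"(node[\"{key}\"=\"{value}\"]({s},{w},{n},{e});\n"
        f" way[\"{key}\"=\"{value}\"]({s},{w},{n},{e});\n"
        f" relation[\"{key}\"=\"{value}\"]({s},{w},{n},{e}););\n"
        "out center;"
    )
```

The resulting string can be POSTed to an Overpass API endpoint, and the JSON response loaded into QGIS as point or line layers.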
The presentation will also show how you can contribute default "map presets" to the QuickOSM core on GitHub. This feature of QuickOSM gives users a set of styled vector layers in QGIS, ready to use with a predefined symbology.
During the last 2 years we've been working on TiTiler (https://developmentseed.org/titiler/), a dynamic raster tile server. Built on top of GDAL/Rasterio, TiTiler is written in Python and uses the FastAPI (https://fastapi.tiangolo.com) framework. TiTiler is an application that lets you create raster tiles dynamically from raster datasets (e.g. Cloud Optimized GeoTIFF) but also from SpatioTemporal Asset Catalogs (STAC) or mosaics (using MosaicJSON). It is also a set of Python modules which can be used independently to create custom services.
During this talk we'll explain the concept of dynamic tiling, what TiTiler is (and the libraries powering it), how it works and, more importantly, how users can customize and build their own dynamic tile server.
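At the heart of dynamic tiling is standard slippy-map math: given an XYZ tile address, compute its geographic bounds, then read and resample just that window from the source raster. The bounds computation can be sketched as follows (this is the standard formula, not TiTiler's actual code):

```python
import math

def tile_bounds(z: int, x: int, y: int) -> tuple:
    """WGS84 bounds (west, south, east, north) of an XYZ map tile.
    A dynamic tile server computes bounds like these for each request,
    then reads only the matching window from the source dataset."""
    n = 2 ** z

    def lon(xi):
        return xi / n * 360.0 - 180.0

    def lat(yi):
        return math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * yi / n))))

    return (lon(x), lat(y + 1), lon(x + 1), lat(y))
```

Because only the requested window is ever read, no tile pyramid needs to be pre-rendered, which is what makes formats like Cloud Optimized GeoTIFF such a good fit.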
We will also present project like TiTiler-PgSTAC (https://github.com/stac-utils/titiler-pgstac) which enables the creation of Mosaic tiles dynamically from a Spatial Temporal Asset Catalog (STAC) database, or eoAPI (https://github.com/developmentseed/eoAPI) which is a full Earth Observation data service combining STAC database, STAC-FastAPI and a TiTiler in one easily deployable project.
pycsw is an OGC CSW server implementation written in Python and an official OSGeo Project. pycsw implements clause 10 of the OpenGIS Catalogue Service Implementation Specification (the HTTP protocol binding, Catalogue Services for the Web / CSW), versions 2.0.2 and 3.0.0. pycsw allows for the publishing and discovery of geospatial metadata, providing a standards-based metadata and catalogue component of spatial data infrastructures. The project is certified OGC Compliant, and is an OGC Reference Implementation.
The project currently powers numerous high-profile catalogues such as IOOS, NGDS, NOAA, US Department of State, US Department of the Interior, geodata.gov.gr, Met Norway and WMO WOUDC. This session starts with a status report of the project, followed by an open question-and-answer session to give users a chance to interact with members of the pycsw project team. The session will cover how the project PSC operates, the current project roadmap, and recent enhancements focused on ESA's EOEPCA, the Open Science Data Catalogue and OGC API - Records.
It seems to be conventional wisdom that a search engine for geodata is best implemented with a text search engine like OpenSearch or Solr. Most available open-source geocoders follow that wisdom. Nominatim is the odd one out. OpenStreetMap's main geocoder was originally developed 12 years ago as a proof of concept that a geocoder can be efficiently implemented on top of a PostgreSQL/PostGIS database. Since then it has grown into a mature project. And so have the PostgreSQL database and the OpenStreetMap project.
In this talk, I will share some of the experiences of working with PostGIS on a growing OpenStreetMap dataset. The talk starts with a quick overview of what the Nominatim database looks like under the hood. It then goes on to present some of the lessons we have learned over the last 10 years of managing a PostGIS database with more than 270 million searchable places. We talk about features that improved performance and about some that are best avoided. The talk concludes with some general observations about implementing search on top of an SQL database.
Come have a look under the covers at the data structures that enable geospatial and multi-dimensional indexing and search at massive scale in Apache Lucene and OpenSearch. This talk will cover not only the indexing structures considered and ultimately implemented in the Apache Lucene Open Source Project but the exceptional performance improvements and centimeter spatial accuracy obtained in the latest release. As a bonus, this talk will cover new and upcoming Spatial Analysis Aggregations and Processing available in the OpenSearch Open Source project.
From tessellation to multidimension encoding and block KD trees this talk will cover the algorithms and data structures written and committed to the following open source projects:
Apache Lucene (specifically the release of BKD based geo indexing https://issues.apache.org/jira/browse/LUCENE-8396)
Performance benchmarks for Lucene Spatial Indexing: https://home.apache.org/~mikemccand/geobench.html
Finally, we will discuss the future of the project including existing and evolving support for custom coordinate reference systems and projections, spatial regression modeling and statistics, and spatial visualizations with OpenSearch Dashboards.
In 2022 we've prepared an infrastructure based on QGIS & Input for municipal surveys, helping cities and regional councils in Israel to collect geospatial data with open source tools.
We focus especially on common surveys that save resources for each participating entity, such as road signs, city infrastructure and routine oversight activities (littering, fixes, upcoming works, traffic issues, gardening and cleaning), alongside longer-term planning such as zoning or mapping residents' complaints over time.
Our goal is to give medium to low socioeconomic status municipalities the same opportunities as "richer" municipalities, who spend much of their budget on proprietary GIS software. In some cases, this opens the door to the first GIS software such municipalities have ever used.
This talk will cover both the technical and the project management aspects of our efforts, as "converting" a market traditionally familiar only with proprietary software to open source has its challenges, especially in the public sector.
Aside from many new features, GRASS GIS 8 brings an improved graphical user interface focused on better user experience. Based on a broad community discussion involving several surveys and test sessions, we developed a new startup mechanism helping the users understand the data hierarchy and guiding them in their first steps. In addition, our surveys helped to identify a number of opportunities for improvements, including a need for a Single-Window mode that could fully replace the traditional Multi-Window GUI that has been in GRASS since the first GUI version in 1999. Therefore, during the GSoC 2021 project, the first steps towards the Single-Window GUI were established, eventually leading to the friendlier GRASS GUI in version 8.2. Come and listen to the presentation describing how a community-driven approach helped to steer the development direction of the GRASS graphical user interface to satisfy both GIS beginners and advanced users. You can also look forward to the brand-new screenshots of the GUI in version 8.2 that might eventually inspire you to try GRASS on your own.
OpenStreetMap editing is transitioning to mobile devices. There are few editing apps, and the best ones are thematic. This year I've published "Every Door": an app specifically designed to collect hundreds of shops and amenities. I've made it with the experience of mapping in OSM, making a Telegram bot for a similar purpose, and studying geospatial UX design. I've surveyed half a thousand amenities with the bot, and even more with this new app.
In this talk we'll briefly touch on the app itself and the OSM tagging model. The main attraction will be map UX design: why you should remove most of the interactivity from your maps. Interactive maps are hard to use even on desktop, and a small screen makes the challenge even bigger. Can we get rid of interactivity altogether? Let's see how working with maps can be made efficient, and how the ideas behind this app can make geodata collection apps better.
GISCO, the ‘Geographical Information System of the COmmission’, is a permanent service of Eurostat that fulfils the requirements of both Eurostat and the European Commission for geographic information and related services at European Union (EU), Member State and regional levels. These services are also provided to European citizens at large. GISCO’s goal is to promote and stimulate the use of geographic information within the European Statistical System and the European Commission.
One of the main lessons learned over the last years is not only to provide ‘conventional’ GIS datasets, but also to add a variety of distribution channels like Application Programming Interfaces (APIs) and Linked Open Data (LOD), plus human-friendly interfaces on top. APIs, for example, simplify software development and innovation by enabling applications to exchange data and functionality easily and securely within a digital ecosystem. Additionally, the implementation of APIs contributes to: a) an open government approach to modernise public administration, b) a modernised use of the European Interoperability Framework, and c) the application of the Once-Only Principle. The talk will describe some of GISCO's APIs supporting European institutions in their daily work as well as the public. For that, we are using FOSS tools in production environments. Besides, GISCO team members develop or contribute to a wide variety of software tools (e.g. eurostat-map.js, gridviz, IMAGE tool, diff and generalization tool) which will be presented for further use by the FOSS4G community.
ZOO-Project is a WPS (Web Processing Service) platform implemented as an open source project following the OGC standards. It was released under an MIT/X-11 style license and is currently in incubation at OSGeo. It provides a WPS-compliant, developer-friendly framework to easily create and chain WPS web services. This presentation gives a brief overview of the platform and summarizes the new capabilities and enhancements available in the new version. A brief summary of the open source project's history, with its direct link to FOSS4G, will be presented. The new release comes with a brand new ZOO-Kernel Fast Process Manager and with support for the approved standard OGC API - Processes Part 1: Core. The new functionalities and concepts available in the latest release will be presented and described, highlighting their interest for application developers and users. Apart from that, various uses of OSGeo software, such as GDAL, GEOS, PostGIS, pgRouting, GRASS, OTB and SAGA-GIS, as WPS services through the ZOO-Project will be presented. Then, the ongoing developments and future innovations will be explored.
pygeoapi is an OGC API Reference Implementation. Implemented in Python, pygeoapi supports numerous OGC APIs via a core agnostic API, different web frameworks (Flask, Starlette, Django) and a fully integrated OpenAPI capability. Lightweight, easy to deploy and cloud-ready, pygeoapi's architecture facilitates publishing datasets and processes from multiple sources. The project also provides an extensible plugin framework, enabling developers to implement custom data adapters, filters and processes to meet their specific requirements and workflows. pygeoapi also supports the STAC specification for static data publishing.
pygeoapi has a significant install base around the world, with numerous projects in academia, government and industry deployments, lowering the barrier to publishing geospatial data for all users.
This presentation will provide an update on the current status, latest developments in the project, including new core features and plugins. In addition, the presentation will highlight key projects using pygeoapi for geospatial data discovery, access and visualization.
Published in 2007, the INSPIRE Directive has established a pan-European Spatial Data Infrastructure (SDI) to support European Union (EU) policies related to or having an impact on the environment. The Directive requires Member States public organisations to make geospatial datasets in scope (i.e. belonging to 34 cross-sector categories known as data themes) interoperable, discoverable and accessible through view and download services. Fifteen years after the entry into force of the Directive, we assess the state of play, reflect on the lessons learned and, leveraging on these while also considering the current policy and technological context, elaborate a vision for the future evolution.
Through its Geoportal, which regularly harvests the EU Member States' national catalogues, the INSPIRE infrastructure currently provides access to approximately 90 thousand datasets. The number of those datasets and their update frequency are steadily increasing, as is the fraction of datasets whose metadata, data models and view/download services are compliant with the legal requirements of the Directive. The INSPIRE infrastructure is currently based on three so-called central components, which in turn are implementations of reusable and mature open source software solutions: the INSPIRE Reference Validator makes use of the ETF testing framework, the INSPIRE Registry is based on the Re3gistry software (included in OSGeoLive since 2021) and the INSPIRE Geoportal is currently being migrated to GeoNetwork. INSPIRE has also played a key standardisation role in Europe by fully promoting and relying on open standards, mainly by ISO and OGC. Finally, an active and engaged community of stakeholders, meeting at the annual INSPIRE Conference and other related ad-hoc events, has highly favoured the policy and technological development.
Despite many pros, lessons learned from INSPIRE also show some cons. These include e.g. overspecification in legislation (often leading to extensions to existing standards) which still limit implementation, and the lack of a common approach to data licensing. In addition, the current technological landscape is very different from the one from the INSPIRE dawn. New data sources (Internet of Things, citizen-generated and Earth Observation data, research data and data owned by businesses), new agile standards (e.g. OGC APIs for data sharing and modern standards for data encoding) and novel architectures (cloud, edge and fog computing) are creating an opportunity that INSPIRE shall leverage to remain relevant and fit-for-purpose. In parallel, driven by the recent European Strategy for Data, the current European policy context and related legislative instruments are strongly pushing for an increased, better and fairer use of all available data for the benefit of European economy and society.
Within this context, the talk will illustrate our vision to streamline and simplify the technological and organisational structure of INSPIRE towards a data-driven and self-sustainable ecosystem. We will mainly reflect on the key role played by open source software, open standards and open licenses, and on the need to redefine the governance of the infrastructure through the increasing involvement of open source communities, including OSGeo as a strategic partner.
The Apache Arrow (https://arrow.apache.org/) project specifies a standardized language-independent columnar memory format. It enables shared computational libraries, zero-copy shared memory, streaming messaging and interprocess communication without serialization overhead, etc. Nowadays, Apache Arrow is supported by many programming languages.
Geospatial data often comes in tabular format, with one or more columns containing feature geometries and additional columns with feature attributes. This is a perfect match for Apache Arrow. Defining a standard and efficient way to store geospatial data in the Arrow memory layout (https://github.com/geopandas/geo-arrow-spec/) can help interoperability between different tools and enables us to tap into the full Apache Arrow ecosystem:
- Efficient, columnar data formats. Apache Arrow contains an implementation of the Apache Parquet file format, and thus gives us access to GeoParquet (https://github.com/opengeospatial/geoparquet) and functionalities to interact with this format in partitioned and/or cloud datasets.
- The Apache Arrow project includes several mechanisms for fast data exchange (the IPC message format and Arrow Flight for transferring data between processes and machines; the C Data Interface for zero-copy sharing of data between independent runtimes running in the same process). Those mechanisms can make it easier to efficiently share data between GIS tools such as GDAL and QGIS and bindings in Python, R, Rust, with web-based applications, etc.
- Several projects in the Apache Arrow community are working on high-performance query engines for computing on in-memory and bigger-than-memory data. Being able to store geospatial data in Arrow will make it possible to extend those engines with spatial queries.
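The columnar layout at the core of all this can be illustrated with Arrow's variable-size binary layout: one contiguous data buffer plus an offsets buffer, which is the kind of representation geo-arrow-spec discusses for WKB geometry columns. The sketch below is simplified (real Arrow buffers also carry validity bitmaps and padding):

```python
from array import array

def build_binary_column(geometries):
    """Arrow-style variable-size binary layout: a contiguous data
    buffer plus an int32 offsets buffer with len+1 entries. Element i
    lives at data[offsets[i]:offsets[i+1]]."""
    offsets = array('i', [0])
    data = bytearray()
    for wkb in geometries:  # each entry would be WKB bytes in practice
        data.extend(wkb)
        offsets.append(len(data))
    return offsets, bytes(data)

def get_value(offsets, data, i):
    """Slice out the i-th element without copying the whole column."""
    return data[offsets[i]:offsets[i + 1]]
```

Because the whole column is just two flat buffers, it can be shared between processes or runtimes without any per-feature serialization, which is exactly what the IPC and C Data Interface mechanisms above exploit.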
MapMint is a comprehensive task manager for publishing web mapping applications. It is a robust open source geospatial platform allowing the user to organize, edit, process and publish spatial data to the Internet. MapMint includes a complete administration tool for MapServer and simple user interfaces to create mapfiles visually.
MapMint is based on the extensive use of OGC standards and automates WMS, WFS, WMT-S and WPS. Most of the MapMint core functions are run through WPS requests which call general or geospatial web services: vector and raster operations, mapfile creation, spatial analysis and queries, and much more. MapMint's server side is built on top of ZOO-Project, MapServer and GDAL, while its client side is based on OpenLayers and jQuery, providing user-friendly tools to create, publish and view maps.
MapMint's architecture and main features will be introduced in this presentation, and its modules (dashboard, distiller, manager, and publisher) will be described with an emphasis on the OGC standards and OSGeo software they use. Some short but relevant case studies and examples will finally illustrate some of the key MapMint functionalities.
The process of drawing new political boundaries in representative democracies has generally been done with closed source software. However, a number of open source products are changing the way governments draw their jurisdictions. The QGIS Redistricting Plugin has been used to redistrict communities in the United States, Canada, and Australia, and other open source software such as DistrictR has been used to redistrict the United States in their previous cycle, significantly cutting the cost needed to participate in this activity and allowing individuals to make better contributions. At its core, the software is simple but powerful: it allows users to change attributes in an attribute column using selection tools and displays aggregate statistics for other selected columns. Join John Holden, the plugin's developer, and Blake Esselstyn, a geographic and political consultant, for a plugin demonstration and a discussion of how governments and citizen groups have transitioned to using open source software in this important political area.
MapProxy is a tile server for geospatial data that is capable of caching, accelerating and transforming data from existing map services and serving it to various clients.
MapProxy is a tile cache, but also offers many more features like full support for WMS, re-projection of tiled map services and much more. MapProxy is open source (Apache Software License 2.0), is easy to install and configure, and runs on various operating systems.
For many of our customers, MapProxy is thus an essential part of their geodata infrastructure (GDI), but the option of asking on a mailing list is often not enough when they want to bring MapProxy into production. IT department guidelines often require a service contract; with proprietary software, such a contract is often a binding part of the user agreement. But what about open source software?
As a service provider, we offer professional support and service contracts for MapProxy (and other open source software), helping customers bring open source into production by filling the gap of missing warranty. In the talk we would like to discuss the various business models we have developed in the past, but we also want to show why MapProxy is an important part of the GDI for many customers.
The message is not surprising: you should quality-check your code, too, even if you are writing a small script for your own needs! However, maybe you have wondered whether all the warning messages are relevant to you, or got discouraged after a flood of messages from tools like Pylint. Perhaps you were even annoyed by it. This talk will help you get motivated and get started, and show how to automate these checks with continuous integration tools such as GitHub Actions.
In this talk, I will share my experience with adding various code and non-code checks to GRASS GIS which is primarily written in C, C++, and Python. Checking a mixed code base with over 30 years of development is not easy, but not impossible. The talk will cover code quality measures in GRASS GIS such as tests, Pylint, Black, GCC, CodeQL, and Super-Linter and how this compares to my experience with new and small organizational repositories.
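A minimal starting point for such automation might look like the following GitHub Actions workflow. This is a generic sketch, not the actual GRASS GIS configuration; tool choices and versions are assumptions to adapt to your repository:

```yaml
# Hypothetical minimal lint workflow -- adapt tools and versions.
name: lint
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pylint black
      # Start by failing only on errors; tighten the threshold over time.
      - run: pylint --errors-only .
      - run: black --check .
```

Starting with `--errors-only` and ratcheting up strictness gradually is one way to avoid the initial flood of warnings on a large legacy code base.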
The United Nations Mission in South Sudan (UNMISS) is a United Nations peacekeeping mission for South Sudan, which became independent on 9 July 2011. UNMISS was established on 8 July 2011 by United Nations Security Council Resolution 1996 (2011) and, as of March 2021, is composed of 19,075 total deployed personnel, including 14,222 troops, 217 experts on mission, 1,446 police personnel, 2,228 civilians, 387 staff officers and 388 UN Volunteers. It is headquartered in the South Sudanese capital, Juba.
Under Chapter VII of the Charter of the United Nations, UNMISS is therefore authorized to use all necessary means to implement its mandate which includes:
(a) Protection of civilians
(b) Creating conditions conducive to the delivery of humanitarian assistance
(c) Supporting the Implementation of the Revitalized Agreement and the Peace Process
(d) Monitoring, investigating, and reporting on violations of humanitarian and human rights law
The mission has decided to extend its public outreach activities in a new way by utilizing geospatial information and open geospatial tools and data to showcase some of its important activities in support of the above-mentioned mandates, and for this purpose contracted a service provider through a bidding exercise and procurement protocols.
In this general session talk, the speaker(s) will present on the topics below:
- UN Open GIS Initiative Background
- UNMISS GeoStories architecture, FOSS4G tools and data
- Preventing mis/dis-information by extending public outreach
- Review selected Geostories in support of UNMISS mandate
pygeofilter is a library to support the integration of geospatial filters. It is split into frontend language parsers (CQL 1 + 2 text/JSON, JFE, FES), a common Abstract Syntax Tree (AST) representation, and several backends (database systems) where the parsed filters can be integrated into queries.
Currently pygeofilter supports CQL 1, CQL 2 in both text and JSON encoding, OGC filter encoding specification (FES) and JSON filter expressions (JFE) as input languages. Additionally pygeofilter provides utilities to help create parsers for new filter languages.
The filters are parsed to an AST representation, which is a common denominator across all filter capabilities including logical and arithmetic operators, geospatial comparisons, temporal filters and property lookups. An AST can also be easily created via the API, if necessary.
pygeofilter provides several backends and helpers to roll your own. Built-in backends are for Django, SQLAlchemy, raw SQL, (Geo)Pandas dataframes, and native Python lists of dicts or objects.
pygeofilter is used in several applications, such as pycsw, EOxServer and ogc-api-fast-features.
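The frontend-to-AST-to-backend flow can be sketched with a deliberately tiny, hypothetical mini-AST; pygeofilter's real node classes and backend APIs differ, so treat every name below as illustrative:

```python
from dataclasses import dataclass

# Hypothetical mini-AST, loosely modelled on the idea of a common
# tree of attributes, literals and comparisons.
@dataclass
class Attribute:
    name: str

@dataclass
class Literal:
    value: object

@dataclass
class Comparison:
    op: str       # one of 'eq', 'lt', 'gt'
    lhs: object
    rhs: object

def evaluate(node, record):
    """A tiny 'native Python' backend: walk the AST against a dict,
    analogous to evaluating a parsed filter over a list of records."""
    if isinstance(node, Attribute):
        return record[node.name]
    if isinstance(node, Literal):
        return node.value
    if isinstance(node, Comparison):
        lhs, rhs = evaluate(node.lhs, record), evaluate(node.rhs, record)
        return {"eq": lhs == rhs, "lt": lhs < rhs, "gt": lhs > rhs}[node.op]
    raise TypeError(f"unknown node: {node!r}")
```

Because every frontend produces the same tree shape, a backend only needs one walker like `evaluate` regardless of whether the filter arrived as CQL text, CQL JSON or FES XML.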
Exploitation platforms offer a cloud-based virtual work environment where expert users can access data, develop algorithms, conduct analysis close to the data and share their value-adding outcomes. We now have a complementary ecosystem of platforms, data sources and cloud services. To fully exploit the potential of these complementary resources we anticipate the need to encourage interoperation amongst the platforms, such that users of one platform may consume the services of another directly platform-to-platform.
The goal of the EO Exploitation Platform Common Architecture (EOEPCA) project is to define and agree a re-usable exploitation platform architecture by identifying a set of common building blocks that provide their services through open interfaces (e.g. OGC), to encourage interoperation and federation within this Network of Resources. We are also developing an open source Reference Implementation, to validate and refine the architecture, and to provide an implementation to the community.
The Reference Implementation comprises a set of open source components that are available on GitHub, and provided with helm charts for Kubernetes deployment. The components can be used together as an integrated platform, or individually for specific capabilities - which include:
- Application Deployment and Execution Service (ADES) - processing engine for execution of user defined applications via OGC API Processes interface
- Processor Development Environment (PDE) - integrated web tooling to develop, test and package apps for ADES execution
- Resource Catalogue - metadata catalogue for data/applications which provides OGC CSW, API Records, STAC and OpenSearch interfaces
- Data Access - standards-based access to both platform and user-owned data (OGC WCS, WMS, WMTS)
- Workspace - centralises the user’s management of owned resources through personal Resource Catalogue and Data Access services, integrated with platform S3 object storage
- Identity and Access Management - OpenID Connect (OIDC) for authentication and User Managed Access (UMA) for authorization, with integrations for external identity providers
We provide an introduction to each of the building blocks and the open source projects that underpin their development.
All of our work is available on GitHub (https://github.com/EOEPCA), via our website (https://eoepca.org/) and through our helm chart repository (https://eoepca.github.io/helm-charts).
Let's assume you have an attribute-focused table, but you would still like to see a thumbnail of the associated geometry. Or more generally: How to dynamically render polygon geometries in a HTML page without any mapping library. Enter ST_AsSVG (PostGIS function)!
Last year I showed how we display geo data in our webapps using vector tiles (ST_AsMVT). This year I will explain how we apply ST_AsSVG of PostGIS on database records to create beautiful geo-infographics in pure HTML. The result is a geo-visualization similar to this one: Comparison maps of Australian Cities (Size, Population). The trickiest part will be the sizing of the SVG objects (viewport vs. viewBox).
The talk will contain some theory on SVG. It will then show basic setups for FastAPI, SQLModel, Jinja2 and, of course, PostGIS. All code will be made available via GitHub.
After the talk you will master sizing of SVG and be capable of creating your own dynamic geo-infographics directly from data stored in your PostGIS database.
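As an illustration of the viewport-vs-viewBox issue, the sketch below (plain Python, with made-up bounds and a hypothetical ST_AsSVG path for a unit square) wraps a path in an `<svg>` element whose viewBox is derived from the geometry's bounding box; note the Y flip, since ST_AsSVG negates y-coordinates:

```python
# A minimal sketch of the viewport-vs-viewBox sizing trick: wrap a path
# string (as ST_AsSVG would return it) in an <svg> whose viewBox matches
# the geometry's bounding box, so the browser scales it into the pixel
# viewport. Bounds would normally come from ST_XMin/ST_YMin etc.;
# here they are hard-coded for the example.
def svg_thumbnail(path_d, xmin, ymin, xmax, ymax, width=100, height=100):
    # ST_AsSVG negates y-coordinates, so the viewBox's minimum y is -ymax
    # and its height spans the geometry's y extent.
    view_box = f"{xmin} {-ymax} {xmax - xmin} {ymax - ymin}"
    return (
        f'<svg width="{width}" height="{height}" viewBox="{view_box}" '
        f'preserveAspectRatio="xMidYMid meet">'
        f'<path d="{path_d}" fill="steelblue"/></svg>'
    )

# Hypothetical ST_AsSVG output for a unit square with corner at (10, 20):
svg = svg_thumbnail("M 10 -21 L 11 -21 11 -20 10 -20 Z", 10, 20, 11, 21)
```

With `preserveAspectRatio` the browser letterboxes the geometry inside the fixed-size viewport instead of distorting it.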
Elasticsearch is a well-known and mature NoSQL database providing search and analytics services for big datasets. The “elasticity” of its name comes from the distributed design and easy scalability capabilities that have made it an industry leader for more than ten years. In this talk we will present two exciting new features that have recently been added to the product in the geospatial area: vector tile support and line and hexagon aggregations.
Vector tiles have become an industry standard for encoding large amounts of data to be displayed in the browser by web mapping libraries like MapLibre or OpenLayers. Elasticsearch's analytics & geo team has added a new API endpoint that renders search and aggregation queries as zipped protocol buffers, allowing developers to retrieve, straight from the datastore, assets that are ready to be sent to the user's browser without much further processing. This speeds up the rendering of large datasets by avoiding the transfer of JSON assets from Elasticsearch to application middleware.
Elasticsearch's geospatial aggregation capabilities have recently been extended by two new methods: one combines related points into a new line geometry (think of a vehicle track), and the other aggregates geometries into a hexagonal grid. The new geo-line aggregation will be very useful for asset tracking use cases, while the second enables Elasticsearch to perform powerful analytics combined with its extensive support for metric aggregations.
In this talk we will present this project, going through the different use cases with some examples and demonstrations using both Kibana Elastic Maps and a simple ad-hoc web project that leverages this new feature.
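For orientation, a vector tile request addresses a standard XYZ tile. The sketch below computes the Web Mercator tile containing a point and builds a request path in the shape of Elasticsearch's `_mvt` search endpoint (`/<index>/_mvt/<field>/<zoom>/<x>/<y>` in recent versions); the index and field names are invented for the example:

```python
# Compute the XYZ tile containing a lon/lat at a given zoom (standard
# Web Mercator "slippy map" tiling) and build the corresponding request
# path for the Elasticsearch vector tile endpoint. "tracks" and
# "location" are made-up index/field names.
import math

def lonlat_to_tile(lon, lat, zoom):
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def mvt_path(index, field, lon, lat, zoom):
    x, y = lonlat_to_tile(lon, lat, zoom)
    return f"/{index}/_mvt/{field}/{zoom}/{x}/{y}"

# Tile request covering Florence (11.25°E, 43.77°N) at zoom 10:
path = mvt_path("tracks", "location", 11.25, 43.77, 10)
```

The response body is a binary Mapbox Vector Tile that a client library such as MapLibre can render directly.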
When you care about the integrity of spatial data, you need to know the limitations and weaknesses of using simple feature datatypes in your database. For instance, https://land.copernicus.eu/pan-european/corine-land-cover/clc2018 contains 2,377,772 simple features, among which we find 852 overlaps and 1420 invalid polygons. For this test I used the ESRI FileGDB file and GDAL to import it into PostGIS. We find such minor overlaps and gaps quite often; they may not be visible to the human eye. The problem is that this covers up real errors and makes it difficult to enforce database integrity constraints. Close parallel lines also seem to cause a Topology Exception in many spatial libraries.
A core problem with simple features is that they don't contain information about the relations they have with neighbouring features, so the integrity of such relations is hard to constrain. Another problem is the mixing of old and new data in the payload from the client. This makes it hard and expensive to create clients, because you will need a full stack of spatial libraries and perhaps a complete, locked, exact snapshot of your database on the client side. Furthermore, a common line may differ from client to client depending on the spatial library, snapTo usage, tolerance values and transport formats.
In 2022 many systems depend on live updates, also for spatial data. So it's a big advantage to be able to provide simple and “secure” APIs with fast server-side integrity-constraint checks that can be used from a standard web browser. When we have these checks on the server side, we ensure the same rules apply across different clients.
Are there alternatives that can secure data integrity in a better way? Yes, for instance PostGIS Topology. The big difference is that PostGIS Topology has a more open structure, realized using standard relational database features. This lowers the complexity of the client and secures data integrity. In the talk “Use Postgis Topology to secure data integrity, simple API and clean up messy simple feature datasets” we will dive deeper into the details of PostGIS Topology.
Building an API for clients may be possible using simple features, but it would require expensive computations to ensure topological integrity, and the problem of mixing new and old border parts cannot be solved without breaking the polygon up into logical parts. Another issue is attribute handling: if you place a surface partly overlapping another surface, should that influence the attributes of the new surface?
We need to focus more on data integrity and on the complexity and cost of creating clients when using simple features, because the demand for spatial data updated in real time from many different clients, in a secure and consistent way, will only increase. This will be the main focus of this talk.
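As a toy illustration (plain Python, not a substitute for PostGIS's ST_IsValid or topology checks), here are two cheap server-side sanity checks an API could run on an incoming polygon ring before accepting it:

```python
# Two cheap validity checks a server-side API could run on a simple-
# feature polygon ring before accepting it: the ring must be closed and
# must enclose a non-zero area. Real validation (self-intersection,
# overlaps with neighbours) needs a geometry engine such as GEOS/PostGIS.
def shoelace_area(ring):
    # Signed area via the shoelace formula; zero means a collapsed ring.
    s = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

def basic_ring_checks(ring):
    errors = []
    if ring[0] != ring[-1]:
        errors.append("ring not closed")
    if abs(shoelace_area(ring)) == 0.0:
        errors.append("zero-area ring")
    return errors

ok = basic_ring_checks([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])
bad = basic_ring_checks([(0, 0), (1, 1), (2, 2), (0, 0)])  # collapsed line
```

Checks like these catch only the crudest errors; the point of the talk is that overlap and shared-border integrity needs a topological model, not per-feature tests.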
This talk is about the development of an Environmental Impact Assessment (EIA) data visualization system using FOSS4G. The system is being developed by Gaia3D utilizing several open source projects such as PostGIS, GeoServer, Cesium, and mago3D.
Although EIA has played an important role in environmental decision-making and sustainable development, most EIA statements are published as a mix of text and tabular data that is not easily accessible to, or understandable by, the public. The system was designed to improve stakeholders' and the public's understanding before and after a construction project by visualizing key environmental elements. The final goal of the system is to improve the EIA process so that not only experts but also non-expert citizens can participate in the EIA process and easily understand the meaning of EIA statements, with help from 3D GIS and 'Easy Finger' real-time simulation technology.
This system development is a 5-year project funded by the Ministry of Environment (MOE-2020002990005), South Korea. This talk will focus on the research outcomes of Phase I and on future plans. The final system will be released as open source with permission from the Ministry of Environment.
Workflow engines like Apache Airflow are commonly used in data engineering nowadays. They provide an infrastructure for setting up, executing and monitoring a defined sequence of tasks, arranged as a workflow application. Tasks and dependencies are defined in a declarative way or in a programming language like Python. Airflow established the use of directed acyclic graphs (DAGs) for workflow orchestration.
This talk compares a selected subset of the huge number of available open source workflow engines that are especially suited for workflows involving spatial data processing. It compares the well-known Apache Airflow engine with Dagster, another DAG-based solution, and with a BPMN-based workflow engine that uses Celery as a distributed task queue.
In the same space there is the new OGC API - Processes standard which is a modern REST API for wrapping computational tasks into executable processes. This talk gives an overview of the API and shows possible integrations with available workflow engines.
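The common core of all these engines can be sketched in a few lines of standard Python: tasks plus dependency edges, executed in topological order (task names here are invented; real engines add scheduling, retries and monitoring on top):

```python
# A minimal sketch of what a DAG-based engine like Airflow or Dagster
# does at its core: resolve dependencies into a topological order, then
# run each task once its upstream tasks have finished.
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """tasks: name -> callable; deps: name -> set of upstream names."""
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = tasks[name]()
    return order, results

executed = []
# Hypothetical spatial workflow: download a scene, reproject it,
# publish it as WMS.
tasks = {
    "download_scene": lambda: executed.append("download_scene"),
    "reproject": lambda: executed.append("reproject"),
    "publish_wms": lambda: executed.append("publish_wms"),
}
deps = {
    "reproject": {"download_scene"},
    "publish_wms": {"reproject"},
}
order, _ = run_dag(tasks, deps)
```

Airflow expresses the same structure with operators and `>>` dependencies; BPMN engines express it as a diagram, but the execution semantics are comparable.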
In this talk, we'll review the major milestones that have defined Spatial SQL as the powerful tool for geospatial analytics that it is today.
From the early foundations of the JTS Topology Suite and GEOS and its application on the PostGIS extension for PostgreSQL, to the latest implementation in Spark SQL using libraries such as the CARTO Analytics Toolbox for Databricks, Spatial SQL has been a key component of many geospatial analytics products and solutions, leveraging the computing power of different databases with SQL as lingua franca, allowing easy adoption by data scientists, analysts and engineers.
The CARTO Analytics Toolbox is a comprehensive library that provides advanced geospatial functionality through Spark SQL. It enables Spatial SQL analytics at scale providing the foundational tools for analyzing and visualizing geospatial data.
In this talk we'll cover the technical aspects of the library implementation using Open Source technologies, as well as demonstrating the installation and practical usage with a real-life example.
Our talk will go through some of the geospatial operations that can be performed directly in Spark and we will demonstrate how users of the Analytics Toolbox can create beautiful map visualizations leveraging the latest Open Source rendering tools; and how to address a wide variety of spatial use cases using other products built on top of open source technologies, like CARTO and Databricks.
The FunctionalScope software builds on the concept of the CityScope, developed by the MIT Media Lab City Science Group. The FunctionalScope supports urban planners in the functional planning phase of new neighborhoods, the phase in which a competition design proposal is refined in preparation for creating a binding land use plan (Bebauungsplan).
The tool offers a 3D view of the new urban design, with vector(ized) data of the architectural designs, embedded into a MapLibre-based application in the browser. Several near-real-time APIs offer the opportunity to evaluate a neighborhood's design performance in terms of pedestrian flows, wind comfort and traffic noise. Each simulation allows the user to set custom scenario criteria, enabling them, for example, to assess different policy and design strategies for the neighborhood, such as pedestrian access to private land or speed limits on city streets, or to simulate wind comfort for various wind conditions.
In addition to the web-interface for detailed planning stages, we have developed a tangible table, which allows users to iteratively generate new spatial configurations using 3D-printed buildings. Simulations are run for the designs created on the table, too.
The entire stack is built on open-source software.
We have used this tool in cooperation with the City of Hamburg (HafenCity GmbH) during the planning process of a new waterfront neighborhood, Grasbrook. The FunctionalScope is designed in a generic manner and will be used in the planning of at least one new neighborhood-scale urban development project in Hamburg.
This talk will present the tech stack behind the tool: starting from the translation of architectural data into geospatial data (GeoJSON), covering the 3D neighborhood visualization in MapLibre and presenting our open-source near-real-time simulation APIs. Moreover, the technology behind the tangible planning table, based on an infrared camera, ArUco markers and Unity, will be explained.
The talk concludes with lessons learned when developing and applying such an innovative tool to support a new neighborhood-scale development project.
In recent years, several Python packages (e.g. xarray, rasterio) have evolved around more basic software libraries such as netCDF4 or GDAL for accessing geospatial data. These packages allow working with all kinds of data formats (e.g. GeoTIFF, NetCDF, ZARR), providing the data in array format (NumPy, xarray), and constitute a fundamental part of any scientific analysis or operational task. However, they do not offer full flexibility when working with Earth Observation (EO) datasets. The multidimensional complexity of EO data (i.e. space, time, bands) is often resolved by distributing dimensions across many files and is thus not always easy to access. An important step forward in streamlining EO data access has been the Open Data Cube (ODC) toolbox, which utilizes predefined dataset configurations and file-based indices stored in a database. With this setup, ODC enables easy and uniform access to multidimensional geospatial datasets. Still, users are often confronted with a great variety of data formats, and with files distributed over different systems. This can pose a hurdle when working with ODC, especially if one wants to process a new stack of geospatial data, where the extra overhead of a database can stall swift progress.
To close this gap, the yeoda (''your earth observation data access'') Python software package offers a similar interface to ODC while allowing interaction with geospatial data at a lower level. It relies on two other Python software packages developed by TU Wien: geospade (definition of geospatial properties of a dataset, e.g. geometries) and veranda (read/write access to a variety of raster and vector data formats, e.g. GeoTIFF). This modular setup ensures a clear separation of concerns, specifically between geospatial operations and I/O tasks, yielding a homogenized interface independent of the actual data format. For example, geospatial operations based on tiled EO raster datasets can easily be performed across tile or file boundaries. Data access is then realised in veranda, which combines geometric properties with I/O objects listed in a table. On top of geospade and veranda, yeoda acts as a communication layer between files stored on the file system and data objects by adding additional dimensions to the data table, such as common metadata or file name entries. Thus, one can filter multiple files by their attributes (e.g. time, bands, variable names, satellite platform) before accessing the data.
Hence, yeoda guarantees the necessary freedom to apply arbitrary algorithms to manifold data formats, while simultaneously supporting scalability by means of parallelised I/O operations. While ODC is of tremendous value for accessing EO datasets through large-scale operational services, yeoda introduces a new level of data interaction, making it an indispensable tool for the EO user community. Looking at recent advancements in interoperable cloud-based processing via the openEO API, yeoda could be utilized as a slim back-end library to lower the hurdle of sharing new EO datasets and to foster scientific exchange.
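The filter-before-read idea at the heart of yeoda can be sketched with plain Python (file names and attributes invented; yeoda's real API differs):

```python
# Illustrative sketch (not yeoda's actual API) of its core idea: a table
# of files decorated with dimensions parsed from file names, which can
# be filtered by attributes before any pixel data is read.
from datetime import date

file_table = [
    {"path": "SIG0_20220101_VV.tif", "time": date(2022, 1, 1), "band": "VV"},
    {"path": "SIG0_20220101_VH.tif", "time": date(2022, 1, 1), "band": "VH"},
    {"path": "SIG0_20220108_VV.tif", "time": date(2022, 1, 8), "band": "VV"},
]

def select(table, **criteria):
    """Keep only rows whose attributes match all given criteria."""
    return [row for row in table
            if all(row[k] == v for k, v in criteria.items())]

# Narrow the stack down to the VV polarisation before touching the data:
vv_files = [row["path"] for row in select(file_table, band="VV")]
```

Only the files surviving the filter would then be opened, which is what keeps access to large, multi-file EO stacks cheap.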
GC2/Vidi: What’s new in spatial data infrastructure project
The GC2/Vidi platform helps you build a spatial data infrastructure quickly and easily. It is powered by open source components for a scalable solution focused on freedom rather than fees.
GC2/Vidi comprises two software projects:
GC2 – makes it easy to deploy PostGIS, MapServer, QGIS Server, MapCache, Elasticsearch, GDAL/OGR. And offers an easy-to-use browser application to configure the software stack.
Vidi – a modern take on browser GIS. It is the front-end client for GC2.
The GC2/Vidi project is released under the GPL and was accepted as an OSGeo Community Project in 2018.
The talk gives a brief overview of the platform and summarizes the capabilities it has to offer. A new command-line (CLI) tool will be introduced, which enables administration, import/export of data, starting MapCache seed jobs, running SQL and more.
In addition, the new "GC2/Vidi User Group" will be introduced. It is a non-profit organization whose mission is to promote the adoption of GC2/Vidi and the underlying technologies as well as knowledge sharing. The organization was founded in 2020 and has about 15 members, including municipalities, public transport and private companies.
This presentation will go over recent updates to GeoBlaze, including the addition of support for Cloud-Optimized GeoTIFFs. We will also discuss the roadmap for the next couple years.
GeoBlaze can be used wherever vectors and rasters meet. You can use it to calculate the hectares of wheat in a country, the change in daily median earth temperature, and identify wildfires in satellite imagery.
GeoBlaze is built on top of the following open-source projects: dufour-peyton-intersection, georaster, geotiffjs, and calc-image-stats.
- Time Series Analysis with GeoBlaze: Mean Daily Air Temperature for the Month of May: https://observablehq.com/@geosurge/time-series-analysis-with-geoblaze-mean-daily-air-temperat
- Identifying Carr Wildfire with Landsat 8: https://observablehq.com/@geosurge/identifying-carr-wildfire-with-landsat-8
- Hectares of Rainfed Wheat in Ukraine: https://observablehq.com/@danieljdufour/hectares-of-rainfed-wheat-in-ukraine
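GeoBlaze itself is a JavaScript library; purely to illustrate what a zonal statistic computes where vectors and rasters meet, here is a conceptual Python sketch with invented data, averaging raster cells inside a zone while skipping nodata:

```python
# Conceptual sketch (GeoBlaze is JavaScript; this only illustrates the
# idea): a zonal mean averages the raster cells whose position falls
# inside a zone, ignoring nodata cells.
NODATA = -9999

def zonal_mean(grid, inside):
    """grid: 2D list of values; inside(row, col) -> bool zone test."""
    values = [v for r, row in enumerate(grid)
                for c, v in enumerate(row)
                if v != NODATA and inside(r, c)]
    return sum(values) / len(values) if values else None

# Made-up temperature raster with one nodata cell:
temperature = [
    [12.0, 14.0, NODATA],
    [13.0, 15.0, 16.0],
]
# Hypothetical zone covering the left two columns:
mean = zonal_mean(temperature, lambda r, c: c < 2)
```

In GeoBlaze the zone test is driven by a real vector geometry rasterized against the grid (via dufour-peyton-intersection), but the reduction step is the same.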
The UN Open GIS Initiative, established in March 2016, aims to identify and develop an open source GIS bundle that meets the requirements of UN operations, taking full advantage of the expertise of contributing partners (member states, international organizations, academia, NGOs and the private sector).
Geospatial Information Systems (GIS) have played a substantial role in providing timely and effective geospatial information products (maps and dynamic tools) to ensure that United Nations operations are equipped with suitable information to support UN mandates through informed planning and decision-making processes. The UN has been using proprietary GIS software for the past two decades. The rapid growth and development of open source GIS solutions presents technological potential, operational flexibility and financial benefits, as well as easy access for UN operational partners and host nations.
In view of the complexity and variety of UN operational demands, and of the outcomes and lessons learned from the UN Open GIS Initiative, the need was identified to develop a hybrid GIS platform in which users can access the most suitable solutions to fulfil operational demands in a flexible and cost-effective manner, whether those solutions are open source, proprietary, a combination of both, and/or complementary to each other. The hybrid model lets two software stacks coexist, one open source and the other proprietary, rendering different services to end users and applications.
Significant progress has been made so far in developing open source based GIS solutions, such as a hybrid geospatial database, a GeoPortal, analytical models/applications, data collection, and optimized/innovative applications harmonizing open source technology with proprietary software, as well as open GIS training for UN staff to enable a smooth transition from a proprietary to a hybrid GIS platform.
In particular, the talk will provide an overall update on what the UN Open GIS Initiative has achieved during the past year, such as the hybrid GIS architecture, a mobile GIS solution, the story map project, field applications of OpenDroneMap and the UN Vector Tile Toolkit, as well as capacity-building activities.
The shift to using vector rendering has enabled maps to take a leap forward compared to using raster data. It is now possible to offer a much richer experience by performing styling, processing and filtering directly in the client. Coupled with tiled rendering, it is now feasible to work with huge datasets directly in the web browser.
This presentation will look at how applications can be built using the open source deck.gl library, with a focus on displaying vector tilesets, styling and filtering data on the client, with acceleration provided by the GPU. We will look at how deck.gl elegantly works with vector tiles and show how maps and visualisations can be styled using a few lines of code. We will also explore tools provided by the CARTO platform, which bring these features to those without programming experience, via a web-app.
A brand new feature of deck.gl will be presented: the MaskExtension is a powerful tool that allows one dataset to act as a geospatial mask for another. For example this can be used to let the user select features on a map using a lasso tool, or to select map features based on a geospatial bound. All at 60fps on the client.
The continuously increasing amount of long-term and historic data in EO facilities, in the form of online datasets and archives, makes it necessary to address technologies for the long-term management of these datasets, including their consolidation, preservation, and continuation across multiple missions. The management of long EO data time series of continuing or historic missions, with more than 20 years of data already available today, requires technical solutions and technologies that differ considerably from those exploited by existing systems.
The ESA project LOOSE (Technologies for the Management of LOng EO Data Time Series) enables investigating, testing and implementing new technologies to support long time series processing.
For specific tasks (such as ingestion, discovery, access, processing and analysis of EO data), a multitude of completely different mature open source components is usually available. LOOSE aims at combining functionally similar solutions from different heritages into one comprehensive framework. LOOSE even supports parallelism, in that multiple solutions for the identical task are available and the application developer is invited to choose between these different components during implementation (e.g. "GeoServer" versus "EOxServer").
In addition, LOOSE partners extended well-known existing components with new capabilities (i.e. interfaces) to support efficient ingestion, discovery, exploitation-optimized access, processing and optimized analysis of EO data time series. For example, GeoServer was extended with the capability to handle STAC metadata.
Overall outcome of the project is a "blueprint architecture concept" which focuses on the interfaces between components and takes innovative concepts such as Bulk data retrieval from dedicated archives, OGC's Data Analysis Processing API and Data Cubes offering Discrete Global Grid Systems into consideration (see enclosed viewgraph).
The LOOSE system architecture is inspired by the EO Exploitation Platform Common Architecture (EOEPCA) and focuses on the technological evolution of selected services that enable the end-to-end workflow from retrieving long-term archived EO products to the extraction of high-level information based on processed value-added datasets. Architecture and interoperability are evaluated within LOOSE by using different implementations of these services (e.g. EOxServer and GeoServer) and deploying the whole system on two different infrastructures (DLR/LRZ and Mundi/OTC). The complete LOOSE infrastructure is built on Kubernetes and is therefore well transferrable between different cloud providers.
The validity of the LOOSE blueprint architecture is demonstrated in three different real-world application pilots.
These applications cover quite different thematic areas:
- Agricultural monitoring (based on Sentinel-1 and -2 data),
- monitoring urbanization globally (also based on Sentinel-1 and -2) and
- supporting fishery in the Black Sea (multi sensor approach, including in situ-data).
LOOSE partners are DLR (Oberpfaffenhofen), EOX (Vienna), Terrasigna (Bucharest) and Mundialis (Bonn).
PostgreSQL is the most advanced open source RDBMS. As GIS folks, you most probably use it in combination with PostGIS, its geospatial extension.
When dealing with geospatial data, we usually focus on geometries. But most feature attributes are text data. Of course, filtering this text data with standard SQL capabilities is a day-to-day operation for database users.
But PostgreSQL provides much more capabilities when it comes down to text data management. In this presentation, we will go through a few of them.
After a quick look at standard text functions in PostgreSQL, we will discover the lesser-known fuzzy matching modules:
pg_trgm extension allows for string searches using trigrams to determine a similarity rank between text items
fuzzystrmatch extension provides fuzzy matching functions like soundex, Levenshtein, metaphone
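To give a feel for how pg_trgm ranks matches, here is a simplified re-implementation of its trigram similarity in plain Python (pg_trgm pads and splits per word, so its exact scores may differ slightly; this is only a sketch):

```python
# Simplified sketch of pg_trgm's similarity: reduce each string to a set
# of 3-character "trigrams" (with padding, as pg_trgm does) and compare
# the sets with a Jaccard ratio. pg_trgm's exact scores may differ.
def trigrams(text):
    padded = "  " + text.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

# Typo-tolerant matching, as in: SELECT similarity('Firenze', 'Firense');
score_close = similarity("Firenze", "Firense")  # shares most trigrams
score_far = similarity("Firenze", "Roma")       # shares none
```

This is what makes `WHERE name % 'Firense'` find 'Firenze' even though a plain equality or LIKE filter would miss it.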
Then, we will explore PostgreSQL's Full Text Search (FTS) capabilities.
Last but not least, we will peek into PostgreSQL's collation concept, which has nothing to do with your lunch. Collations are a powerful feature in PostgreSQL that let you adapt the way you deal with text data according to the locale. Like trying to answer this - apparently - obvious question: is '12' before or after '2'?
And, because we can, display all of this on a map :-)
Orfeo ToolBox (OTB) is a free and open-source remote sensing software. It is available on multiple platforms (Linux, Windows and macOS) and was developed primarily by CNES (the French space agency) and CS Group in the frame of the ORFEO program (the French and Italian support program for Pléiades and COSMO-SkyMed).
OTB can process large images thanks to its built-in streaming and multithreading mechanisms. Its data processing scheme is primarily based on ITK pipelines, and it uses GDAL to read and write raster and vector data. Many formats are supported by the library (at least those supported by GDAL), such as COSMO-SkyMed, Formosat, Ikonos, Pléiades, QuickBird, Radarsat-2, Sentinel-1, Spot 5, Spot 6/7, TerraSAR-X and WorldView-2.
OTB provides a lot of applications to process optical and SAR products: ortho-rectification, calibration, pansharpening, classification, large-scale segmentation and more. The library is written in C++, but all the applications can also be accessed from Python, the command line launcher, QGIS and Monteverdi, a powerful satellite image visualization tool bundled in the OTB packages and capable of manipulating large images efficiently.
The library also facilitates external contributions thanks to the remote module functionality: users can add new applications without modifying the core of the library. If this new remote module is relevant, it could be added as an official remote module, like DiapOTB (differential SAR interferometric processing chain) and OTBTensorflow (multi-purpose deep learning framework, targeting remote sensing images processing).
Moreover, several operational image processing chains are based on OTB: their algorithms use the framework of OTB Applications while the orchestration is written in Python. Some of the chains are also open source: Let It Snow (snow cover detection), iota2 (large-scale land surface classification), WASP (multi-temporal image fusion), S1Tiling (Sentinel-1 calibration) and MAJA (Maccs-Atcor Joint Algorithm). The Orfeo ToolBox is also part of the Sentinel-2 ground segment, being integrated in the S2 Instrument Processing Facility (IPF) module, where it is used for radiometric corrections and resampling.
In the latest releases (from 7.x to 8.0), several features have been added as new SAR sensor models and new applications, and the OSSIM dependency - used for geometric sensor modelling and metadata parsing – has been removed in favor of functionalities available in GDAL. The aim of the presentation is to present the major features of OTB, the latest updates, the future features and architecture of the library and how OTB is used at CNES and CS Group to process data from scientific and developer points of view.
With our Lizmap hosting service, we provide and monitor several hundred QGIS servers. These QGIS servers receive and process 3.5 million requests per week, including 3 million WMS GetMap requests.
We do not control the content of these QGIS projects, which are uploaded by our customers to our servers. Therefore, we need to deal with projects that have various kinds of issues. Some QGIS projects can have very heavy SQL views that are slow to load. Our infrastructure may host projects with hundreds of layers and complex symbology. Users can publish QGIS PDF layouts (A4 and A3) with custom logos, etc. This can lead to memory problems.
GIS technicians can add different data sources to these QGIS projects: vector and raster files, PostgreSQL/PostGIS databases, and OGC WMS, WFS and WMTS web services. We need to ensure that QGIS Server works properly for all customers when executing incoming requests, even when some external web service providers are too slow to respond or are temporarily offline.
We need to take care of possible errors propagated by these projects. In some circumstances, we have about 10 thousand errors per week coming from QGIS server.
The goal of this presentation is to give an overview of what QGIS Server can experience in the wild and of what we need to do to make the Lizmap user experience the best possible: monitoring, proxying, caching.
In force since 2007, the INSPIRE Directive has established a European Spatial Data Infrastructure to support European Union (EU) policies relevant to the environment. The INSPIRE Geoportal (https://inspire-geoportal.ec.europa.eu) constitutes its main component, being the central point of access to all datasets published by EU Member States falling under the scope of the Directive. Using the INSPIRE Geoportal, users can search for, access, visualize and download datasets published by more than 7000 data providers from across Europe.
In line with the open source strategy of the European Commission and the ambition towards a sustainable evolution of the INSPIRE infrastructure based on open source components, since 2021 the INSPIRE Geoportal has been revamped using cutting-edge open source applications and open standards, while redesigning the way in which information and services are offered to users.
The process comprises deep changes in the Geoportal backend, totally renovating the underlying catalogue application: management interface, powerful harvesting engine, set of metadata, data and service linking tests, search engine, automatic metadata translation capabilities, containerization and deployment in a cloud environment. In addition, a new frontend (user interface) is integrated with the mentioned backend using APIs, making both layers more independent in terms of technological stack and update cycles.
The application selected for achieving these goals is GeoNetwork opensource, currently constituting the catalogue choice of around 80% of Member States national geospatial data portals in the EU. Since its inception in 2001, it has been developed with a strong focus on international standards and many new features have been added over the years.
During the last year, the GeoCat staff and the INSPIRE team at the European Commission’s Joint Research Centre have closely collaborated in the development of a new powerful and versatile system for delivering the revamped INSPIRE Geoportal, contributing the improvements – to the maximum possible extent – to the core of GeoNetwork to maximize their exploitation by the geospatial community.
In more detail, the system encompasses a high-performance harvesting system based on microservices, including link validation and reporting functionality. All these pieces have been developed as separate components, which can be scaled independently from other components within a GeoNetwork-based infrastructure. Performance and compliance with the INSPIRE Technical Guidelines were among the key requirements of the work. As a result, GeoNetwork's capabilities for supporting the INSPIRE Directive will be substantially improved and completed in the toolkit. This, in turn, will also improve transparency and collaboration between the national authorities and the European level.
At the end of this transitional phase for the INSPIRE Geoportal, foreseen by mid-2022, the project will deliver tangible benefits. Firstly, by providing EU Member States with an up-to-date and transparent toolkit for managing their geospatial metadata, data and services in compliance with the INSPIRE Directive's monitoring and reporting obligations. Secondly, by making a direct and valuable contribution to the core of GeoNetwork and to the whole open source geospatial community.
MapServer is one of the founding OSGeo projects, and is used for publishing spatial data and interactive mapping applications to the web.
This talk provides an overview of new developments for existing users and shows the potential of MapServer for those yet to try the software.
We'll review migrating to the new MapServer 8.0 release, using the new OGC API, highlighting lesser-known features, optimizing performance, and reporting news from the MapServer ecosystem.
MapScript, a scripting interface to MapServer provided in several languages such as Python, PHP, and C#, will also be covered.
This talk will give an overview of current and planned development for MapServer and its related project MapCache, a tile server that speeds up access to map layers.
Finally, we'll look at how to become involved in the MapServer community both as a user and as a developer.
This talk is technical feedback on why the COG (Cloud Optimized GeoTIFF) format is valuable outside the cloud and can speed up productivity in many ways.
During the first months of remote work and COVID, the IT department was overloaded and had to face many issues, such as bandwidth limitations. Image display suffered in GIS clients. Web services were always available, of course, but users lost control over band order and radiometry settings. WCS was supposed to be the solution; unfortunately, it offered degraded performance.
COG is supposed to be served from an HTTP server or S3. But we simply tested it from a network drive / mount point, and it offered great performance. Depending on the internet connection, it can be as fast as local access!
The advantages of COG should be considered outside of the cloud too, as remote work keeps developing. It can avoid deploying heavy web-service infrastructure just for raster visualization.
On another front, we benchmarked publishing other raster formats against COG in GeoServer. For a Regional Data Infrastructure, COG streamlines raster storage: the same file serves both as input for web services and as the raw file offered by open-data download services and open archives.
Finally, I will share feedback, tips and tricks for finding the best parameters to convert orthophotography, DEMs, DSMs, etc. to COG.
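To make that last point concrete, here is a minimal sketch of such a conversion with the GDAL Python bindings (GDAL >= 3.1 ships a dedicated COG driver). The compression and overview settings below are illustrative assumptions, not the parameter recommendations discussed in the talk.

```python
def cog_options(compress="DEFLATE", overview_resampling="AVERAGE"):
    """Build creation options for GDAL's 'COG' driver (assumed defaults)."""
    return [
        f"COMPRESS={compress}",               # lossless; JPEG may suit orthophotos
        f"OVERVIEW_RESAMPLING={overview_resampling}",
        "BLOCKSIZE=512",                      # internal tile size
    ]

def to_cog(src_path, dst_path, **kwargs):
    """Convert any GDAL-readable raster to a Cloud Optimized GeoTIFF."""
    from osgeo import gdal  # requires GDAL >= 3.1 built with the COG driver
    gdal.UseExceptions()
    gdal.Translate(dst_path, src_path, format="COG",
                   creationOptions=cog_options(**kwargs))

# e.g. to_cog("ortho.tif", "ortho_cog.tif", compress="JPEG")
```

The right options depend heavily on the data type (orthophoto vs. DEM), which is exactly what the talk explores.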
Belgian federal authorities are working on a PSI/INSPIRE conversion tool. We have produced an enhanced DCAT-AP 2.0 profile. We have proposed using Atom feeds to instantiate dcat:Distribution classes because of their semantic completeness. We have tried to keep most of the INSPIRE metadata elements in order to preserve the work that has been done over the years.
Now we are working on its implementation through GeoNetwork 4.x microservices that would provide consistent multilingual DCAT-AP RDF/XML. By doing this, we expect our datasets to become more accessible through many platforms and open data portals, and to become truly FAIR data.
This work is the result of strong collaboration between Belgian federal authorities (e.g. the Cadastre, the National Mapping Agency, the Office of Federal Statistics, ...). Moreover, we are involving regional authorities in order to reach a certain level of harmonization. Now we would like to share these developments with the open source community. You can find more information at https://github.com/belgif/inspire-dcat.
This talk will be about the art of beautiful digital cartography. Some of the features in MapServer can help make maps that stand out a little extra. We will focus on advanced line symbology, the layer composition pipeline and the newly added GEOMTRANSFORM "centerline".
Creating very complex line symbology can be tricky. We will go into detail about how to build such symbology. The layer composition pipeline offers many exciting possibilities; we will show various examples of how to achieve stunning symbology for different feature types. The geomtransform centerline function opens up beautiful labeling possibilities, and my first experiences and lessons will be shared. Other topics that may come up include the possibilities of "Named Styles".
To summarize, this will be a talk about some new features and some older features in MapServer, described in more detail. The talk is based on practical experiments and real problems that the author has experienced.
The Orfeo ToolBox (OTB) is used as a development framework for satellite image processing over large datasets in several operational projects. Its image processing features (multithreading, streaming, RAM configuration) allow big images to be processed quite fast. The operational processing chains use OTB through its Python and C++ APIs.
Among the optical processing chains using OTB we can list MAJA, WASP, BIOPHY and IOTA2. MAJA (Maccs-Atcor Joint Algorithm) is an atmospheric correction and cloud screening software based on multi-temporal and multi-spectral processing. This chain uses L1C products to generate high-quality L2A surface reflectance time series for the Landsat 8, Venus and Sentinel-2 missions, and it is mainly used by the THEIA distribution center. The core algorithms of MAJA are based on the Orfeo ToolBox. To process a product, the chain uses aerosol content, cloud and shadow detection and various atmospheric effects to estimate accurate surface reflectance values. The main problem with L2A products is the presence of clouds in the time series, which is why WASP was created. WASP (Weighted Average Synthesis Processor) delivers L3 products providing monthly syntheses of cloud-free reflectance for the Sentinel-2 and Venus L2A products distributed by THEIA. This processor mainly includes a directional correction to normalize the data and a weighted average of surface reflectance. Two other operational chains that use OTB are BIOPHY, whose goal is to create L2A products containing biophysical variables (FAPAR, FCOVER, LAI) related to the presence of vegetation in the image over a year, and IOTA2, a land cover processor working over a year of Sentinel-1 and Sentinel-2 data, whose algorithms use the classification toolbox provided by OTB to process large areas, for example to determine the areas covered by buildings.
The Orfeo ToolBox is also used in radar processing chains, like DiapOTB or S1Tiling. S1Tiling is a generic processing chain for Sentinel-1 time series developed with open source software. Its main goal is to produce time series of analysis-ready S1 images for large areas. The algorithms use the SAR processing toolbox from OTB to take advantage of its in-memory pipelining capabilities. DiapOTB performs differential SAR interferometry. It uses two SAR images of the same portion of the Earth's surface taken at different times as input, and aims to analyze potential events (earthquakes, destruction, ...) by highlighting differences between the SAR images.
To conclude, OTB is the central framework for a wide range of operational chains in remote sensing. Its genericity allows it to cover many use cases in one single tool. Note that it is also distributed within other projects, such as WorldCereal, SNAP and AI4GEO as a toolbox, or RUS for training purposes.
PgMetadata is made for people using QGIS as their main GIS application, and PostgreSQL as their main vector data storage.
The layer metadata is stored inside your PostgreSQL database, in a dedicated schema. Classical fields are supported, such as the title, description, categories, themes, links, and the spatial properties of your data.
PgMetadata is not designed as a catalog application which lets you search among datasets and then download the data. It is designed to ease the use of metadata inside QGIS, allowing you to search for a dataset and open the corresponding layer, or to view the metadata of already loaded PostgreSQL layers.
By storing the metadata of the vector and raster tables inside the database:
- QGIS can read the metadata easily by using the layer PostgreSQL connection: a dock panel shows the metadata for the active layer when the plugin detects metadata exists for this QGIS layer.
- QGIS can run SQL queries: you can use the QGIS locator search bar to search for a layer, and load it easily in your project.
The administrator in charge of editing the metadata will also benefit from the PostgreSQL storage:
- PostgreSQL/PostGIS functions are used to automatically update some fields based on the table data (the layer extent, geometry type, feature count, projection, etc.).
- The metadata is saved with your data anytime you back up the database
- You do not need to share XML files across the network or install a new catalog application to manage your metadata and allow the users to get it.
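Because the metadata lives in plain PostgreSQL tables, any client can read it with ordinary SQL, not only QGIS. A minimal sketch of a locator-style search from a script (the "pgmetadata.dataset" table and its column names are assumptions here; check the schema your plugin version installs):

```python
# SQL against the plugin's metadata schema; object names assumed, verify locally.
SEARCH_SQL = """
SELECT schema_name, table_name, title, abstract
FROM pgmetadata.dataset
WHERE title ILIKE %s OR abstract ILIKE %s
ORDER BY title;
"""

def search_metadata(conn, term):
    """Return metadata rows whose title or abstract matches the search term."""
    pattern = f"%{term}%"
    with conn.cursor() as cur:
        cur.execute(SEARCH_SQL, (pattern, pattern))
        return cur.fetchall()

# e.g. with psycopg2: search_metadata(psycopg2.connect("dbname=gis"), "roads")
```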
The plugin contains some processing algorithms to help the administrator. For example:
- a script helps to create or update the needed "pgmetadata" PostgreSQL schema and tables in your database
- an algorithm creates a QGIS project suitable for metadata editing. This project uses the power of QGIS to create a rich user interface allowing you to edit your metadata easily (forms, relations). Why use another interface when QGIS rocks?
More PgMetadata features will be shown during the presentation:
- Modification of the template to tune the displayed metadata
- Export a metadata dataset to PDF, HTML or DCAT
- Publish the metadata as a DCAT catalog with Lizmap Web Client module for PgMetadata. It can then be harvested by external applications (Geonetwork, CKAN)
- The data model is very close to the QGIS metadata storage and the DCAT vocabulary for compatibility.
We will also show the latest features, such as the new support for PostgreSQL rasters.
In PostGIS Topology, a merge of two surfaces does not involve spatial operations, since the surface-to-border relation is kept as foreign-key structures in the database. This means that the border of the new object is not spatially touched or changed when two surfaces are merged. With simple features, the common border must be computed on the fly, which may involve snapTo and cause tiny overlaps and gaps.
With PostGIS Topology you can easily build an API where the client only sends new borders, which is key to securing data integrity. This ensures that old borders are not moved by a client error or by a simple transport format, because existing points are never passed back to the server. PostGIS Topology makes it easy for the server to work with those new borders (deltas), because there are standard methods for this in PostGIS Topology and all relations between borders and surfaces are stored in the database. PostGIS Topology also has validation routines, in addition to using standard database constraints, to keep the system healthy.
The principles PostGIS Topology is based on were used in spatial systems many years ago, but one problem was keeping the border line work nice and clean so it did not end up as spaghetti. So one of the first things we did together with Sandro Santilli was to create methods on top of PostGIS Topology to avoid this, by throwing away any border parts that do not contribute to a new "valid" surface.
PostGIS Topology is built on a relational database model based on SQL-MM Part 3. Your own domain data is easily linked to border and surface objects, and more. For instance, checking domain attributes of the surface on the other side of a border is not a spatial query but a standard relational query.
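As a sketch of that idea: in a PostGIS Topology schema, the edge table records the face on each side of every edge, so finding the neighbouring surfaces across a border is a plain relational join, with no spatial operator involved. The topology name "my_topo" below is hypothetical.

```python
# Faces adjacent to a given face, found purely through the edge table
# ("my_topo" would be a topology schema created with topology.CreateTopology).
NEIGHBOUR_SQL = """
SELECT DISTINCT
       CASE WHEN e.left_face = %(face_id)s THEN e.right_face
            ELSE e.left_face END AS neighbour_face
FROM my_topo.edge_data AS e
WHERE %(face_id)s IN (e.left_face, e.right_face);
"""

def neighbour_faces(conn, face_id):
    """Return the ids of all faces sharing a border with face_id."""
    with conn.cursor() as cur:
        cur.execute(NEIGHBOUR_SQL, {"face_id": face_id})
        return [row[0] for row in cur.fetchall()]
```

Joining those face ids to a domain table (land cover class, ownership, ...) is then a standard relational query.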
The following projects will also be touched in this talk:
https://gitlab.com/nibioopensource/pgtopo_update_sql (Functions using Postgis Topology to make it easy to create spatial update clients.)
https://github.com/strk/qgis_pgis_topoedit (Postgis Topology is very well integrated with QGIS.)
https://github.com/larsop/resolve-overlap-and-gap (Show how we clean up, simplify, generalize simple feature tables with millions of rows using Postgis Topology)
Is a relational database structure a good choice for PostGIS Topology? In my opinion, yes: since it is also aligned with SQL-MM Part 3 rather than a random private structure, and all the great PostGIS functions are available, this is a very good combination. You may take a glance at https://www.ibm.com/ibm/history/ibm100/us/en/icons/reldb/ and other articles about relational databases.
The plan now is to build a full ecosystem around Postgis Topology with a generic client to support declarative rules, where you can define attributes, rules for attribute handling and how to deal with overlap and gap.
All the work NIBIO has done and is doing here would not have been possible without the great support from Sandro Santilli.
This talk will give an overview of the organizational and technical characteristics of the open source project Masterportal. We will also give an outlook on our activities towards Masterportal becoming an OSGeo project in the future.
Driven by the growing trend of open governmental data and the resulting need for easy-to-use data platforms, Masterportal (published under the MIT license) has successfully developed into a powerful, customizable open source solution for the visualization of geospatial data over the past years.
In order to build up a user and development community, the Masterportal Implementation Partnership was founded in 2018. This organization currently includes 35 public administration partners at the federal, state, and local levels in German-speaking countries (as of 02/2022). The most important committees of the Implementation Partnership are:
- Strategic Committee (steers and controls the strategic direction of the master portal)
- Technical Committee (supports the Strategic Committee in technical issues)
- Product maintenance (technical further development, release management, etc.)
- Maintainergroup (supports the product maintenance in technical further development, processing of PullRequests etc.)
- Product Management (coordinates organizational matters, public relations, event planning, etc.)
In addition to regular committee meetings, various workshops are organized, for example on the initial setup of the software or on special technical topics such as the integration of secure geodata services.
Besides the partners from the public administration side, there are various companies that offer support and maintenance and contribute to the further development of Masterportal.
From a technical point of view, the code is mainly based on OpenLayers, Vue.js and Bootstrap, and a wide range of modern ES6 libraries are used in development. There are accurate and helpful development guidelines and code conventions to ensure a reasonable code quality.
Without any programming skills, a beginner can easily set up and configure a feature-rich, modern WebGIS portal, including an individual design. The configuration is done in an application context file defined in JSON format. Besides well-known features like printing, common map interactions, layer management, routing, search or drawing, Masterportal also supports all common OGC standards, including WMS-Time and the SensorThings API, as well as the integration of metadata records.
Also, Masterportal serves as an underlying software basis for more individual projects - therefore the addon concept allows maintainers to add further individual functionality, like special tools or individual feature info themes.
The talk will introduce the project and show some examples from real-world clients built upon Masterportal.
Twitter Link: twitter.com/masterportalorg
Code, Documentation: bitbucket.org/geowerkstatt-hamburg/masterportal/
Update on the status of OGC API standards and draft specifications enabling client-driven execution of processing workflows, supporting on-demand and ad-hoc selection of data and algorithms. Overview of the capabilities enabled by OGC API - Tiles, OGC API - Coverages and Processes – Part 3: Workflows and Chaining. Demonstration of both a server and a client implementing these specifications.
The Workflows and Chaining draft extension specification to OGC API – Processes enables ad-hoc execution of workflows integrating processes and data available from one or more OGC API instances. The specification allows triggering processing as a result of requesting results for a specific area and resolution of interest, which provides a simple mechanism to chain geospatial data inputs and outputs.
By referring to a collection of geospatial data irrespective of a particular area, resolution or date/time of interest, workflows can be defined in a generic, re-usable manner, and processing can be performed on-demand rather than (or in addition to) as a batch execution. Such on-demand processing has the advantage of optimizing the use of computing resources and speeding up the availability of the latest available data, such as for continuously captured Earth Observation satellite imagery.
The initial version of the Workflows and Chaining specification was the result of a GeoConnections 2020-2021 project funded by Natural Resources Canada, which also supported the development of a unified OGC API driver in GDAL, allowing the results of such workflows to be visualized directly in QGIS.
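To give a feel for the chaining idea, an ad-hoc workflow can be expressed as an execution request in which an input references a whole collection rather than a fixed area, leaving area, resolution and time to be chosen at request time. The JSON member names follow the draft Part 3 specification at the time of writing and may change; the process and collection URLs are placeholders.

```python
import json

# Shape of a Processes - Part 3 "collection input" workflow (draft; may change).
workflow = {
    "process": "https://example.org/ogcapi/processes/NDVI",  # hypothetical process
    "inputs": {
        # Collection reference: area/resolution/time are decided per request
        "data": {"collection": "https://example.org/ogcapi/collections/sentinel2"},
    },
}

# The document would be POSTed to the server, which can then expose the
# workflow's output as a virtual collection accessible via Tiles or Coverages.
body = json.dumps(workflow)
```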
OGC API – Tiles is the specification succeeding WMTS in the OGC API family, leveraging the concept of 2D Tile Matrix Sets. In addition to providing tiles of maps or imagery, Tiles can also be used to distribute raw data tiles, including coverage and vector tiles. Using tiles to deliver results and trigger execution of processing workflows can facilitate caching while allowing an area and resolution of interest to be selected efficiently.
OGC API – Coverages is the specification succeeding WCS in the OGC API family, and provides a simple mechanism to request an optionally down-sampled subset of a coverage. Specific fields (e.g. imagery bands) can be selected as needed. The Coverages specification can also be used to request results while triggering execution of a workflow.
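On the client side, such a Coverages request might be built as follows. The parameter names (`subset`, `properties`, `scale-factor`) follow the draft specification at the time of writing and may change; the server URL and collection id are placeholders.

```python
from urllib.parse import urlencode

def coverage_url(base, collection, bbox, bands, scale_factor=None):
    """Build an OGC API - Coverages subset request (draft parameter names)."""
    min_lon, min_lat, max_lon, max_lat = bbox
    params = {
        "subset": f"Lat({min_lat}:{max_lat}),Lon({min_lon}:{max_lon})",
        "properties": ",".join(bands),  # select only the fields you need
    }
    if scale_factor is not None:
        params["scale-factor"] = scale_factor  # server-side down-sampling
    return f"{base}/collections/{collection}/coverage?{urlencode(params)}"

# e.g. requests.get(coverage_url("https://example.org/ogcapi", "sentinel2",
#                                (5.0, 44.0, 6.0, 45.0), ["B04", "B08"], 2))
```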
GeoJSON is one of the most common geospatial data formats. In simple terms, it is an extension of JSON with a geometry property. It is text-based and designed with human readability in mind. This eye-friendliness comes with a performance trade-off when the browser renders it. GeoJSON consists of features containing redundant property keys, causing the size to bloat as the number of features goes up. Commonly, drawing tens of megabytes of GeoJSON is slow, and showing a hundred megabytes of GeoJSON data in the browser would most likely crash it.
When we are in complete control of the system: back end, front end, or anything in between, we could probably change the source format to something more efficient like Vector Tiles. But what if we can only tweak the front end?
When we can only tweak the front end, geojson-vt comes to the rescue. Initially designed for Mapbox, it can be paired with OpenLayers to slice GeoJSON on the fly into vector tiles. We will compare the performance of direct GeoJSON rendering versus geojson-vt for different types of GeoJSON. The usage is straightforward, making it a pretty easy solution to improve our map's performance. On top of that, we can still use vector-specific OpenLayers functions like getFeatures when needed.
Humanity is facing a number of challenges, among which climate change and biodiversity loss rank at the top. Investors are exposed to those risks through their investee companies, given their dependence on biodiversity and ecosystem services. Therefore, only recently has the financial community started to pay attention to the economic and financial consequences of biodiversity loss. However, a key challenge is that the conceptual framework for measuring and understanding biodiversity-related financial risks (physical and transition risks) is less advanced than for climate-related financial risks. The topic of biodiversity and the corresponding risks is thus gaining momentum, and methodologies for assessing them are being developed. In this context, and due to the mounting pressure of biodiversity loss, numerous frameworks such as the EU Taxonomy for sustainable activities or the Taskforce on Nature-related Financial Disclosures (TNFD) have recently been created to try to curb these effects at scale. The intention of the international community is to channel investments to business ventures that do not harm biodiversity, or ideally even regenerate it, while investors are generally interested in minimizing their risks. To this end, a consortium of financial specialists, biodiversity experts and remote sensing data analysts, initiated by WWF Switzerland, is developing a methodology and prototypical platform that quantifies the physical risks, i.e. risks associated with degrading ecosystem services (at the site level, subject to data availability). Part of the innovation is to quantify ecosystem service dynamics at asset locations, i.e. the locations where companies actually produce their goods.
The production sites and related activities are analyzed with regard to their dependency on five ecosystem services, namely (i) surface water, (ii) flood and storm protection, (iii) pollination, (iv) fiber & other materials and (v) climate regulation to quantify the physical risk at these locations by means of an index. This presentation focuses on the physical risk module, where EO data products represent a valuable source of information. The module builds on fundamental work done in the ENCORE project, in which the relationships between drivers of environmental change, natural capital assets, ecosystem services and production processes have been described exhaustively for the entire economy. Additionally, established databases containing information on the status of natural capital assets, e.g. the WRI Aqueduct 3.0 dataset on the spatial distribution of water-related risks, are used as a baseline against which other data can be compared. Up-to-date EO data products are then used to assess drivers of environmental change to investigate dynamics and trends, and to identify potential threats to the natural capital assets and the provisioning of corresponding economy-relevant ecosystem services. Here, we focus on drivers responsible for biodiversity loss, for example by using annual global land cover data from the Copernicus Global Land Service to assess habitat modification due to land cover change.
The World Meteorological Organization (WMO) Information System (WIS) is a coordinated global infrastructure responsible for telecommunications and data management functions and is owned and operated by WMO Members.
WIS 2.0 will provide users with seamless access to diverse information from a wide range of sources and will enable weather, water and climate information to be related to socioeconomic and other contexts. Through an open ecosystem of tools, applications and services, WIS 2.0 will allow all information providers to manage, publish and share their data, products and services, and will allow all users to develop value-added services and new products.
The WIS 2.0 principles highlight and promote the value of standards, interoperability and the Web/mass market. This will extend the reach of weather/climate/water data for a number of societal benefits.
WIS 2.0 is being designed to have a low barrier to entry for data providers. This will also result in enabling infrastructure and provide great benefit for less developed countries (LDCs). There is a strong motivation to provide LDCs with easy-to-use tools and a sustainable workflow for data exchange, to 1) ease the burden of exchanging data and 2) continue to provide valuable weather/climate/water data in WIS 2.0 over time.
The WIS 2.0 in a box (wis2box) project provides LDCs with free and open source onboarding technology to integrate their data holdings and publish them to WIS 2.0 in a manner consistent with the architecture, offering plug-and-play capability and supporting discovery, access and visualization.
This presentation will provide an overview of the project and current capabilities highlighting the use of numerous FOSS4G tools and PubSub driven implementation of OGC API standards.
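As an illustration of the PubSub side, a consumer of WIS 2.0 notifications might look like the sketch below. The topic layout is an assumption based on the draft WIS2 topic hierarchy, and the broker address and centre id are placeholders.

```python
def wis2_topic(centre_id, discipline="weather", wildcard=True):
    """Build a WIS2-style notification topic (draft hierarchy; layout assumed)."""
    base = f"origin/a/wis2/{centre_id}/data/core/{discipline}"
    return (base + "/#") if wildcard else base

# With paho-mqtt (not imported here, to keep the sketch dependency-free):
#   client = mqtt.Client()
#   client.on_message = lambda c, u, msg: print(msg.topic, msg.payload)
#   client.connect("broker.example.org", 1883)
#   client.subscribe(wis2_topic("xx-centre"))  # hypothetical centre id
#   client.loop_forever()
```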
Managing hundreds of layers from different sources in a MapServer production environment is extensive work. Keeping them up to date, scalable and continuously deployed takes time and effort, not to mention monitoring all of it.
By combining a configuration management tool (open source Progress Chef in our case) and MapServer, we have a continuous deployment cycle. MapServer's mapfile is divided into pieces that Chef puts together. All the layer files are separate entities which are easily manageable and changeable. Different mapfiles can be produced by combining different layers, keeping the mapfiles smaller but still all in one place for management. It also makes it easy to switch layers on or off.
This also gives the benefit of keeping the development environment separate from production.
Through the MapProxy seeding process we also provide our thousands of users with their base map services, serving them WMS, WFS and our own vector tiles.
All of it is under constant monitoring, and the logs are processed to produce simple statistics showing which applications are making requests and which layers are accessed the most. We have built a notification system that notifies us immediately through hooks if our services are down or there are errors in any of the MapServer layer requests.
This brings us back to the point: how to make your MapServer layer handling, production, and management smoother and more straightforward. Let us share our insight!
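The health-check idea behind such a notification system can be sketched in a few lines. The endpoint and webhook URLs below are placeholders, and the real setup described in this talk is built on its own monitoring stack.

```python
from urllib.error import URLError
from urllib.request import urlopen

def wms_alive(url, timeout=10):
    """Probe a WMS endpoint with a GetCapabilities request."""
    try:
        with urlopen(f"{url}?SERVICE=WMS&REQUEST=GetCapabilities",
                     timeout=timeout) as resp:
            return resp.status == 200 and b"Capabilities" in resp.read(8192)
    except (URLError, OSError):
        return False

def check_and_notify(wms_url, hook_url, notify):
    """Fire the given webhook callable when the service looks down."""
    if not wms_alive(wms_url):
        notify(hook_url, f"WMS down: {wms_url}")  # e.g. POST to a chat webhook

# e.g. check_and_notify("https://example.org/wms", "https://chat.example/hook",
#                       lambda url, msg: print(url, msg))
```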
MobiDataLab is the EU-funded lab for prototyping new mobility data sharing solutions.
Our aim is to foster data sharing in the transport sector, providing mobility organising authorities with recommendations on how to improve the value of their data, contributing to the development of open tools in the cloud, and organising hackathons aiming to find innovative solutions to concrete mobility problems.
The project consists of the following main pillars:
1) Open Knowledge Base
... a portal about open mobility data which provides information about
practices and solutions related to legal and regulatory topics (such as licenses), governance,
data privacy, technical standards (for data interoperability and accessibility),
and challenges for actors in the mobility domain.
2) Transport Cloud
... a cloud-based prototype platform for sharing mobility data. It provides users with
several tool components to find, use and interact with mobility data in an open, interoperable
and privacy-preserving way.
3) Living and Virtual Labs
... are the environments in which the project interacts with the reference group (mobility data providers and users)
and with B2B and end users (such as data innovators, solution providers and further stakeholders in the mobility domain) to get
feedback on challenges and missing pieces in the mobility data and services assets. A set of mobility use cases,
set up by the project stakeholders and the reference group, will help to trigger practical execution, innovation,
and further ideas within the labs.
4) Socio-economic impact
... identifies the current best practices in data sharing, analyses the market potential and elaborates new
data sharing services and business models based on them.
With a heterogeneous group of project experts, we are facing the challenge of mobility data sharing from different perspectives: research, privacy, data, mobility solutions, open data, services, and more.
Close work with a large reference group (representing several mobility data providers and actors from the public and private sectors, such as start-up communities) and the implementation of virtual and living labs allow us to get early real-world feedback and help identify challenges in interoperability and missing standards, as well as issues around findable and available data, conflicting licenses, and the accessibility and usability of data and services.
The project started in February 2022; we will present the mid-term achievements and
provide an outlook on the second part of the project.
Further information on the project is available via https://mobidatalab.eu
MobiDataLab is funded by the EU under the H2020 Research and Innovation Programme (grant agreement No 101006879).
The MapTiler plugin is the easiest way to load styled vector tiles into QGIS. The plugin allows anybody to easily load map data of the entire planet (from OpenStreetMap project), with details down to the street level from Cloud or any other URL.
The version 3.0 of MapTiler plugin brings several new features, maps and datasets.
A new global DEM of the entire planet is ideal for terrain spatial analysis. There are new maps, in both vector and raster: OpenStreetMap (the popular OSM Carto style, finally in vectors!) and a Winter map for all wintertime activities. A new Satellite map is based on our new 2021 cloudless satellite imagery with 10 m resolution for the entire planet.
The plugin offers maps of the entire world in vector or raster tiles, but can also open maps from any other URL. You can load high-resolution aerial imagery, hillshading, global terrain data and contour lines for outdoor maps or official government open data from various countries.
A ready-to-use list of beautiful map styles is available to QGIS users. Those who prefer customized maps can make their own map design in a few clicks using the Customize tool. Users can set their own colors, fonts, or choose the language of map labels.
Use the power of QGIS and reproject, rotate and export vector tiles to various formats (including PDF, SVG or DWG) or use Print Composer to create beautiful high-detailed maps to fit your needs.
The plugin is an open source project with code available in a GitHub repository, and it is open to contributions from developers and users.
I am working as the Technical Lead at Blue Sky Analytics, a climate-tech startup empowering the world’s decision-makers with accurate, real-time, and standardized climate data.
All datasets that we are building here at Blue Sky Analytics, technically have one similarity - they all have a space and time component. We tried to build solutions like filling empty values in inconsistent temporal data, and dividing the data in specified time period chunks for faster queries, while these worked as POC, they were not easy to scale up. Working with structured data was much easier to understand, working on postgres with the addition of timescale and PostGIS gave us exactly what was needed. Building the solution at the database level with the existing open-source technologies has been an exhilarating experience.
Imagine a dataset with hourly frequency going back years at a global level, with frequent inconsistencies, that you not only have to store efficiently but that should also be highly accessible in combination with other such datasets. Without open source, we would not have been able to answer questions like:
- How much did the lakes shrink each year between 2010 and 2020?
- What were the GHG emissions from biomass burning across all US states, for the last 10 years, on a monthly, weekly or daily basis?
Leveraging other open source solutions like h3-pg indexing also helped us cut query times dramatically for global-level queries!
While the database stack sounds pretty amazing, putting it all together and deploying it in the cloud was a whole other challenge. The most intuitive solution was to deploy a set of Postgres instances. While implementing the basics was not hard, it became almost impossible to scale up or down, roll out updates, or account for failures.
Keeping up with the tech, Kubernetes seemed like a great solution for building a highly available cluster service, and the postgres-operator (PGO) by Crunchy Data was exactly what we needed. It combines the right tools, such as pgBouncer, pgBackRest, and a monitoring solution based on Grafana and Prometheus, in one easy-to-deploy packaged service. While the Kubernetes learning curve was a little steep, it led to a highly scalable and resilient database cluster.
The PostgreSQL + PostGIS + Timescale + H3 stack helped us model the temporal and spatial nature of the world at the database level and gave us a universal approach to storing and querying all our datasets. It can handle tabular data such as fire records, with temporal information (recorded time) and spatial information (lat/long), or shapes of counties, water bodies, etc., and combine them with a few joins, giving us a very powerful geospatial-temporal query engine.
Without FOSS it would have been impossible to even imagine any of this, but as of now, we are quantifying climate change!
OTBTF is a remote module of the Orfeo ToolBox enabling deep learning with remote sensing images.
Created in 2018, it aimed to provide a generic framework for various kinds of raster-oriented, deep-learning-based applications.
Originally, OTBTF included user-oriented applications for patch sampling, model training, and inference on real-world remote sensing images, plus a few Python scripts to help users with no coding skills generate ready-to-use models.
A few years later, it has been used for a wide range of applications, like landcover mapping at country scale, super-resolution, optical image cloud removal, etc.
This talk will present a few selected AI-based applications powered by OTBTF, in the framework of research projects, public policy support, and teaching.
We will present the features recently added to OTBTF, and we are very happy to introduce what is next!
More details on the project are available in the GitHub repository: github.com/remicres/otbtf
The Re:Earth project grew from the idea of, "What would be possible if anyone, anywhere could access the digital Earth's potential?". To make this a reality, we knew Re:Earth needed to be a no-code solution. But more than that, we needed to make sure hardware and OS requirements wouldn't get in the way, which is why Re:Earth is a fully web-based application.
Re:Earth allows you to manage, edit, compute and visualize a multitude of geographic information including 3D data with no coding required.
We knew projects as well as data would need to be shareable, so we support both project publishing and data exporting.
Publishing a project is easy and gives users the chance to opt in or out of SEO, change their URL, and set up publishing to their own domain. Exporting data is easy and supports many of the most common file formats used in GIS.
It is also the first WebGIS to feature a plug-in system that runs in the browser.
Today, we are focused on solving a problem people face in maintaining, organizing, and managing a wide variety of data, by developing Re:Earth into a general-purpose data management system that can handle all types of data, and one that can be integrated with the user's existing systems.
Our desire has always been to open Re:Earth to the OSS community, to build a global community around the vision of Re:Earth, and to provide and disseminate the value we create with our contributors to the wider society.
The first step to making this happen was Resium, a popular OSS package that allows developers to use Cesium with React. With Resium we have been able to write Re:Earth's front end in React and TypeScript. As the main backend language we chose Go. By using these modern languages we have kept Re:Earth highly maintainable and scalable, and we hope other developers will find it easy to contribute to.
At our institute we manage a large amount of input data and soil model outputs that need to be shared online. We have experienced that updating service configurations and metadata records can be quite a challenge when they are managed manually in various locations. We have been working on tooling to help us automate the publication process. These days, data publications are set up as CI/CD processes on GitLab/Kubernetes.
These efforts resulted in a series of tools which we call the Python Data Crawler. The crawler spiders a folder of files, extracts and creates metadata records for the spatial files, and generates a MapServer configuration so the data can be published as OGC services. Underneath, we are building on tools provided by the amazing FOSS4G community, such as GDAL, MapServer, pygeometa, OWSLib, mappyfile, rasterio and Fiona.
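The crawler's first step, walking a folder tree and picking out the spatial files, can be sketched in a few lines. This is a minimal, hypothetical illustration (function names invented here); the real crawler would open each file with rasterio/Fiona to read its CRS, extent and schema, and hand the result to pygeometa to write full metadata records:

```python
import os
import tempfile

# File extensions treated as "spatial" in this sketch.
SPATIAL_EXTENSIONS = {".tif", ".tiff", ".shp", ".gpkg", ".geojson"}

def crawl(folder):
    """Walk a folder tree and build a minimal metadata stub for every
    spatial file found."""
    records = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            stem, ext = os.path.splitext(name)
            if ext.lower() in SPATIAL_EXTENSIONS:
                records.append({
                    "identifier": stem,
                    "path": os.path.join(root, name),
                    "format": ext.lstrip(".").lower(),
                })
    return sorted(records, key=lambda r: r["identifier"])

# Tiny demonstration on a throwaway project folder.
with tempfile.TemporaryDirectory() as project:
    for name in ("dem.tif", "parcels.gpkg", "notes.txt"):
        open(os.path.join(project, name), "w").close()
    records = crawl(project)
```

Each stub would then be registered in the catalogue and turned into a MapServer layer definition.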
A typical use case for this software involves organizations maintaining a file structure of project files. The crawler indexes all the (spatial) data files and registers the metadata records in a catalogue; users can then query the catalogue from QGIS MetaSearch to find and load relevant data.
We will present our findings at the conference and hope to talk with institutes facing similar challenges, to see if we can create an open source software project around the Python Geodata Crawler.
Vector tile maps are now an industry standard, and a general-purpose schema is available from the OpenMapTiles project. But what if the features people focus on in your country differ from the default schema? We developed styles to cover and highlight authentically Japanese geographic attributes, such as railways, hot springs, and religious facilities, with an extended OpenMapTiles schema. The styles are available globally on MapTiler Cloud as MIERUNE styles. In the process, we developed some useful tools for vector tile styling. One is a style-comparison tool that makes it easy for cartographers to compare two versions of a style interactively. Another is a git-based style management tool that visualizes diffs of style.json and takes screenshots automatically. Structured approaches to planning and implementing vector tile styling are not widely shared. In this talk, we will discuss how to enrich the styles for your own country and improve the styling process for vector tile cartographers.
Baremaps is a blazing-fast vector tile server that makes publishing OSM data easier: import, tile generation and cloud storage.
But Baremaps also shines and differentiates itself from solutions like pg_tileserv in the way you can customize your tileset and merge custom datasets.
Building on this advantage, we turned Baremaps into a vector tile studio API, allowing users to easily customize the content of their vector tiles.
We adopted the OGC API specifications for tilesets, layers and styles. Baremaps offers various entry points to manage datasets and serve them as vector tiles. As an example, you can dynamically import different kinds of data sources (GeoJSON, Shapefile, database) into the server, which will expose them as datasets; you can then use any kind of dataset within the same tileset. You can also bring value to your data by doing aggregations (spatial, attribute, hexbin) or computations. It leverages the power of PostGIS functions and the vector tile specification in one solution. You can attach a style to your dataset, and Baremaps will serve both the Mapbox style file and the vector tile stream to render the map the way you expect.
To illustrate this concept, we will showcase a studio UI which provides a tool to quickly create valuable maps and publish them to the web.
Baremaps Studio is the solution for dynamic rendering and styling of your vector data.
Farmers and herders in the West African Sahel are critically vulnerable to climate shocks and need access to climate information to secure their livelihoods. Herders use data on pasture and water availability to move their livestock and farmers need weather predictions to plan their planting. While satellite imagery has made much of this information readily accessible to the spatial community, few channels exist to transmit this information to herding communities. As a result, climate data has become more powerful than ever before, yet mostly inaccessible to those who depend on this information for their livelihoods.
This talk goes over the lessons of a programme that seeks to bridge this gap. GARBAL is a call centre that uses Copernicus Earth observation imagery and field data to provide farmers and herders with information on pasture, water and markets in Mali, Niger and Burkina Faso. GARBAL was first developed in 2015, and this talk will share lessons from several years of practice.
The GARBAL interface is built on MapServer and uses automated scripts to download and process imagery from Sentinel-2 and Meteosat, which is then displayed as information on pasture conditions and water availability. Field data is routed through a network of local data collectors who provide weekly updates on livestock conditions and market prices. In addition to an interactive map, the interface provides user-friendly textual outputs that summarize all the layers for any area of interest on the map, allowing call centre agents to quickly provide data to callers.
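The abstract does not spell out how pasture condition is derived from the imagery, but a commonly used index for this purpose is NDVI, computed from a sensor's red and near-infrared bands. As a purely illustrative sketch (not GARBAL's actual processing chain):

```python
def ndvi(red, nir):
    """Normalised Difference Vegetation Index for one pixel.
    Values near +1 indicate dense green vegetation; values near 0
    or below indicate bare soil, rock or water."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def pasture_flags(red_band, nir_band, threshold=0.3):
    """Flag each pixel as likely pasture when its NDVI exceeds a
    (hypothetical) threshold; real systems calibrate this locally."""
    return [ndvi(r, n) > threshold for r, n in zip(red_band, nir_band)]

# Two pixels: strong vegetation signal vs. near-bare ground.
flags = pasture_flags([0.1, 0.4], [0.5, 0.45])
```

A production pipeline would run this over full Sentinel-2 rasters (e.g. with rasterio/NumPy) and aggregate the result per area of interest before it reaches a call centre agent.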
The talk will share lessons from the technical and programmatic aspects of the project. The technical side will go over the architecture of the data processing, demo the interface, discuss successes and failures, and show how you can explore the data yourself. The programmatic side focuses on how user needs have evolved over the years, techniques for translating GIS data into information useful to farmers and herders, operating in areas of active conflict, and how EO data fits into centuries-old traditional data collection systems in the Sahel.
Recent advancements in both raster and vector tile generation mean that TileJSON services can now serve tiles from on-the-fly sources as well as pre-built caches. Currently, Addresscloud uses CloudFront backed by S3 buckets to serve tile caches for its customer-facing applications. While this configuration worked well for pre-built tile caches, it does not readily support on-the-fly generation and is limited by CloudFront's requirement for cookies or signed URLs for private tilesets. In this presentation we will look at the use of Amazon's API Gateway to provide a scalable interface for multiple TileJSON sources. This approach provides on-the-fly tile generation in a serverless manner and supports multiple authorization configurations. The presentation will demonstrate the integration of API Gateway with three tile sources: (1) a Lambda function using rio-tiler for on-the-fly generation of raster tiles from a Cloud Optimised GeoTIFF; (2) a Lambda function using Amazon Aurora's HTTP API for MVT generation from PostGIS; and (3) a proxy interface to a pre-built cache of tile objects stored in an S3 bucket. The source code will be published under an open license and made available to the community as a reference architecture. This presentation is of interest to anyone developing tiling services in the cloud.
In many countries, access to schooling is one of the key measures of the performance of the education system. It is not always known how long learners must travel to get to school, even when a buffer distance is set by policy. Gispo teamed up with the UNESCO International Institute for Educational Planning (IIEP) to study the problem.
The result is a new QGIS plugin ("Catchment") which makes it easy to calculate catchment areas based on travel time (isochrones) for all schools across a whole territory. The plugin uses the open source GraphHopper routing server and OpenStreetMap data from across the globe. This lets us easily find out how many people live, for example, 15, 30 or 60 minutes away from education in different parts of a country.
Further, the development of the plugin triggered a campaign of local OpenStreetMap mapping in Madagascar, which was one of the first countries to pilot the plugin. Having more roads mapped on OpenStreetMap has an impact far beyond educational planning.
Naturally, the same plugin may also be used to calculate all kinds of service catchment areas in QGIS; it was also used, for example, to calculate access to rail transit across the Helsinki metropolitan region.
In the current web GIS ecosystem, 3D is nothing new. It is now fairly simple to create a 3D web application for rendering geospatial data using open source software, and the same can be said for 2D GIS. But in some use cases you do not want to have to choose between one or the other. Enter @vcmap/core, a new open source project developed by virtualcitySYSTEMS GmbH of Berlin. With a number of high-level abstractions, this slim open source library allows you to create web applications which can represent the same data in 2D, 3D and even oblique imagery.
By abstracting layers, maps, interactions and styling, your data becomes renderer agnostic. Additionally, a parameterized approach to 3D allows you to easily create cuboid 3D representations from simple 2D representations, a feature which has proven useful in urban planning scenarios.
Furthermore, @vcmap/core comes with a powerful serialization mechanism. All runtime objects can be serialized and stored as JSON. This way, you can easily develop a web GIS framework that allows quick deployment of multiple applications which differ only in their data.
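The serialize-everything-to-JSON idea can be sketched language-agnostically (Python here; @vcmap/core itself is JavaScript, and this is not its actual API). A runtime object writes its configuration to JSON and can be reconstructed from it, so an entire application becomes a stored document:

```python
import json

class Layer:
    """Minimal runtime object with a JSON round-trippable configuration
    (names and fields invented for illustration)."""
    def __init__(self, name, url, opacity=1.0):
        self.name, self.url, self.opacity = name, url, opacity

    def to_json(self):
        return {"type": "Layer", "name": self.name,
                "url": self.url, "opacity": self.opacity}

    @classmethod
    def from_json(cls, data):
        if data["type"] != "Layer":
            raise ValueError("not a Layer config")
        return cls(data["name"], data["url"], data["opacity"])

# Round-trip: serialize the runtime object, store it, recreate it.
original = Layer("buildings", "https://example.com/tiles/{z}/{x}/{y}.pbf", 0.8)
stored = json.dumps(original.to_json())
restored = Layer.from_json(json.loads(stored))
```

Deploying "multiple applications which differ only in data" then amounts to shipping the same code with different stored JSON documents.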
And this is not all: @vcmap/core is still evolving, with geometry editors on the roadmap and a further open source project, @vcmap/ui, to follow this year. The @vcmap/ui is an accompanying UI which integrates smoothly with @vcmap/core and provides a powerful plugin API. This plugin API allows for fast development of custom tools with which to enhance, analyze and use your geospatial data, without the need to implement an entire web GIS.
TOSCA, the Toolkit for Open and Sustainable City planning and Analysis, was implemented by the Digital City Science group at HafenCity University Hamburg (HCU) as a joint venture with the German Association for International Cooperation (GIZ). The project, which won the Hamburg Open Science Award in 2020, works very closely with academic partners and local governments in India and Ecuador to develop use cases in the context of urban upgrading, disaster prevention, and participatory planning. The WebGIS application uses modern, state-of-the-art technologies such as Docker, Vue.js, PyWPS, GRASS GIS and GeoServer. The source code of the open source solution is hosted in a Git repository. Moreover, user and admin manuals plus several step-by-step video tutorials have been uploaded to the Vimeo video portal. In terms of analysis functionality, TOSCA is equipped with buffer areas, time maps (service area analysis), a query module (filter by categorical and numeric attributes) and volcanic eruption scenario analysis (equivalent to an intersect: select features by location).
The project has been successfully implemented in India and Ecuador since October 2019. It supports investigations not only into Indian slum upgrading issues, but also into volcanic disaster mapping challenges in Ecuador. A further application of TOSCA in Palestine was kick-started in May this year. TOSCA can be deployed on a multi-touch table or on a virtual machine through cloud hosting, and is designed for use by non-GIS specialists. It targets diverse user groups ranging from local citizens to experts, the former through participatory workshops and the latter in decision-making processes for urban scenarios.
TOSCA Git Repo: https://github.com/digitalcityscience/TOSCA
Vimeo Tutorial Site: https://vimeo.com/user127753830
To promote the TOSCA Toolkit further, we encourage developers to work with us on further developing modules of the Toolkit.
The photo group for FOSS4G 2022
OpenDroneMap is an ecosystem of free and open source software to collect, process, analyze and display aerial data. In this talk we will present an exciting overview of what's new in the ecosystem, where the project is headed and how you can benefit from using it. We will first provide a brief overview of the ecosystem, what the tools are and how you can start making maps in minutes. A short introduction to the "magic" of the processing pipeline will follow. We will then touch on state-of-the-art advancements in photogrammetry technology within ODM, how we benefit from a global team of researchers and how that has allowed us to match (and often exceed) the results of proprietary software. After presenting the technological advancements, we will discuss the importance of people, or how prioritizing people over code and investing in the community has affected both participation and adoption.
This presentation will show tips and tricks for designing dynamic and relatively complex forms in QGIS Desktop, with the help of the drag-and-drop form designer, widget configurations, dynamic expressions, data-defined widget visibility, default values, constraints, embedded forms, relations, actions and more. In addition, we will show how you can use spatial joins to automatically fill in data from independent but spatially related layers.
You will be walked through an application developed for the management of biodiversity subsidies in the Canton of Solothurn, Switzerland. The application is used to collect data on eligible areas in the canton's biodiversity programme. Farmers and foresters can apply for separate subsidies for biodiversity support if the areas and their management methods meet certain criteria. The QGIS-based application collects data, automatically assigns parcel numbers, place names, community names, etc., and allows users to define usage restrictions and record maintenance measures. Interfaces exist for a report generator (contract generation) and an SAP-based disbursement system for the payment of subsidies.
In the presentation we will show the result of the development work, along with some tips and tricks with forms, widgets, expressions and actions, and how we stitched everything together.
I often hear complaints that the Stack Exchange sites are too mean to new users, that moderators are too quick to close a question that doesn't fit the guidelines or nitpick questions to death, and that people didn't get a good answer anyway, so what is the point?
This talk will give you an introduction on how to ask a “good” question on gis.stackexchange.com from an experienced moderator of the site. As a bonus, you will also find out how to file a useful bug report (either internally or to an external project). This talk will cover the things that you might not think are useful but are in fact vital to someone who is trying to help you.
I will discuss how the site works and how to make it work well for you: what sorts of questions are "good" questions and which ones are better asked somewhere else. I will also cover the difference between closing and deleting a question, how to get your question reopened if it is closed, how the review queues work, and how you can help improve the site for other users.
Today, protected areas only partly cover important sites for biodiversity and are not yet fully ecologically representative or effectively and equitably managed. Improving the existing networks and further expanding conservation areas will therefore require well-defined baselines that are comparable across countries for prioritising actions.
In recent years, the availability of new Earth observation imagery and advances in analysis and processing have significantly improved our capabilities in mapping and monitoring biodiversity variables and ecosystem services. This event will introduce the Biodiversity Analyst, a web GIS tool developed by the European Commission's Joint Research Centre for identifying areas of potentially high conservation value based on available datasets such as species distributions, ecosystem services and natural state.
The aim of the Biodiversity Analyst is to provide decision-makers with means to visualise and interact with the above datasets which are considered key for biodiversity conservation while allowing them at the same time to test different weighting schemes in terms of prioritisation to identify so-called "biodiversity hot-spots".
The MapWithAI RapiD editor for OpenStreetMap offers a variety of open data to improve OpenStreetMap. This web-based map editor presents the user with various sources of open data to validate and add to OpenStreetMap, including MapWithAI roads, Microsoft buildings, and various open datasets shared via Esri.
In addition to these existing data offerings, users can now validate and add sidewalks and crosswalks derived both from Mapillary street-level imagery and from various organizations that provide open footway data. Finally, Mapillary point data derived from imagery can now also be verified and directly converted into map data, thanks to a more efficient and rapid workflow.
We will explore all the open data available in the RapiD editor, with a specific focus on how footways are generated from Mapillary imagery, validated against open datasets, conflated with existing OpenStreetMap data, and presented to the user for improved maps of pedestrian walkability.
Although GRASS GIS has been used for big data processing for a while now, you may think that some esoteric knowledge is needed to take full advantage of its computational power. The purpose of this talk is to demonstrate simple ways to parallelize your computations in GRASS GIS that are applicable whether you are working on your laptop or on an HPC system. I will give an overview of the state of parallelization of individual tools, show benchmarks, and introduce you to other GRASS GIS parallelization tricks. I will use examples relevant to land change modeling and share our experience with simulating urban growth at a 30 m pixel resolution across the contiguous United States (16 billion cells) using the FUTURES simulation implemented in the r.futures addon. This talk is for all levels of expertise, although basic Python or GRASS GIS knowledge will be advantageous.
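One generic pattern behind raster parallelization, in GRASS GIS as elsewhere, is decomposing a large extent into independent tile windows and processing them concurrently. This is a simplified sketch of that idea, not GRASS's internal implementation; since GRASS tools run as separate processes, a thread pool dispatching them is often sufficient:

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_tiles(rows, cols, tile_size):
    """Decompose a raster extent into tile windows
    (row_off, col_off, height, width) for independent processing."""
    tiles = []
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            tiles.append((r, c, min(tile_size, rows - r), min(tile_size, cols - c)))
    return tiles

def process_tile(window):
    """Stand-in for one per-tile computation (e.g. one module run);
    here it just reports the number of cells in the window."""
    _r, _c, h, w = window
    return h * w

windows = split_into_tiles(rows=100, cols=90, tile_size=40)
with ThreadPoolExecutor(max_workers=4) as pool:
    cells = sum(pool.map(process_tile, windows))
```

The windows cover the extent exactly once, so per-tile results (here, cell counts) can be aggregated back into the full-extent answer.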
GRASS GIS is a well established, all-in-one geospatial number cruncher with Python interface, command line, and GUI, with new major version 8.0 released in spring 2022.
FUTURES is an open source urban growth model specifically designed to capture the spatial structure of development. It can accommodate the input of a variety of datasets with different spatial extents and can be coupled to other models. FUTURES is implemented in r.futures GRASS GIS addon.
Vector tiles have been one of the key topics in web mapping. Thanks to the pioneers who have contributed so much to open source development in this field, we can now build geospatial information services with open source vector tile technology. The UN Vector Tile Toolkit (UNVT) was established in 2018 under the UN Open GIS Initiative and has provided various tools to facilitate the production and use of vector tiles. UNVT grows through a global partnership between the United Nations and various contributors, and we now have broad experience in (1) producing, (2) styling, (3) hosting, (4) optimizing, and (5) consuming vector tiles.
In our talk, we will share how to create vector tiles and map applications using open-source tools, including UNVT, by introducing our experiences and lessons from selected projects. These include vector tile production and updates for the whole globe for UN purposes, the development and use of our style tool unvt/charites, the development of an interface with Esri's geoportal, running UNVT on a single-board computer (Raspberry Pi), building storytelling maps with UNVT, and efforts in 3D visualization.
- UNVT Storytelling workshop
  - Slides: https://speakerdeck.com/hfu/unvt-storytelling
  - Recording: https://www.youtube.com/watch?v=CVajhAUDLMs
- UNVT styling tool - Charites
  - URL: https://github.com/unvt/charites
  - Example use case: using Charites to edit an Esri-based style
- UNVT selected resources
  - unvt/equinox - use of UNVT on a Raspberry Pi
  - unvt/nanban - use of UNVT in Docker for Windows users
Protomaps is a new, open source set of tools for vector cartography on the web. It’s designed to enable projects of any scale - from hobby projects of a neighborhood, to dense datasets covering the entire planet. It finally makes it simple to both host tiles and render them using web standards, and accomplishes it in the most affordable way possible.
This talk will be an overview of the entire mapping stack, driven by an ethos of simplicity. Component projects include:
- The OSM Express database for syncing and querying fresh OpenStreetMap data
- The PMTiles cloud-optimized archive format for serverless hosting on platforms like S3
- The Protomaps JS renderer for custom cartography on the web using Canvas 2D
- The relationship to complementary projects like GDAL, Leaflet, MapLibre, Tippecanoe and FlatGeobuf
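The serverless hosting idea behind the PMTiles bullet above, one static archive plus byte-range requests instead of a tile server, can be sketched with a toy index lookup. This is a deliberately simplified illustration, not the actual PMTiles binary layout:

```python
def locate_tile(index, z, x, y):
    """Given an in-memory index mapping (z, x, y) -> (offset, length)
    inside a single archive file, build the HTTP Range header a client
    would send to fetch exactly that tile from static storage (e.g. S3).
    Returns None when the tile is absent from the archive."""
    entry = index.get((z, x, y))
    if entry is None:
        return None
    offset, length = entry
    return {"Range": f"bytes={offset}-{offset + length - 1}"}

# Toy index for a two-tile archive (offsets/lengths invented).
toy_index = {(0, 0, 0): (512, 1024), (1, 0, 1): (1536, 2048)}
header = locate_tile(toy_index, 0, 0, 0)
```

Because plain object storage already supports Range requests, no server-side code is needed at request time, which is what makes this approach so affordable.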
I’ll also describe successes and failures in adoption among users over the past two years, as well as future development plans.
Open-source solutions in geoinformatics have gradually come into focus over the past decades, becoming one of the most promising opportunities in tertiary education. Beyond coding and programming logic, students must develop the skills to find, apply and contribute to existing open-source projects.
At the Budapest University of Technology and Economics, a complex system has been developed to support students in this matter. The system is fully based on open-source software and free cloud services. Consequently, all the teaching materials have Creative Commons licenses, supported by the use of open source software.
The main entry point for the educational materials is Moodle, one of the most popular learning management systems. The majority of the source code and explanatory texts/notes used for teaching are published as Jupyter notebooks, stored on personalized GitHub pages. To open and test the Jupyter notebooks, students can use Google Colab.
Another challenge worth mentioning is the continuous assessment format. Moodle supports the creation of tests based on a wide variety of question types (e.g. multiple choice, true/false, drag-and-drop markers), which are stored in a question bank. A test is generated by randomly selecting a given number of questions; therefore, taking the test a couple of times is highly recommended. As many have noted, this way of self-studying is popular among students these days and efficient in achieving remarkable progress.
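The random test assembly described above amounts to sampling distinct questions from a bank. A minimal sketch (names invented; Moodle of course does this internally):

```python
import random

def generate_test(question_bank, n, seed=None):
    """Draw `n` distinct questions at random from a bank, so that each
    attempt at the test sees a different selection."""
    rng = random.Random(seed)
    return rng.sample(question_bank, n)

# A toy bank of twenty questions; two independent attempts.
bank = [f"Q{i}" for i in range(1, 21)]
attempt_1 = generate_test(bank, 5, seed=1)
attempt_2 = generate_test(bank, 5, seed=2)
```

Because each attempt is drawn independently, repeated attempts gradually expose the student to the whole bank, which is the pedagogical point of recommending multiple tries.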
Our presentation shares both the developed system and the experience gained over recent years.
DroneDB is free and open source software for geospatial data storage. It provides a novel approach to store point clouds, textured models, aerial images, orthophotos and elevation models via a dynamic filesystem index. In this talk we will present DroneDB's storage approach and how you can start using it right away. We will discuss the project's architecture and roadmap. We will also perform a showcase of the project and demonstrate its most effective use cases.
We will cover how DroneDB's dynamic index can be published on the web using Registry, a cross-platform open source application which provides both a friendly user interface and a RESTful API. This enables users and GIS developers alike to access and manage the underlying data. We will showcase the ddb client, a command-line interface that enables power users to manage the index and can be used to sync, share and download remote datasets in a manner inspired by git workflows.
We will show how WebODM and Registry can integrate together to create a powerful and versatile workflow for 3D reconstruction using a full opensource stack.
WebGL has enabled fast rendering of maps on the web (it powers the MapLibre GL and OpenLayers renderers), but from a software development point of view it is a notoriously cumbersome technology to work with.
A few architectural features of Gleo will be outlined, including:
- "One GL shader per type of cartographic symbol" rendering & framebuffer compositing approach
- Object-oriented design: symbols as instances; allocation/deallocation of GPU resources for each symbol
- Sliding window algorithm in a wrapped WebGL texture for tile caching
- On-the-fly reprojection enabled by updating just one WebGL data structure
- On-the-fly CRS offsetting to prevent floating-point precision artifacts
- Coordinate wrapping and display tessellation to avoid antimeridian artifacts
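The sliding-window tile caching listed above can be illustrated with a small concept sketch (Python here for readability; Gleo itself is JavaScript/WebGL, and this is not its actual implementation). The texture has a fixed number of cells, and when it is full a new tile wraps over the least-recently-used cell:

```python
from collections import OrderedDict

class TextureTileCache:
    """Fixed-capacity tile cache: a texture with `slots` cells, reusing
    the least-recently-used cell when a new tile arrives and all cells
    are occupied."""
    def __init__(self, slots):
        self.slots = slots
        self._cells = OrderedDict()  # tile key -> cell index

    def cell_for(self, tile_key):
        if tile_key in self._cells:
            self._cells.move_to_end(tile_key)  # mark as recently used
            return self._cells[tile_key]
        if len(self._cells) < self.slots:
            cell = len(self._cells)  # fill an empty cell
        else:
            _evicted, cell = self._cells.popitem(last=False)  # overwrite LRU cell
        self._cells[tile_key] = cell
        return cell

cache = TextureTileCache(slots=2)
a = cache.cell_for((0, 0, 0))
b = cache.cell_for((1, 0, 0))
cache.cell_for((0, 0, 0))      # touch the first tile so it stays warm
c = cache.cell_for((1, 1, 0))  # wraps over the second tile's cell
```

In the real renderer a cell index would translate to texture coordinates inside one wrapped WebGL texture, so a cache update is just an upload into an existing region rather than a new GPU allocation.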
GeoServer is a well-established, multiplatform, open-source geospatial server providing a variety of OGC services, including WMS (view services), WFS and WCS (download services) as well as WPS (spatial data processing services). Among open-source GIS web servers, GeoServer is well known for its ease of setup, the web console that helps administrators configure data and services, the variety of OGC services available out of the box, and the rich set of data sources it can connect to (open source, such as PostGIS, as well as proprietary, such as ArcSDE, Oracle or ECW rasters). GeoServer also provides several OGC APIs, including OGC API - Features, which has recently attracted the interest of the INSPIRE community.
As far as the INSPIRE scenario is concerned, GeoServer has extensive support for implementing view and download services, thanks to its core capabilities but also to a number of free and open-source extensions; undoubtedly the most well-known (and dreaded) extension is App-Schema, which can be used to publish complex data models (with nested properties and multiple-cardinality relationships) and implement sophisticated download services for vector data. Based on feedback from App-Schema users collected over the years, a new generation of open-source mapping extensions has been implemented in GeoServer: Smart Data Loader and Features Templating. These extensions are built on top of App-Schema and ease the mapping of data models by allowing us to act directly on the domain model and target output schema using a what-you-see-is-what-you-get approach.
This presentation will introduce the new GeoServer Smart Data Loader and Features Templating extensions, covering in detail ongoing and planned work on GeoServer. We will also provide an overview of how these extensions serve as a foundation for new approaches to publishing complex data: publishing data models directly from MongoDB, embracing its NoSQL nature, and supporting new output formats like JSON-LD, which allows us to embed well-known semantics in our data. Finally, real-world use cases from organizations that have selected GeoServer and GeoSolutions will be introduced, providing attendees with references and lessons learned that can put them on the right path when adopting GeoServer.
QGIS is a freely downloadable open source GIS software suite that contains desktop, mobile, and web components. It is released under the GNU General Public License, a free software license that allows users to download and use it without the licensing costs and restrictions of commercial GIS software.
Up to QGIS version 3.12 there was no core support for temporal data; users were required to install a plugin called TimeManager in order to visualize temporal data inside QGIS. Through a collaboration between the Canadian Government, Kartoza and North Road, efforts were made to add core support for temporal data inside QGIS.
As a result, QGIS version 3.14 was released with a Temporal Controller feature, which is now responsible for handling all the temporal layers inside QGIS. The initial rollout of the Temporal Controller contained support for raster, vector and WMS-T layer providers.
This session will explore how to use the QGIS Temporal Controller to animate and visualize WMS-T layers, including how to set up a standard WMS server that serves time-based layers.
In the session we will also learn about the Temporal Controller API, how to use it through the QGIS Python bindings, and how to create a simple QGIS plugin that shows the API in action.
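Conceptually, the Temporal Controller splits the layers' overall temporal extent into fixed-duration frames and steps through them for animation. The sketch below illustrates that idea in plain Python (it is not the actual QGIS API):

```python
from datetime import datetime, timedelta

def temporal_frames(begin, end, frame_duration):
    """Yield (frame_start, frame_end) pairs covering [begin, end),
    mimicking how a temporal controller steps through an animation."""
    current = begin
    while current < end:
        yield current, min(current + frame_duration, end)
        current += frame_duration

# One day of data animated in 6-hour frames gives 4 animation steps;
# each frame range would filter which features/rasters get rendered.
frames = list(temporal_frames(datetime(2022, 1, 1), datetime(2022, 1, 2),
                              timedelta(hours=6)))
```

In QGIS itself, the equivalent range-per-frame logic is exposed through the Temporal Controller and per-layer temporal properties.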
RapiD is an advanced Open Source map editor for OpenStreetMap built by the MapWithAI team at Meta. RapiD makes it simple to work with openly licensed geodata and AI-detected road, building, and landform features.
Our team recently converted RapiD's legacy SVG-based rendering engine to WebGL by leveraging the popular Open Source PixiJS game engine. The conversion from SVG to WebGL yielded a considerable performance boost, and the new WebGL-based renderer is up to the task of working with massive world-scale datasets and handling the increasing data density of OpenStreetMap.
In this talk we share our progress on bringing new datasets into RapiD, tell the story of how we built a modern map editor on top of an Open Source game engine, and share our roadmap for the future of mapping.
After 2019 and 2021, we also want to give a status report on the GeoStyler project at this year's FOSS4G. GeoStyler is an OSGeo community project and has received a lot of new features and changes in the past months.
GeoStyler provides a set of parser libraries that allow the conversion between different styling formats. On top of the core functionality, GeoStyler provides a user interface library that helps to integrate GeoStyler into your own web application. Using these components, GeoStyler can be used, for example, to create a WYSIWYG style editor. The project also maintains a GeoServer plugin, which integrates styling UI components into GeoServer.
Two more tools from the GeoStyler universe should be mentioned: a command-line interface (CLI) and a REST interface. The CLI provides a tool for server-side style conversion of an arbitrary number of style files, completely automated. The REST interface can be used to create web services that convert between formats. With these tools, it is possible to convert a huge number of QGIS styles to SLD, Mapfile or any other supported file-based styling format, and vice versa.
GeoStyler is based on a plugin concept, so the UI works with any of the supported parsers and can thereby be used in projects that use SLD, OpenLayers, QML, etc. Currently, GeoStyler supports the styling formats OGC SLD, OpenLayers styles, Mapfiles, QML and Mapbox, and also, for assistance when styling by attributes, the geodata formats GeoJSON, OGC WFS and Shapefile. Common Query Language (CQL) for filtering is understood, as well as OGC Filter Encoding (FE).
There are a number of new features planned for this year, such as a card layout, enhanced support for expressions and filter UI enhancements, as well as various documentation updates and translations, and we expect all or most of them to be realized by the time FOSS4G 2022 takes place.
With this talk, we want to present the current project status and show how GeoStyler has evolved since the last talk. To show a real-life example, we will present the results of a project whose task was to convert UMN MapServer-based styles to QGIS using GeoStyler.
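For reference, the pivot of all these conversions is GeoStyler's own library-agnostic style format, a simple JSON structure that each parser translates to and from. The sketch below is loosely based on that format; exact property names should be checked against the geostyler-style specification:

```json
{
  "name": "Cities",
  "rules": [
    {
      "name": "Large cities",
      "symbolizers": [
        {
          "kind": "Mark",
          "wellKnownName": "circle",
          "color": "#ff0000",
          "radius": 8
        }
      ]
    }
  ]
}
```

A parser such as the SLD parser reads SLD into this structure, and another parser (QML, Mapbox, ...) writes it back out, which is what makes N-to-N format conversion feasible.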
In recent years, attention to gender equality in all contexts has increased all over the world.
Nowadays, the growing sensibility of public administrations towards naming streets, roads, squares and monuments after women has highlighted that toponymy has, instead, always been oriented towards male figures.
In this work we present a Python script for QGIS that verifies whether a proper name, contained in a street directory, is of male or female gender.
There are other Open Source projects that, starting from an address, verify the gender of the represented person; the most famous is the GeoChicas project. In Italy it is worth mentioning the "Toponomastica Femminile" association, which manually verifies the streets dedicated to women, according to a predefined taxonomy (religious women, artists, etc.).
The goal of the present work is to automate gender recognition starting from a list of names; however, unlike GeoChicas, which uses a dictionary of names as a base parameter against which to compare the list, we propose querying DBpedia via SPARQL in order to identify the subject and derive its gender.
If the address is an attribute of a spatial dataset, it is possible to add a new attribute (the gender) to the vector layer table as a result of the DBpedia query.
This approach overcomes language limitations (which would require differentiated dictionaries) and the ambiguities that some names would have (for example, the name "Andrea" is used as both a masculine and a feminine name).
The script is built around a SPARQL query with a very simple structure, in which the data triple is constructed so as to obtain the gender from the name of a person by querying DBpedia.
The script can be run in the QGIS environment, associating the data outputs directly to the geometry, or even outside of QGIS, in which case the result is a list of "genders".
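A minimal sketch of such a lookup is shown below. It is not the authors' actual script; the label-matching strategy and the use of the foaf:gender property are our assumptions about how DBpedia commonly models this information:

```python
import json
import urllib.parse
import urllib.request

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"

def build_query(person_name):
    # Ask DBpedia for the gender of a resource whose English label
    # matches the given name (rdfs/foaf prefixes are predefined on
    # the DBpedia endpoint).
    return f"""
    SELECT ?gender WHERE {{
      ?person rdfs:label "{person_name}"@en ;
              foaf:gender ?gender .
    }} LIMIT 1
    """

def parse_gender(result_json):
    """Extract the gender value from a SPARQL JSON result, or None."""
    bindings = result_json["results"]["bindings"]
    return bindings[0]["gender"]["value"] if bindings else None

def query_gender(person_name):
    """Run the query against the live endpoint (requires network)."""
    url = DBPEDIA_SPARQL + "?" + urllib.parse.urlencode(
        {"query": build_query(person_name), "format": "application/json"})
    with urllib.request.urlopen(url) as resp:
        return parse_gender(json.load(resp))
```

Inside QGIS, the returned value would then be written into a new attribute of the street-layer feature being processed.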
Relying on Wikipedia/DBpedia has the twofold advantage that, where the person a street is named after exists in the knowledge base, the desired information (the gender, in our case) is retrieved, while if it is missing it can be added or enriched.
The script is currently under validation and will be published in a dedicated git repository.
To enable sustainable and impactful Open Science in the long-term, ESA Earth Observation looks to design and implement a comprehensive Open Science framework, which includes a dedicated set of integrated tools and common practices for effective scientific data management, seeking to support Open Innovation, advance Science and increase community participation. The framework will build on and advance existing Open Science elements and will develop new capabilities to achieve the ambitions and vision set forth in the 2025 Agenda, supporting the European Green Deal. The four main pillars of the initiative are: i) EO Digital Platforms, interoperability and standardisation, ii) Accessible and Reproducible EO Science, iii) Inclusive and collaborative research and iv) Strategic Partnerships. Contributing to the second pillar, ESA is developing an EO Open Science Catalogue tool to enhance the discoverability and use of products, data and knowledge resulting from Scientific Earth Observation exploitation studies. Adhering by design to the "FAIR" (findable, accessible, interoperable, reproducible/reusable) principles, the Open Science Catalogue aims to support better knowledge discovery and innovation, and facilitate data and knowledge integration and reuse by the scientific community.
The Open Science Catalogue is based upon the EO Exploitation Platform Common Architecture (EOEPCA) and shares its basic Open Source components, but extends them with additional functionality:
- The Static Catalogue is a hosted STAC Catalogue, comprised of static Catalogue, Collection, and Items that represent the Themes, Variables, Projects, and Products
- The Open Science Catalogue Frontend is a Vue.js-based client application that allows efficient browsing of the Open Science Catalogue
- The Backend API allows users to make submissions to create, update, and delete Themes, Variables, Projects, and Products. These submissions are then handled as GitHub Pull Requests, where they can be further reviewed, discussed, and finally accepted or denied.
The Open Science Catalog makes use of various geospatial Open Source technologies such as pycsw, PySTAC, and OpenLayers.
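As a sketch of what a static record in such a catalogue looks like, the following builds a minimal STAC Item as plain JSON (the id, collection name, coordinates and asset URL are invented for illustration):

```python
import json

# A minimal static STAC Item of the kind a static catalogue is composed
# of: a GeoJSON Feature with STAC-specific fields, links and assets.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-product",
    "geometry": {"type": "Point", "coordinates": [11.25, 43.77]},
    "bbox": [11.25, 43.77, 11.25, 43.77],
    "properties": {"datetime": "2022-08-23T00:00:00Z"},
    "links": [{"rel": "collection", "href": "./collection.json"}],
    "assets": {
        "data": {
            "href": "https://example.org/product.tif",
            "type": "image/tiff; application=geotiff",
        }
    },
    "collection": "example-variable",
}
encoded = json.dumps(item)
```

In the Open Science Catalogue, Items like this (representing Products) link up to Collections and Catalogs representing Projects, Variables and Themes.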
In this presentation we will review the EO Open Science Catalogue architecture, technology stack, and how this tool can be used to discover and publish Earth System Science products from ESA activities. We'll also look at future evolutions of the product and how it contributes to the overall ESA EO Open Science Framework.
In this talk we'd like to present the results of a research project funded by the Federal Ministry of Transport and Digital Infrastructure (https://www.fair-opendata.de/) in Germany. The goal of the project was to simplify the exchange of information and data between the German Meteorological Service (DWD) and economic, public, as well as private actors. With these goals, we think we can make a valuable contribution and perhaps also provide an impetus for other countries to address the provision of meteorological data.
Climate and weather data play an important role in, for example, identifying measures against climate change and optimising industries. However, a correct understanding and handling of such data is often difficult for many potential users without a meteorological background. Moreover, to process and analyse this data, users often need specialised software solutions and an infrastructure able to handle large amounts of data; another hurdle to putting the data to value.
A range of services has been developed to improve the discoverability, processing, visualisation and delivery of meteorological data. In addition to weather APIs and app developments, the data products of the German Meteorological Service (DWD) are offered via various portals, but also through a REST API. An upload area also enables easy data provision from third parties to the public and the meteorological service.
In the presentation, we will present the basic ideas and the implemented products that, from our point of view, increase the usability of the freely available meteorological data.
MapComponents is a new component framework to quickly and easily build dynamic geospatial web apps. It includes React front-end components for all kinds of projects, from small apps with a narrow and specific focus up to complex geospatial suites for the web. Server-side components are also planned to aid the development of flexible backend services.
- is a modular framework to create tailored geospatial web apps built upon modern web browser technology
- can be used to visualise and analyse geo data
- can be used for desktop and mobile applications (online and offline; progressive web apps (PWA))
- provides independent components which can be combined into full-fledged geospatial web applications (e.g. dashboards, WebGIS, ...)
- provides a catalog of components and example applications
- uses a flexible core which theoretically supports any kind of mapping library (currently supported are MapLibre, Mapbox GL JS and OpenLayers)
- is easily integrated into existing stacks
- makes it easy to rapidly design and deploy a map-centric web app
MapComponents is developed by WhereGroup GmbH and is available under the MIT license.
We will present the project, with its current state and goals, and will show practical examples.
OpenAerialMap.org (OAM) was built in 2015 to serve as a platform and tools for sharing openly licensed satellite and aerial imagery. For Humanitarian OpenStreetMap Team (HOT) and its partners, open imagery has been critical for disaster response and preparedness mapping projects. Those images have traditionally been difficult to share and access because of the large file sizes and technical skills required to publish them online. Since its inception, OAM has provided an easy means of contributing to and accessing a large repository of open imagery, with over 11,000 images added. The OAM browser is designed to easily index, visualize and filter images, while the data itself is stored in Cloud Optimized GeoTIFF (COG) format in the Open Imagery Network, a federated network of highly available imagery buckets from different cloud providers.
OpenAerialMap is the only platform built on open-source software that allows anyone to upload and share aerial imagery of anywhere on Earth. With advances in drone mapping technologies and their proliferation in places where cost and access used to be a limiting factor, there are now massive amounts of images that can be easily made available through OAM. Once uploaded, all imagery becomes instantly accessible via scalable TMS and WMTS services for mapping in OpenStreetMap or for any other use. Since its creation, OAM has been democratizing high resolution Earth observation and promoting the sharing of aerial imagery through open data licenses.
This year HOT joined with Kontur to take a fresh look at OAM and redesign the platform. Using modern, equitable, human-centered design principles, we evaluated how this tool could be used to better support HOT’s vision that everyone has access to high quality map data and can use that data responsibly to improve their lives and their communities. The development will build on and integrate emerging standards for geospatial data such as the Spatio-Temporal Asset Catalog (STAC) specification. A broad range of users and stakeholders will be involved in the design process, to ensure that OAM v2 will result in a modern and accessible platform. In this talk we will present an update on the development of OAM as informed by the results of that design work and share a preview of its implementation using open-source geospatial software.
Teaching GIS Through Geospatially Aware Agent-Based Modeling.
AgentScriptGIS is a web-based platform that provides a geospatially aware agent-based modeling programming environment. The goal is to enable programmers to generate geo-agent-based models with minimal barriers to entry. The platform provides a programming environment that includes an agent-based modeling library (agentscript.org), a geo-aware programming context, and a map display (leafletjs.com).
The platform was designed to reduce the overhead needed for programmers to begin modeling. We want to empower modelers who come from a wide array of backgrounds with the ability to write and animate geospatial models with minimal time and effort. The intended audience of this platform is users who want to explore geospatial and agent-based modeling but may have little to no experience interacting with these types of models.
GIS software is typically professional in nature and leans toward being sophisticated and precise, but is often overburdened with complexity. The hobbyist GIS programmer faces a steep learning curve when starting, including choosing appropriate tools and information sources, deciding on data formats and understanding projections. Our platform intends to remove these burdens from the user by trading versatility for simplicity and ease of use.
We are preparing our platform to be used initially in academia, but we can see it being applied in a variety of settings. AgentScriptGIS focuses on facilitating new ways to engage students, teachers and modelers with geospatial issues. The platform provides a low barrier to entry and promotes the modeling of local and regional problems by leveraging real-world data and empowering novice users to model various geospatial phenomena.
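To illustrate what a minimal geo-agent-based model looks like, the sketch below implements a tiny random-walk model of geo-located agents in pure Python (AgentScriptGIS itself is JavaScript-based; this is only a conceptual analogue with invented coordinates):

```python
import random

class GeoAgent:
    """A minimal agent with a lon/lat position, as in geo-aware ABMs."""
    def __init__(self, lon, lat):
        self.lon, self.lat = lon, lat

    def step(self, rng, max_move=0.001):
        # Random walk: nudge the position by up to ~100 m per axis.
        self.lon += rng.uniform(-max_move, max_move)
        self.lat += rng.uniform(-max_move, max_move)

def run_model(n_agents=10, n_steps=5, seed=42):
    """Create agents at a common origin and step them n_steps times."""
    rng = random.Random(seed)
    agents = [GeoAgent(11.25, 43.77) for _ in range(n_agents)]
    for _ in range(n_steps):
        for agent in agents:
            agent.step(rng)
    return agents

agents = run_model()
```

In a platform like AgentScriptGIS, the same step loop would drive an animated Leaflet map instead of returning plain objects.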
Our digital maps are not always up to date with the real world. New road constructions and road blockages can reduce the accuracy of map data. In a logistics company like Gojek, which serves millions of users per day in South East Asia, the core undertaking revolves around routing and ETAs. Any inaccurate local map data can have a direct negative impact on business metrics.
So how do we ensure that map inconsistencies are detected and fixed promptly to minimise interference of our services? When manual detection is labor intensive and not scalable to millions of road networks in vast regions, how can we effectively automate this at scale?
This talk is a story of how we, at Gojek, built a pipeline that uses bad customer experience as the trigger to identify potentially faulty data in OpenStreetMap. Our solution makes use of noisy GPS traces and Overpass, an open source tool, to automate this detection.
This solution enabled us to identify hundreds of potential issues per day, categorise them, associate a business impact with each map issue, and allow our map analysts to fix them seamlessly.
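As an illustration of the Overpass side of such a pipeline, the sketch below builds an Overpass QL query for road features around a suspect GPS fix (the radius, tag filter and coordinates are assumptions for illustration, not Gojek's actual query):

```python
def overpass_roads_query(lat, lon, radius_m=50):
    """Build an Overpass QL query returning highway ways (with their
    geometry) within radius_m metres of a GPS point."""
    return (
        "[out:json][timeout:25];"
        f"way(around:{radius_m},{lat},{lon})[highway];"
        "out geom;"
    )

query = overpass_roads_query(-6.2088, 106.8456)  # a point in Jakarta
# The query string would then be POSTed to an Overpass API endpoint,
# e.g. https://overpass-api.de/api/interpreter, and the returned ways
# compared against the observed GPS traces.
```

Comparing the returned ways against clusters of noisy GPS traces is what flags segments that may be missing or wrongly mapped in OpenStreetMap.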
You would like many people, from your team or through crowdsourcing, to fill data into your geodatabase. One way to do that is to make appealing, easy-to-use and well-constructed forms that prevent wrong inputs. You also do not want people to give up filling in the form because it is too long, while at the same time some entries could be filled automatically based on others. With QGIS Desktop, it is possible to make great maps but also advanced forms, using expressions to control field visibility, default values, proposed values, constraints and more. It is very powerful, but how can you share those forms with anybody, whatever their device or operating system? Could it be possible to share a link for people to open and fill in those forms in their web browser?
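As an example of the kind of QGIS expressions involved, the following (with hypothetical field names) could drive a default value, a constraint and a conditional visibility rule in a form:

```
Default value for a survey_date field:       now()
Constraint on a completion field:            "completion" >= 0 AND "completion" <= 100
Visibility of an other_details field group:  "category" = 'other'
```

The same expression engine evaluates these rules in QGIS Desktop and in QGIS Server, which is what makes publishing the forms on the web feasible.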
Let’s see how you can get the most out of these features for your forms in web browsers, thanks to QGIS Server and Lizmap.
It started as a way to help us publish geospatial data. It quickly morphed into something quite different. You can call it scope creep. And despite this term being close to a swear word in ICT, it turned out to be a very good thing. That's because, while still serving its main purpose, which is to provide an out-of-the-box web-based mapping app with all the trimmings (navigation, measure, layer control and search tools), EVRYMAP also:
Provides client-side editing tools
Provides a modular design that allows you to implement custom business logic by simply writing your own APIs. EVRYMAP will consume these APIs automatically once they are defined in the configuration as 'modules'
Implements 1-n relationships between your spatial data and other related data, which may come from the same or external databases
Can be run as standalone or within an iframe to spatially enable third party applications (and provides the communication mechanism)
Using EVRYMAP at the core, we have also deployed a few systems in production environments as commercial apps, namely:
-Landify, a mini-cadastre for organizations with a real-estate portfolio. It allows users to easily review, catalogue, and manage real estate data (land parcels and buildings).
-MapTheYA, a map-based information system for the management of water networks including topology checks.
-Building permits/Expropriations Management
These are examples of systems that are not "map-first", meaning that while the bulk of their functionality is text/form based (applying for electronic copies of documents), they also include embedded maps to improve the user experience.
This presentation will provide a brief introduction to EVRYMAP, the way it works, how you can configure and extend its functionality, and what we plan for the future. And, being the new kid on the block, we will ask the community for input and feedback!
The planning and expansion of water supply schemes requires detailed mapping and documentation of the existing pipeline network and its assets. However, this is usually not the case, especially where the construction of these pipelines predates advances in mapping, geoinformation and databases, and the networks exist only as drawings and engineering plans. In order to migrate to a fully documented inventory, with digitalisation and management of a water supply network database to estimate demand and supply and to plan for expansion and population growth, there needs to be an inventory of the existing scheme. Historically, mapping has been done with expensive mapping and survey equipment that can pose a challenge for a small organisation's budget, making it difficult to have a complete mapping inventory of its network.
This article presents a geographical information system-based free and open-source software architecture for the mapping and inventory of an urban water supply network. This architecture is especially useful where the budget is tight and decisions relating to meeting the water- and sanitation-related Sustainable Development Goals need to be made. The architecture consists of data management, data collection, data analysis and project host environment tools and software.
PostgreSQL with PostGIS was used for the design and management of the water supply network GIS database, basing the creation and design of features and attributes on prior knowledge of what exists in water supply networks. Features created include transmission and distribution pipelines, hydrants, valves, chambers, junctions, leaks, encroachments, pumps, pump stations, reservoirs, bulk flow meters and treatment stations, with attributes including diameter, pipeline material, operational status, condition, encroachment, photo, size, capacity, model, manufacturer, etc. The PostGIS database was connected to a QGIS project environment, where custom forms were designed to capture the attributes created in PostGIS. The QGIS project was linked to QField, an Android-based mobile data collection app, hosting the custom forms designed in QGIS to capture the water supply features, their locations and attributes. Using the forms in QField, the water supply network is mapped and attributes are captured; once data capture has been carried out, the field data is synchronised back to the QGIS project and, after edits, updated in the PostgreSQL/PostGIS database. QGIS, acting as the project host environment, also functions as the software for mapping, visualising and analysing the hosted and managed data.
The architecture presented is an opportunity for any organisation seeking a free and open-source GIS option for capturing, documenting and managing its water supply network data. One weakness is that data captured using QField has the horizontal and vertical accuracy inherent to Android devices, which is lower than that of survey equipment. However, QField has the option of connecting to GNSS equipment via Bluetooth, inheriting the sub-centimetre horizontal and vertical accuracy it offers, thus improving locational and elevation information while still providing a higher-accuracy free and open-source option.
Geospatial data and analysis are more central than ever to data science, research, and policy analyses. This is especially evident in the explosion of tools, both open source and proprietary, that have been developed over the past 5 years to help users manage and gather insights from their data. However, many of these powerful tools, like geopandas (analysis and modeling) and deck.gl (visualization), are technically inaccessible to analysts and researchers without the available time or skills for advanced coding. A number of commercial ventures (Carto, ESRI, etc.) attempt to overcome this limitation by bringing these tools together in polished, graphical-user-interface-driven platforms. While these platforms offer ease of use, they raise concerns about longevity, data ownership, and academic support.
Matico is a new free and open-source platform we are developing at the Spatial Data Science center that seeks to fill the gap between open but technically focused tools and commercial platforms. Consisting of a suite of interoperable components, Matico enables organizations and individuals to manage and visualize their geospatial data while easily maintaining their own infrastructure. A backend server allows users to easily load, clean, analyze, and distribute data through APIs, queries, and in-browser data editing tools while a powerful app builder allows users to develop their own rich applications that target diverse audiences.
This talk will demonstrate the current features of Matico, outline our future roadmap, and show relevant use cases. Matico is now and will forever be open through a permissive MIT open-source license. Learn more at https://matico.app/
QGIS and Dataviz
Creating plots is outside the main scope of QGIS, but thanks to its simple Python API it is easy enough to create additional scripts and plugins. The DataPlotly plugin has been developed for QGIS (the first release was created in 2017, and the plugin has now been downloaded more than 100,000 times). Today it is a well-maintained Python plugin with a growing community of developers, users and testers.
The plots are completely interactive: plot elements are directly linked with map items, so the user is able to query map items from the plot canvas. Thanks to a crowdfunding campaign, the functionality of DataPlotly was extended: a complete refactoring of the code, more plot types and, especially, the creation of plots in the print layout composer, including atlas layouts.
The plugin is also compatible with QGIS Server. Lizmap Web Client is an open-source server application to publish QGIS projects on the web without any coding skills needed. It uses QGIS Server as the backend, so users get the same rendering in QGIS Desktop and in the web version of their project.
With the DataPlotly plugin installed on QGIS Server and the Lizmap application, users can print PDFs with plots directly from their web browser.
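Under the hood, a Plotly chart is just a JSON figure specification with data traces and a layout. The sketch below shows the kind of structure a plot-building tool like DataPlotly assembles from layer attributes (the attribute values here are invented):

```python
import json

# A minimal Plotly-style figure specification: one scatter trace whose
# x/y arrays would come from two attributes of a vector layer.
figure = {
    "data": [
        {
            "type": "scatter",
            "mode": "markers",
            "x": [120, 340, 560],  # e.g. an area attribute
            "y": [3, 8, 15],       # e.g. population in thousands
        }
    ],
    "layout": {"title": {"text": "Attribute plot"}},
}
spec = json.dumps(figure)
```

Because the specification is plain JSON, the same figure can be rendered in the QGIS plot canvas, in a print layout, or in a web client such as Lizmap.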
For decades, the open source community has made an enormous contribution to remote sensing in terms of tools and solutions, providing support and inspiration for research. Open source software packages capable of processing digital images have increased the availability of remote sensing datasets. QGIS is an open source software package involving experts and users from around the world. QGIS is a useful tool for spatial visualization and analysis, making it a viable alternative to other costly software packages. It has become an effective tool for processing and analysing climate data, thanks to the potential and flexibility of the embedded graphical modeler. The aim of this work is therefore to assess the impact of rainfall variability on the vegetation phenology cycle in the Niger Delta from 2010 to 2019 using the open source QGIS Graphical Modeler. The rainfall data was collected from the Tropical Rainfall Measuring Mission (TRMM). Moderate Resolution Imaging Spectroradiometer (MODIS) MOD11A1 data for 2010-2020 were downloaded at two-year intervals from the United States National Aeronautics and Space Administration (NASA) website. The QGIS Graphical Modeler was built using arithmetical expressions and the raster calculator geoprocessing tool to derive the Normalized Difference Vegetation Index (NDVI) from the MODIS data and rainfall variability from TRMM. The mean rainfall and NDVI were extracted using the administrative boundary shapefile of the study area. The results, presented in figures, maps and graphs, show that rainfall variation has occurred, with considerable impacts on the vegetation phenology cycle in Nigeria between 2010 and 2020. Rainfall and NDVI show a high correlation, implying that as rainfall increased, vegetation also increased, and vice versa. In view of the above findings and observations, the occurrence of rainfall variability and its grave impact in Nigeria has been empirically determined.
Therefore, while the use of remote sensing and GIS techniques in continuous monitoring of climatic indices and vegetation phenology cycles is highly recommended, open source software should also be encouraged and made freely available for scientific research.
Kaoto is a graphical tool to orchestrate components in a visual, low-code and no-code editor. Once you have your workflows defined, you can deploy them directly to any Kubernetes-compatible cloud. Kaoto can both be deployed as a SaaS platform and used as a standalone application.
The user interface has both a source code text editor and a drag-and-drop graphical space. This way users can work both no-code and low-code at the same time, flattening the learning curve of creating integrations with Apache Camel.
Kaoto is highly customizable. It supports custom views for your specific needs, like showing manuals and helpers for your specific use cases. You can also add your own domain-specific languages and extensions to use different underlying frameworks with the same user interface. This helps your non-tech-savvy users adapt to new environments.
Kaoto augments your productivity, accelerating new users and helping experienced developers build complex integrations.
Web Mapping as a technology and a method is now twenty years old. Within the
OSGeo Community, it has been fostered by projects such as OpenLayers
and Leaflet. They evolved tightly intertwined with the framework imposed by
free data providers, initially around commercial efforts like Google and later
OpenStreetMap. While useful in providing an easy entry to web mapping, and
convenient background layers, these data providers also triggered a regression
towards centuries-old cartography techniques, in particular the Mercator projection.
This has become a major hurdle to web mapping, particularly concerning global datasets.
The Mercator map projection was created to aid seafaring in the XVI century and
was rendered useless with the advent of global positioning systems. Its use in
cartography may still be acceptable at large scales, neighbourhood or city
level, but at smaller scales it imposes severe distortion to distances and areas.
For global datasets in particular, the Mercator projection is unusable, for it
cannot represent the full surface of the planet.
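The distortion can be quantified: the Mercator point scale factor at latitude φ is sec(φ), so distances are stretched by that factor and areas by its square. A short sketch:

```python
import math

def mercator_scale(lat_deg):
    """Point scale factor of the Mercator projection at a latitude.

    Distances are stretched by this factor and areas by its square,
    which is why high-latitude regions look so large on web maps."""
    return 1.0 / math.cos(math.radians(lat_deg))

k60 = mercator_scale(60.0)       # distances doubled at 60 degrees
area_60 = k60 ** 2               # areas quadrupled at 60 degrees
```

At the equator the factor is 1 (no distortion), while it grows without bound toward the poles, which is also why the projection cannot show them at all.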
Web mapping developers may work around this framework with libraries such as D3
or proj4js, and by setting up bespoke base layer services. But in doing so they
face a different problem: the deep dependence on the CRS index created by the
European Petroleum Survey Group (EPSG). Primarily concerned with the survey and
extraction of fossil fuels, the EPSG leans heavily on local or regional CRSs,
largely ignoring global CRSs. Hardly any of the more than 100 map projections
and coordinate systems developed since the beginning of the XX century feature
in the EPSG index. Landmark projections such as the Eckert series, the
Homolosine, Eumorphic, Dymaxion or the Snyder series were never included in the
EPSG index. Not even the classical Mollweide projection (one of the turning
points towards modern cartography) appears in the EPSG index. With a FOSS4G
staple such as MapServer, this forces map (re-)projection to be delegated to
the client, which is not always possible.
Web mapping with global data thus remains a technical challenge with FOSS4G.
This address reviews several techniques and work-arounds that make global web
mapping possible with familiar FOSS4G technologies, starting with the
appropriate configuration of CRS-managing software, going through the set-up of
data servers, and finally providing examples with web mapping clients.
Micro animations are small animations on a website that support the user by attracting focus to where we want their attention. They can also be used to support relationships between elements in a web application, for example a list element and a map feature, or simply to spark a little joy. Users today have come to expect these animations in their online experiences. How can we provide these features in a web map? Map libraries give you some animations out of the box today, but what if you want something custom?
This presentation will give examples of how small animations can be used in web maps to support interactivity. We will walk through building our own custom animation that can be used as a starting point for many types of animations in web maps. The technique is library-agnostic, so we’ll show examples in MapLibre GL JS, Leaflet and OpenLayers.
QGIS Open Days are organised on the principle of self-organisation and community participation. The monthly sessions are open to anyone in the open source community and cover various topics, from presenting new developments and releases to tutorial-style work-flows and interactive open sessions.
In the year the channel has been active, QOD has generated over 50 videos, gained 4,000 subscribers, and on average receives approximately 5,000 views each month. Most QOD viewers are from the United States, Germany, India, France and the UK, and 94% of QOD viewers are male. The most popular video on the channel is “A geological map work-flow in QGIS with Chris Lambert”, with 5,277 views.
Looking forward, the QOD channel aims to be the official platform to show the functionality of new QGIS releases, plugins, work-flows, and open source GIS platforms. The channel aims to increase support, viewership and participation from a wider, more diverse audience and encourage different regions to contribute. Join the QOD community and let’s learn from each other.
Mapping is time-consuming and requires a large workforce when it comes to keeping maps up to date periodically. This brings the need to find alternative approaches to keeping maps up to date. Mobile mapping is the process of collecting geospatial data from a mobile vehicle using a 360º camera, laser scanner, GPS/IMU positioning system, and other sensors.
Many devices now include a geotag for every photo captured, and GPS accuracy can have major effects on the quality of street-level imagery and derived data. Join us in an exploration of the different accuracy levels of GPS-enabled cameras, where we will take a look at how different devices compare, and what varied levels of GPS accuracy look like both for image location and for data extracted using computer vision and structure from motion.
Understanding the differences between devices is an important step in planning street-level imagery capture, as it will align your expectations with the advantages and limitations of the hardware you use. We tested various devices and will share the results of our investigation, with the aim of equipping you to capture street-level imagery with the tools and methods that fit your needs.
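One simple way to quantify such positional error (our own illustration, not a method from the talk) is the great-circle distance between a photo's geotag and a surveyed reference point:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius=6371000.0):
    """Great-circle (haversine) distance in metres between two WGS84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

# Positional error of a hypothetical geotag against a surveyed ground-truth
# point (coordinates invented; roughly central Florence):
error_m = haversine_m(43.7696, 11.2558, 43.76965, 11.2558)
```

Averaging this error over many photo/reference pairs gives a practical accuracy figure for comparing devices.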
My name is Nicera, a slum dweller living in Kibra and founder of Community Mappers. At times walking in slums can be irritating, for you have to step on all kinds of garbage and bear with the smelly stench coming from the heaps of garbage scattered all over the thin footpaths in the slum. When it rains, some of this trash is swept straight to the river, some into broken and open sewers. Houses along the river banks are swept away, and this causes flooding in our slums, which has led to the loss of many lives. They say rains are blessings, but for us it is a curse. For this reason my organization, Community Mappers, saw a need to conduct trash study research in the slums and take it to the next level by negotiating with county governments and local stakeholders for tangible solutions that will lead to no deaths and no floods in our slums. In this study we bring out a community-based approach to solving problems through research, carried out through qualitative and quantitative approaches in these communities. We are the communities, and through data and proper research we can solve some of our problems.
The term Analysis Ready Data (ARD) started as a way to describe a Landsat product that would efficiently allow time-series-based analysis by providing a consistent, grid- and pixel-aligned product corrected to surface-based measurements. Since then it has come to mean a wide range of things, but without a clear set of standards on how to characterize ARD there is little to no interoperability among datasets that call themselves ARD.
The Analysis Ready Metadata initiative uses the SpatioTemporal Asset Catalog (STAC) spec as the vehicle for describing well-characterized data. This goes beyond the basic geospatial and temporal characteristics captured in the core STAC spec and into detail about the processing level of the data, the corrections that have been applied, as well as spatial and measurement uncertainties. Having well-characterized data through its STAC metadata enables discovery of usable data, automated processing using interoperable workflows, and tracking of the data provenance of derived products.
The CEOS ARD (previously CARD4L) specifications require certain metadata and processing to be done for it to be compliant and can use this STAC metadata to automatically assess the potential for a dataset to be compliant with the needed requirements. This talk will cover elements of STAC, ARD, and the CARD4L family product specifications.
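As a rough illustration (not the official ARD profile), a STAC Item can carry such characterization through extension fields alongside the core spatiotemporal keys. The `processing:level` and `proj:epsg` fields below come from the STAC processing and projection extensions; the id and values are invented:

```python
# A minimal STAC Item as a plain dict. "processing:level" and "proj:epsg"
# are STAC extension fields; the exact set of fields the Analysis Ready
# Metadata initiative requires may differ.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "landsat-scene-001",  # illustrative id
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[11.0, 43.0], [12.0, 43.0], [12.0, 44.0],
                         [11.0, 44.0], [11.0, 43.0]]],
    },
    "bbox": [11.0, 43.0, 12.0, 44.0],
    "properties": {
        "datetime": "2022-08-23T10:00:00Z",
        "processing:level": "L2",  # processing extension: surface reflectance
        "proj:epsg": 32632,        # projection extension: UTM zone 32N
    },
    "links": [],
    "assets": {},
}
```

A compliance checker can then inspect these fields to assess, automatically, whether a dataset could meet a CEOS ARD specification.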
The scientific community is faced with a need for greatly improved data sharing, analysis, visualization and advanced collaboration based firmly on open science principles. Recent and upcoming launches of new satellite missions with more complex and voluminous data, as well as the ever more urgent need to better understand the global carbon budget and related ecological processes, provided the immediate rationale for the ESA-NASA Multi-mission Algorithm and Analysis Platform (MAAP).
This highly collaborative joint project of ESA and NASA established a framework between ESA and NASA to share data, science algorithms and compute resources in order to foster and accelerate scientific research conducted by ESA and NASA EO data users. Presented to the public in October 2021, the current version of MAAP provides a common cloud-based platform with computing capabilities co-located with the data, a collaborative coding and analysis environment, and a set of interoperable tools and algorithms developed to support the estimation and visualization of global above-ground biomass.
Data from the Global Ecosystem Dynamics Investigation (GEDI) mission on the International Space Station and the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) have been instrumental in the first products of MAAP including the first comprehensive map of Boreal above-ground Biomass and a current Global Biomass Harmonization Activity, but the platform is also being specifically designed to support the forthcoming ESA Biomass mission and incorporate data from the upcoming NASA-ISRO SAR (NISAR) mission. While these missions and the corresponding research which includes airborne, field, and calibration/validation data collection and analyses, provide a wealth of data and information relating to global biomass estimation, they also present data storing, processing and sharing challenges. The NISAR mission alone will produce about 80TB/day. These large data volumes present a challenge that would otherwise place accessibility limits on the scientific community and impact scientific progress.
Other challenges being addressed by MAAP include: 1) Enabling researchers to easily discover, process, visualize and analyze large volumes of data from both agencies; 2) Providing a wide variety of data in the same coordinate reference frame to enable comparison, analysis, data evaluation, and data generation; 3) Providing a version-controlled science algorithm development environment that supports tools, co-located data and processing resources; and 4) Addressing intellectual property and sharing challenges related to collaborative algorithm development and sharing of data and algorithms.
MAAP products can be explored on the MAAP Dashboard at https://earthdata.nasa.gov/maap-biomass or the joint platform entrance at scimaap.net. MAAP also can be accessed through individual NASA (https://maap-project.org) and ESA (https://esa-maap.org/) landing pages.
For sustainable public transport (PT) operations and planning, the most important part of the decision-making cycle is the accurate and up-to-date collection and sharing of static public transport data. One of the biggest problems is that the data available for comprehensive analyses are often inaccurate or incomplete, and data formats vary between PT agencies. To solve these problems, PT data formats and visualization methods have been investigated.
GTFS (General Transit Feed Specification) is the most commonly used format for specifying PT schedules, lines, stops and routes. In other words, GTFS is a data specification that allows PT agencies to publish their transit data in a format that can be consumed by a wide variety of software applications. Today, the GTFS data format is used by thousands of PT providers.
Our company has developed an open-source GIS-based GTFS editor using its knowledge in data analysis, big data, and data visualization in the field of PT. Within the scope of this study, firstly, open PT data (line, stop, schedule, etc.) were obtained from different municipalities and visualized using open source GIS applications such as QGIS, SAGA GIS, JOSM, GeoDa, and data quality evaluation was made. State of art in the field of ITS has been researched and open source GTFS editors have been studied.
Open source map servers such as GeoServer were used for sharing, editing and organizing the stop & line maps produced in GIS applications. For geospatial data analysis, stop points were verified using the geopandas and shapely libraries. A PostgreSQL database was used to store geo-based stop & line data, with the PostGIS extension, which adds spatial capabilities to PostgreSQL so it can store, query, and manipulate spatial data. On the server side, GeoAlchemy (an extension of SQLAlchemy) was used for working with spatial databases and geospatial queries. On the client side:
- Turf: a geospatial engine including spatial operations and helper functions, used for any spatial operations.
- MapboxGL: a WebGL-powered library used for interactive vector maps in the web application.
- DeckGL: a WebGL-powered geospatial visualization framework, used to render more than 1k stops & lines with high performance.
- NebulaGL: provides geospatial drawing and editing tools for lines; added for creating & editing routes from the map.
- OSM Nominatim: a geocoding library that allows users to find a stop address from a location.
As a result of the studies, the open source GTFS editor was developed, which will create a geographical base for PT planning and operational methods. The developed GIS-based editor provides fast and more effective PT management by enabling the visualization, creation and management of lines and stops of PT agencies with user-friendly interfaces.
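Because GTFS feeds are plain CSV files, they are straightforward to load with standard tooling. Here is a minimal stdlib sketch of reading a stops.txt (the sample rows are invented; the column names come from the GTFS reference):

```python
import csv
import io

# Minimal stops.txt sample; stop_lat/stop_lon are WGS84 per the GTFS spec.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
S1,Santa Maria Novella,43.7764,11.2481
S2,Campo di Marte,43.7789,11.2850
"""

def read_stops(text):
    """Parse a GTFS stops.txt into (stop_id, name, lat, lon) tuples."""
    reader = csv.DictReader(io.StringIO(text))
    return [(r["stop_id"], r["stop_name"], float(r["stop_lat"]), float(r["stop_lon"]))
            for r in reader]

stops = read_stops(stops_txt)
```

From tuples like these, an editor can build point geometries for display and editing on the map.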
Maybe you've heard of Kart, the great new geodata versioning tool from the team at Koordinates? But did you know that Kart also has a QGIS plugin so you can do real data versioning without needing to leave QGIS?
In just 5 minutes we'll demonstrate how to import data into a new Kart repository, make and review some changes, merge a branch, and push everything to a remote server. All from QGIS!
We’re drowning in data, but the geospatial world lags badly behind in versioning tools compared to our software counterparts. Kart (https://kartproject.org) is solving this with a practical open tool for versioning datasets, enabling you to work more efficiently and collaborate better.
Kart allows you to quickly and easily manage history, branches, data schemas, and synchronisation for large & small datasets between different working copy formats, operating systems, and software ecosystems.
Modern version control unlocks efficient collaboration, both within teams and across organisations: everyone stays on the same page, and you can review and trace changes easily, ultimately using your time more efficiently.
It is well-known that Free Open Source Software is part of Space and Planetary Exploration, and the latest generation of rovers and drones on Mars embed FOSS components and frameworks. But what about Free Open Source for Geospatial software and data access and availability? We will travel the timeline of planetary cartography, from the first steps of remote and direct observation of the bodies of our Solar System to the era when Geographic Information Systems spread in Planetary Science and FOSS4G starts to play an essential role in studies and missions to environments beyond planet Earth.
We're living in a world of APIs, and CRUD operations are the basis of most of them. Many frameworks such as Django, Flask and Laravel provide out-of-the-box solutions to filter data, which covers almost all needs for separating data based on column values.
When it comes to geospatial data, we expect to filter data based on its location property instead of its metadata. This is where things get complicated: if you are using a framework that doesn't have a package or library built to handle such use cases, you are likely to be dependent on either the database or an external package to handle it.
Fortunately GeoDjango [https://docs.djangoproject.com/en/4.0/ref/contrib/gis/] (Django's GIS extension) allows us to create databases which understand geometry and can process it [https://docs.djangoproject.com/en/4.0/ref/contrib/gis/geoquerysets/#gis-queryset-api-reference]. It also provides support for writing APIs using a REST Framework extension [https://pypi.org/project/djangorestframework-gis/], which takes this to the next level, allowing users to output data in various formats, paginate GeoJSON, create TMS tile filters, and more.
In this talk we'll scratch the surface of this Python package and see how to build basic CRUD APIs to push and pull GIS data to and from a PostgreSQL database, along with filtering it.
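To illustrate the kind of location-based filtering described above, here is a toy, pure-Python bounding-box filter. It is only a conceptual sketch: in GeoDjango the equivalent would be a queryset lookup such as `Shop.objects.filter(location__within=area)`, executed by the spatial database rather than in Python (the model and field names here are hypothetical):

```python
# Toy records with a location property. In GeoDjango this filtering would be
# a spatial queryset lookup pushed down to PostGIS, not a Python loop.
records = [
    {"name": "shop-a", "lon": 11.25, "lat": 43.77},  # Florence-ish
    {"name": "shop-b", "lon": 2.35, "lat": 48.86},   # Paris-ish
]

def within_bbox(recs, min_lon, min_lat, max_lon, max_lat):
    """Keep records whose point falls inside the bounding box."""
    return [r for r in recs
            if min_lon <= r["lon"] <= max_lon and min_lat <= r["lat"] <= max_lat]

florence_only = within_bbox(records, 11.0, 43.0, 12.0, 44.0)
```

The point of GeoDjango is precisely that you never write this loop yourself: the lookup becomes indexed SQL.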
Over the years, OSGeo's Google Summer of Code initiative has grown into a rich source of contributions to geospatial software development. In the last 16 years, many OSGeo projects, comprising incubating projects, community projects, and guest projects, have progressed thanks to the contributions of student developers. Some of these students continued to participate as contributors to the projects and went on to take up mentoring and organizing responsibilities. This is FOSS4G in a true sense, in terms of the individual and collective growth of the student developers and the OSGeo community. In this talk, the OSGeo GSoC Admins team will recognise the efforts of all the mentors and students involved so far and present the state of GSoC 2022. The Admins will also present possibilities for new projects to be part of GSoC with OSGeo as an umbrella organization.
Mergin Maps (Mergin synchronization server and the Input app) is a package of free and open-source components developed by Lutra Consulting since 2017. It allows users to seamlessly share QGIS projects with others and keep a history of the geo-data. Moreover, it allows collecting data in the field with the mobile application Input, fully based on the QGIS core engine. No more paper for the collection of vital data in the field! We will briefly present published case studies to show the capabilities and features of the solution.
We will talk about the recent development of the product. In the Input app, we have focused on improving the field survey experience by adding support for precise external GPS receivers, a stake-out navigation mode and attaching multiple photos to a single feature.
On the server side, in Mergin, we will demonstrate the ability to store, version and share your geo-data with your team. You will see the new feature showing a map overview of your Mergin project on the dashboard. To fully integrate into CDI, the DB-sync tool for two-way synchronization between Mergin and PostgreSQL will be presented. Advanced features for use in large teams, such as selective synchronization and work packages (subprojects for teams within companies), will be explained.
At the end of the talk, we will uncover the upcoming roadmap for the new features coming in the second half of 2022.
PDAL is the Point Data Abstraction Library: a C/C++ open source library and set of applications for translating and processing point cloud data. It is not limited to LiDAR data, although the focus and impetus for many of the tools in the library have their origins in LiDAR. PDAL allows you to compose operations on point clouds into pipelines of stages. These pipelines can be written in a declarative JSON syntax or constructed using the available API. This talk will focus on the current state of the PDAL point cloud processing library and related projects such as COPC and Entwine. It will cover the most common filters, readers and writers, along with a general introduction to the library, its processing models, language bindings and command-line based batch processing. The first part will cover new features for current users, with some discussion of installation methods including Docker, binaries from package repositories, and Conda packaging. For more info see https://pdal.io
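As a small illustration of that declarative JSON syntax, here is a minimal pipeline sketch that reads a LAS file, keeps only ground-classified points (ASPRS class 2) via `filters.range`, and writes the result; the filenames are placeholders:

```json
{
  "pipeline": [
    "input.las",
    {
      "type": "filters.range",
      "limits": "Classification[2:2]"
    },
    "ground_only.las"
  ]
}
```

Such a pipeline can be run with `pdal pipeline` on the command line or built programmatically through the API.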
The Copernicus Climate Change Service (C3S) Climate Data Store (CDS) is a single point of access to a wide range of free, quality-assured climate data, along with a suite of tools for performing cloud-based analysis and visualisation of very large datasets. Launched in 2018, the CDS provides over 100 datasets and 30 interactive applications for a global, interdisciplinary and intersectoral audience of over 100,000 users.
The Copernicus Data Store (CopDS) project aims to reimagine the CDS, making use of modern technologies and knowledge gained during the development of the existing system to expand and streamline its functionalities and improve its performance and scalability. We present a high-level blueprint of the in-development CopDS, with an emphasis on how we plan to overcome the limitations of the original CDS. We explore our plans for the development of a new suite of open-source Python tools for performing retrieval, analysis and visualisation of climate and atmospheric data under the CopDS project, along with our plans for offering free cloud-based infrastructure for processing and visualising very large datasets through an easy-to-use Python web interface. We also discuss the development of tools for transforming simple Python code into high-quality web applications for exploring CopDS climate and atmospheric datasets, providing tools for interactive mapping, graphical user interfaces and a results cache for responsiveness.
COMTiles (https://github.com/mactrem/com-tiles) is a streamable, read-optimized file archive for hosting map tiles at global scale on a cloud object storage. Most geospatial data formats (like MBTiles or Shapefile) were developed with only POSIX filesystem access in mind. COMTiles, in contrast, is designed to be hosted on a cloud object storage like AWS S3 or Azure Blob Storage without the need for a database or server on the backend side. The map tiles of a COMTiles archive can be accessed directly from a browser via HTTP range requests. COMTiles is already successfully used in some projects to significantly reduce hosting costs and simplify the handling of large tilesets in the cloud.
Structure of the talk:
- Basic concepts of COMTiles like the structure of the streamable index table (pyramids vs space-filling curves vs fragments)
- Comparison of COMTiles to existing cloud native geospatial formats regarding the visualization of large datasets in the browser
- Advantages of using a streamable archive format like COMTiles over directly hosting the map tiles in the cloud
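The key access pattern behind such archives is fetching only the byte ranges that hold the index fragment and the requested tile. The sketch below simulates that pattern on a local file; a real client would instead issue an HTTP request with a `Range: bytes=start-end` header against object storage (the archive layout here is invented for illustration):

```python
import io

def read_range(f, offset, length):
    """Read `length` bytes at `offset` -- the same access pattern an HTTP
    Range request performs against a tile archive on object storage."""
    f.seek(offset)
    return f.read(length)

# Simulate an archive: a 4-byte magic header, then two "tiles" back to back.
archive = io.BytesIO(b"COMT" + b"tile-aaaa" + b"tile-bbbb")

# A real client would first fetch an index fragment to learn each tile's
# offset and size; here the offsets are hard-coded for illustration.
tile2 = read_range(archive, 13, 9)
```

Because only the needed ranges travel over the network, the client never downloads the full archive, which is what makes hosting a global tileset as one static file practical.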
Although integration of GRASS GIS with Python has been well supported for several years, using GRASS with computational notebooks such as Jupyter Notebooks was inconvenient up until recently. Computational notebooks allow users to share live code with in-line visualizations and narrative text, making them a powerful interactive teaching and collaboration tool for geospatial analytics. In this talk, we’ll introduce a new GRASS GIS package, grass.jupyter, that enhances the existing GRASS Python API to allow Jupyter Notebook users to easily manage GRASS data, visualize data including spatio-temporal datasets and 3D visualizations, and explore vector attributes with Pandas. We’ll demonstrate how to create interactive maps through integration with folium, a Leaflet library for Python, and we’ll look at an example use case: using notebooks to teach an advanced geospatial modeling course for graduate students at NC State University.
Grass.jupyter is still under active development but is available experimentally in GRASS version 8.0 and officially with GRASS version 8.2.
The proliferation of client-side analytics and ongoing vulnerabilities in shared code libraries have fueled the need for better safety standards for running executables from potentially unknown sources. WebAssembly (WASM) is a compilation target that allows lower-level languages like Rust, C, and Go to run in the browser or server-side at near-native speeds. Much like Docker changed the way we run virtualized workflows, WASM runtimes create safe virtual environments where access to the host system is limited.
We present dask-geomodeling: an open source Python library for stream processing of GIS raster and vector data. The core idea is that data is only processed when required, thereby avoiding unnecessary computations. While setting up a dask-geomodeling computation, there is instant feedback on the result. This results in a fast feedback loop in the (geo) data scientist's work. Big datasets can be processed by parallelizing multiple data queries, on a single machine or on a distributed system.
In geographical information systems (GIS), we often deal with data pipelines to derive map layers from various datasets. For instance, a water depth map is computed by subtracting the digital elevation map (DEM) from a water level map. These procedures are often done using open source products such as PostGIS and QGIS. However, for medium to large datasets (> 10 GB) these analyses become costly due to memory restrictions and computational cost. As a rule, these issues are tackled by manually cutting the dataset into smaller parts. However, this is a tedious and time-consuming task, and in case one needs to do this regularly, it is not feasible.
We present the open source Python library dask-geomodeling to solve this issue. Instead of a script, dask-geomodeling requires a so-called “graph”, which is the definition of all operations that are required to compute the derived dataset. This graph is generated by plain Python code, for instance:
from dask_geomodeling.raster import RasterFileSource

plus_one = RasterFileSource('path/to/tiff') + 1
Note that these operations are lazy: there is no actual computation done and therefore the above line executes fast. Only when actual data is requested:
plus_one.get_data( bbox=(155000, 463000, 156000, 464000), projection='epsg:28992', width=1000, height=1000 )
An array containing the data is computed. No need to load the whole TIFF-file in memory if you only use a small part!
The computation occurs in two steps. First, a computational graph is generated containing the required functions. While generating the computational graph, the operations may be chunked into smaller parts. Second, this graph is evaluated by dask, using any scheduler (single thread, multithreading, multiprocessing, distributed) that dask provides.
This library is open source under the name “dask-geomodeling” and is distributed on GitHub, PyPI, and Anaconda. A hosted cloud version is also available under the name Lizard Geoblocks. Currently, we have implemented a range of operations for rasters, vectors, and combinations. The community is welcome to use our library, benefit from it, and expand it!
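The lazy-graph idea can be sketched in a few lines of plain Python. This is a toy illustration of the evaluation model, not the dask-geomodeling API:

```python
class ListSource:
    """Stands in for a data source: yields data only when requested."""
    def __init__(self, data):
        self.data = data

    def get_data(self):
        return list(self.data)

class Add:
    """Toy lazy node: stores its inputs, computes only on get_data()."""
    def __init__(self, source, value):
        self.source, self.value = source, value

    def get_data(self):
        # Evaluation is deferred to this call, mirroring the lazy graphs
        # described above.
        return [v + self.value for v in self.source.get_data()]

plus_one = Add(ListSource([1, 2, 3]), 1)  # nothing computed yet
result = plus_one.get_data()              # evaluation happens here
```

In the real library the deferred request carries a bbox, projection and resolution, so each evaluation touches only the slice of the source data it actually needs.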
- dask-geomodeling, https://github.com/nens/dask-geomodeling, https://dask-geomodeling.readthedocs.io/
- dask, https://dask.org/
- Lizard Geoblocks, https://lizard.net/
GeoMapFish is an open source WebGIS platform developed in close collaboration with a large user group. It targets a variety of uses in public administrations and private groups, including data publication, geomarketing and facility management. OpenLayers and an OGC architecture allow the use of different cartographic engines (MapServer, QGIS Server). Recently new features have been added, such as vector tiles integration, from raw data to visualization. In order to get rid of the AngularJS dependency, a roadmap has been established for a migration to a web components architecture. Everything has been planned so that our users can continue to develop their projects during this process. K8S support is evolving with the implementation of the necessary tools for Azure environments. A highly integrated platform, a large feature scope, fine-grained security, a reporting engine, top performance and excellent quality of service are characteristics of the GeoMapFish solution. In this talk we'll present the key usages, the web components migration process and the latest developments, including vector tiles support.
The presentation will share how UNESCO’s International Institute for Educational Planning (IIEP) applies FOSS4G technologies to advance Ministries of Education’s use of geospatial data in planning better educational results among school children. Our work here at the IIEP-UNESCO is to design tools for educational planners all around the world, and FOSS4G has been the cornerstone of our work.
Educational planners are the professionals who work in Ministries of Education (in district offices or in the central office) and are tasked with designing the best possible strategies and interventions to make sure that all learners will get good quality access to relevant and efficient educational services. For decades, planners have been using geospatial insights with minimal computing capacity and, to be honest, very little spatial data.
Over the last few years, we have been completely refurbishing the methods and the data that we use as planners, and working with the FOSS4G community has been instrumental in fulfilling our mission.
This talk is about sharing concrete applications and use cases of geospatial data in educational planning. For example, we spatialize the number of students that will enrol in each grade in different communities; we plan for the training, recruitment, deployment, and retention of teaching staff; and we lead suitability analyses to check where best to build a new school or refurbish existing ones.
So in this presentation we will show you examples of tools and methodologies, all built on FOSS:
- Spatialized school-age populations in Jamaica
- Routing optimization of inspection circuits in Finland
- Geographically-weighted regressions for improving learning in Colombia
- School infrastructure and natural hazard risk model in Indonesia
- Sea level rise and historical floods in Viet Nam
- School catchment areas based on travel time (check out the presentation submitted by Riku Oja from GISPO, it’s our joint work!)
As an institution, here at IIEP we have sought to advance this line of work by (1) making a complete switch to free and open source software (FOSS) and open access documentation and data sources (open science), (2) bringing geospatial approaches and big, small, and thick data to update EDplanning processes, (3) creating technical partnerships with organizations such as GISPO and UNOSAT, among others, and (4) collaborating on informing education policy-making with geospatial insights.
This talk is an invitation to all geospatial data geeks to join us in shaping the future of educational planning.
We’re drowning in data, but the geospatial world lags badly behind in versioning tools compared to our software counterparts. Kart is solving this with a practical open tool for versioning datasets, enabling you to work more efficiently and collaborate better.
We will introduce you to Kart and demonstrate some of the key features, including our QGIS plugin. And we'll highlight what’s coming next on our roadmap.
Since 2021 we have added support for Raster and Point Cloud datasets, and we'll be showing how we build on Kart's versioning and spatial filtering techniques to efficiently navigate, access, and use large and small datasets.
Kart allows you to quickly and easily manage history, branches, data schemas, and synchronisation for large & small datasets between different working copy formats, operating systems, and software ecosystems.
Modern version control unlocks efficient collaboration, both within teams and across organisations: everyone stays on the same page, and you can review and trace changes easily, ultimately using your time more efficiently.
The Mesh Data Abstraction Library (MDAL) has become an integral part of QGIS over recent years. MDAL is used in QGIS to parse meteorological and hydrological data. MDAL is an open source library and has recently joined the OSGeo family as a Community project.
MDAL data can be 1-dimensional, 2D or stacked 3D. QGIS has been extended to render all those types of data in 2D and 3D map canvases. Once data are loaded in QGIS, users can easily style and explore the temporal dimension of the data using generic QGIS tools. Additional plugins have been developed that leverage mesh data in QGIS to slice and dice it.
In addition to visualising the data, new tools have been developed to directly edit the unstructured mesh data in QGIS. Users can edit geometries and values of the faces and vertices of the mesh data. The built-in validation tools for mesh editing ensure the resulting mesh is topologically correct during and after mesh editing operations.
OpenPlains - Is it the new web GRASS?
The European Centre for Medium-Range Weather Forecasts (ECMWF) is an independent intergovernmental organisation which produces and disseminates numerical weather and environmental predictions to its users in national meteorological services as well as to commercial customers. Recently, ECMWF started the move towards serving data to users beyond operational forecasters in Member States and paying commercial customers, by adopting an open data policy which will be implemented in phases from 2020 to 2025. The first phase, in 2020, included opening hundreds of web forecast charts and making archived data available under a Creative Commons (CC BY 4.0) open licence. The next step came in January 2022, when the production of an open subset of real-time medium-range forecasts began.
This phased move towards free and open data aims to support creativity and innovation in the field of scientific research as well as weather applications. It also represents a step towards more reproducible open science. However, this cannot be achieved by only opening the real-time data. Users need to be able to find and easily use the data and integrate it into their own research work or application workflows. Reliable access to the data is achieved by making it available both through ECMWF's HTTPS service and via the Microsoft Azure cloud, where the archived data is kept as well.
To make the data more FAIR (Findable, Accessible, Interoperable and Reusable), additional development work is being done. This work includes the design of an API to easily download the geospatial data, and the development of open source Python libraries to process and visualise it. These open source libraries build on open geospatial software, such as PROJ, to deal with different projections. To present these new tools and help users understand how to retrieve and process ECMWF data, a set of Jupyter notebooks was created, each reproducing one open weather forecast chart, from downloading the data to the final visualisation.
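As a minimal illustration of the kind of projection handling that PROJ takes care of in these libraries (this is a textbook formula offered as my own example, not ECMWF code), here is the forward spherical Web Mercator projection in plain Python:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS 84 semi-major axis, in metres

def web_mercator(lon_deg, lat_deg):
    """Project WGS 84 lon/lat (degrees) to spherical Web Mercator (EPSG:3857) metres."""
    x = EARTH_RADIUS * math.radians(lon_deg)
    y = EARTH_RADIUS * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# The equator/prime-meridian origin maps to (0, 0).
x, y = web_mercator(0.0, 0.0)
```

In practice a library such as PROJ handles datum shifts, ellipsoidal formulas, and hundreds of other projections, which is why the notebooks delegate to it rather than hand-coding formulas like this one.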
This talk will give a short overview of which data is available in the open data set, and will then focus on the software and Jupyter notebooks development.
The European Flood Awareness System and the Global Flood Awareness System (EFAS and GloFAS) are the two early warning services for floods within the Copernicus Emergency Management Service (CEMS), operated by the EU Joint Research Centre (JRC). EFAS and GloFAS aim to complement national and regional services by providing medium-range flood forecasts and hydrological outlooks for large, transboundary rivers. Data and products are accessible to eligible users through the Climate Data Store and dedicated web interfaces. ECMWF, in its role as the computational centre within CEMS, is responsible for running the forecasts and the post-processing, in addition to co-developing and hosting the EFAS and GloFAS information systems.
These two information systems consist of back-end/front-end web services based on OGC standards and open-source software. As is often the case, a web-based map viewer displays different layers produced by a WMS back-end. These layers are graphical representations of the output of the hydrological models and of meteorological observations: flood probability, soil moisture, return period, observed precipitation, and so on. For most layers a new forecast is produced every 12 hours for EFAS and every 24 hours for GloFAS.
Unlike many similar services, however, the aim of EFAS and GloFAS is not only to offer the latest forecasts or observations but also to let users browse data from previous days, so that older forecasts can be compared with actually observed events. This inherently means supporting the time dimension of the WMS standard, and managing the large quantity of data that accumulates every day. In the case of EFAS, for example, an additional 1.5 GB of data is produced twice a day.
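Supporting the time dimension boils down to honouring the standard TIME parameter of WMS GetMap requests. As a sketch (the endpoint, layer name, and bounding box below are made-up placeholders, not the real EFAS service), such a request can be built like this:

```python
from urllib.parse import urlencode

def getmap_url(base_url, layer, bbox, time, width=800, height=600):
    """Build a WMS 1.3.0 GetMap URL for a time-aware layer."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
        "TIME": time,  # ISO 8601 instant selecting which day's forecast to display
    }
    return base_url + "?" + urlencode(params)

url = getmap_url("https://example.org/wms", "flood_probability",
                 (35.0, -10.0, 60.0, 30.0), "2022-08-23T00:00:00Z")
```

Varying only TIME lets the client step back through previous forecast runs, which is exactly the browsing behaviour described above.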
It also means handling the inevitable changes in data formats and structures that arise as the service grows and new features are added, without breaking backward compatibility. New layers are added, old layers are removed, and changes in the geographical domain or the projection of a certain layer must be supported from a certain date onward; not to mention increasing the number of forecast cycles from one per day to two or more.
To make matters worse, data access must be restricted on both the front-end and the back-end based on a matrix of user privileges, the requested product, and the requested date. For example, some layers are offered to all users with no time restrictions, while others are restricted to certain users for the latest 30 days and freely accessible to everyone for dates more than 30 days old.
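A hedged sketch of how such a privilege matrix might be evaluated (the rule structure, layer names, and roles here are my own illustration, not the actual EFAS/GloFAS implementation):

```python
from datetime import date, timedelta

# Per-layer policy: embargo_days = 0 means open to all with no time restriction;
# otherwise only privileged roles may see data newer than the embargo window.
POLICIES = {
    "observed_precipitation": {"embargo_days": 0, "privileged": set()},
    "flood_probability": {"embargo_days": 30, "privileged": {"forecaster"}},
}

def can_access(role, layer, requested, today):
    """Return True if a user with `role` may see `layer` for the `requested` date."""
    policy = POLICIES[layer]
    if policy["embargo_days"] == 0 or role in policy["privileged"]:
        return True
    # Everyone may see data older than the embargo window.
    return requested <= today - timedelta(days=policy["embargo_days"])
```

For instance, a public user asking for `flood_probability` from last week would be denied, while the same request for a date six months ago would succeed.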
In this talk we describe the challenges of developing and operating an authentication-aware web service heavily based on large geospatial datasets with a strong diachronic component.
For many years QGIS focused on 2D spatial data, and support for
3D data was very limited. This has changed in the last couple of years,
and QGIS is gaining a full suite of tools to work with 3D data.
The QGIS development team has been actively working on better support for
data with elevation, such as point clouds, raster digital elevation models,
3D vectors and meshes. This has been possible mainly thanks to the successful
crowdfunding campaign run in autumn 2021.
In this talk, we will show outcomes of these development efforts including:
- a brand new profile tool for detailed inspection of elevation data
- new 2D/3D visualization options for data
- great improvements to the usability of 3D map views
- support for Cloud Optimized Point Cloud (COPC) format
We will also discuss the plans for future releases and how QGIS can
better meet the requirements of users, given the ever-increasing supply of 3D data.
Three-dimensional representations of surface terrain and structures are essential for a range of widespread applications and form a base dataset underlying many decision-making processes. A few examples include land use planning, aerial overview, operational analysis, emergency handling, route and transport planning, and geographical and meteorological modelling. Recently, the Norwegian Government and the Norwegian Mapping Authority tasked the acquisition of high-resolution Light Detection and Ranging (LIDAR) data covering the entire mainland with a minimum of 2 point measurements per square metre. In addition, all aerial LIDAR acquisitions tasked by the government since the early 2000s are publicly available for download. In this work, using FOSS, we discuss the height accuracy of ground-classified datasets (i.e. Digital Terrain Models and Digital Surface Models) with varying original acquisition ground point densities. We create classification pipelines that allow us to calculate derivative products, such as a “normalized” vegetation density, and compare these over time. This work in progress discusses our experience applying open source tools to open data and some of the challenges we encountered scaling our methods for big data.
The Mapping Service at the Center for Urban Research at the City University of New York (CUNY) Graduate Center engages with foundations, government agencies, businesses, nonprofits, and academics to use spatial information and analysis in research projects. Our most recent set of web maps focuses on the decennial redistricting process in the United States. Redistricting is often a complex process, and delays in publishing data from the 2020 Census due to COVID-19 shortened the time frame for redrawing legislative lines in many states. Given the often rushed nature of redistricting, it was crucial to provide fair-districts advocates, journalists, and lawmakers with accurate maps and data shortly after the proposed districts became publicly available.
In previous projects, we relied on proprietary back-end stacks using ArcGIS, Microsoft SQL Server, and the .NET framework. These products afforded a viable but inflexible solution to our GIS needs: the online mapping platform for ArcGIS is not as elegant as its open source counterparts, Microsoft SQL Server did not provide a solution for directly serving vector tiles, and each upgrade of Windows, IIS, and Visual Studio presented unique challenges.
Last year we implemented a new back-end stack to connect our spatial databases to our websites using FOSS solutions: QGIS, PostgreSQL with PostGIS, Mapbox, and Node.js. The result is a free, fully customizable solution that is easy to update, maintain, and migrate. We are currently using it in about a dozen applications to serve vector tiles and query demographic and other data. With our new workflow we were able to quickly upload dozens of map proposals, calculate metrics to analyze the potential impacts of each one, and present them on our website within hours of the data being made available to us.
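Serving vector tiles directly from PostGIS typically relies on its ST_AsMVT/ST_AsMVTGeom functions. A hedged sketch of the kind of per-tile query a back-end might issue (the `districts` table and its columns are hypothetical, not our actual schema):

```python
# SQL template for one slippy-map tile (z/x/y), rendered against a hypothetical
# `districts` table with a `geom` column in Web Mercator (EPSG:3857).
MVT_QUERY = """
WITH bounds AS (
    SELECT ST_TileEnvelope(%(z)s, %(x)s, %(y)s) AS env
),
mvtgeom AS (
    SELECT ST_AsMVTGeom(d.geom, b.env) AS geom, d.district_id, d.population
    FROM districts d, bounds b
    WHERE d.geom && b.env
)
SELECT ST_AsMVT(mvtgeom.*, 'districts') FROM mvtgeom;
"""

# With a driver such as psycopg2 the tile bytes would be fetched roughly like:
#   cur.execute(MVT_QUERY, {"z": 12, "x": 1205, "y": 1539})
#   tile = cur.fetchone()[0]
```

Because the database itself emits protobuf-encoded Mapbox Vector Tiles, the web tier only needs to pass the bytes through with the right content type.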
Rockfall risk analysis and mitigation activities are key points of land management in mountain areas and along coastal cliffs, aimed at protecting the population, structures, infrastructures, and the economic activities involved, such as roads, industry, and tourism.
Rockfall is a complex landslide phenomenon, widespread over large areas and characterised by high variability. As a function of the amount of available data to describe such variability, the risk analysis can be carried out at different levels of detail, i.e. at different reference scales, each one characterised by specific objectives, procedures, and input data (Fell et al, 2008).
At the detailed scale (> 1:5000), in order to design risk mitigation works, it is necessary to analyse localized rockfall phenomena through specific methodologies requiring careful identification of danger scenarios, a statistical description of the parameters, and sophisticated probabilistic calculation tools.
At the medium-large scale (1:5000 - 1:25000), on the contrary, due to the difficulty of finding detailed information over larger slope portions, widespread instability sources can be analysed based on simplified mechanical considerations and several spatial approximations. Such medium-large scale analyses can be used as a management tool for territorial planning and can easily be implemented in GIS software.
This work presents a medium-large scale Rockfall Quantitative Risk Assessment (QRA) procedure fully developed within the QGIS environment. The procedure is based on the IMIRILAND methodology (Castelli and Scavia, 2008), which obtains risk maps through integrated, consequential phases and simple raster calculations. The main steps of the IMIRILAND methodology are:
• hazard analysis, aimed at defining, for a given rockfall scenario, the potentially involved area, the intensity of the damaging phenomenon and the temporal probability of occurrence;
• identification of the elements at risk and definition of their value and their exposure with reference to physical, social, environmental and economic considerations;
• analysis of the vulnerability of the elements at risk, i.e. the degree of loss of the element as a consequence of the impact with the falling block;
• calculation of the risk, combining the hazard with value, exposure and vulnerability of the elements at risk.
The IMIRILAND QRA procedure was applied to the mountain site of Sorba Valley (VC), in the North-Western Alps. The site covers an area of about 10 km², with altitudes ranging from 750 m up to 2035 m a.s.l. The site is prone to rockfall events, which have historically involved some hamlets and sections of the valley's main road. However, very little information on such events is available, and no indication can be obtained of rockfall recurrence or the volumes involved. For this reason, temporal aspects could not be taken into account, and relative (spatial) risk maps were produced in this work.
All the analyses were carried out using open data available as web services and datasets from the Regione Piemonte GeoPortal:
• DTM with 5 m x 5 m raster resolution – GeoPortale Piemonte;
• Orthophoto AGEA 2018 – GeoPortale Piemonte;
• Piemonte Land Cover BDTRE (Base Dati Territoriale di Riferimento degli Enti) – GeoPortale Piemonte;
• vehicular mobility TGM (Traffico Giornaliero Medio) – GeoPortale Piemonte.
Three rockfall design scenarios were identified, each relating to homogeneous rockfall source areas associated with different design block volumes. Each scenario included more than 3600 source points, extracted through analysis of the DTM (slope and aspect) and observation of the orthophoto to identify rocky outcrop zones. For each scenario, a quick estimate of a time-independent hazard was obtained using the QGIS QPROTO plugin (Castelli et al, 2021). The plugin is based on the Cone Method (Jaboyedoff and Labiouse, 2011) and runs a visibility analysis through the r.viewshed GRASS GIS module, combined with simplified topographic, geomorphological and mechanical considerations. The result of the analysis is a series of raster maps with the distribution of computed values of velocity, energy, and relative spatial hazard.
The following step of the IMIRILAND procedure is the analysis of the damage, based on the collection of information on the exposed elements. To this aim:
• the elements at risk were classified according to various Land Cover categories from BDTRE, associated with relative hierarchical values. Physical and social values were taken into account for each element. Physical value is mainly linked with the type of element and with the reconstruction costs while social value is linked to the presence of persons and the social utility of the asset;
• the physical exposure of the elements at risk was defined for each hazard scenario with reference to the computed runout area. The social exposure was defined taking into account the time spent by people inside buildings or on the roads;
• the physical vulnerability of the elements at risk was defined on the basis of the intensity of the phenomenon in terms of rockfall energy and the type of element. The social vulnerability is the same as the physical one inside buildings and is 100% outside buildings.
Physical and social damage maps were then obtained for each hazard scenario as the product of the value, exposure, and vulnerability of the elements located in the involved area. Due to the lack of information on the temporal probability of occurrence of the scenarios, the damage maps correspond to relative, time-independent risk maps.
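The damage-map step is a simple raster calculation. Assuming co-registered rasters already loaded as arrays (the tiny arrays below are toy data, not the Sorba Valley inputs), it can be sketched with NumPy:

```python
import numpy as np

# Toy 2x2 co-registered rasters for one hazard scenario (values are illustrative).
value = np.array([[1.0, 0.5], [0.8, 0.0]])          # element value (normalised)
exposure = np.array([[1.0, 1.0], [0.5, 0.0]])       # fraction of time exposed
vulnerability = np.array([[0.9, 0.3], [1.0, 0.0]])  # degree of loss on impact

# Cell-by-cell product gives the relative, time-independent damage (risk) map.
damage = value * exposure * vulnerability
```

In QGIS the same elementwise product is what the raster calculator performs over the full-resolution layers.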
The results show that the highest risk is concentrated in the inhabited areas and in some portions of the valley road, consistent with the little historical information available for the site.
The QPROTO plugin is available in the Git repository of FAUNALIA (gitlab.com/faunalia/QPROTO) and can easily be used by professionals, public administrators, and managers of roads, railways or other infrastructure for land-planning purposes, or for preliminary analyses aimed at identifying the most critical zones of a wide area, on which resources and more in-depth analyses can be focused for mitigation purposes.
WebGIS publishing platforms like MapStore are usually very feature-rich, in order to cover many different scenarios, from a QGIS-like do-it-all web application to a simple interactive map for a company website.
This comes with an important trade-off: even after removing most of the functionality not needed for the simplest use cases, the cost of the platform required to run your maps can be overwhelming.
The problem here is that a single platform is not always the best choice for every kind of usage.
This talk shows how to use the popular MapStore platform as an application builder to publish interactive maps that run on a very light engine, de facto scaling a rather heavy platform down to the performance and simplicity that better suit many general, user-oriented applications and websites.
We will start by creating a map from different data sources, using MapStore, then we will export the map and publish it to our alternative light engine.
We will then highlight all the advantages this approach can offer.
Finally we will give some insights on the technical aspects of this project.
Orfeo ToolBox (OTB) is an open-source project for state-of-the-art remote sensing, made for large-scale image processing. It is written in C++ and a Python interface is available. However, the use of plain OTB in Python requires a lot of code; more than what a Python user is used to!
pyotb aims at making the use of Orfeo ToolBox easy in Python. In this talk, discover:
- how to run any OTB application in just one line of code
- how to build complex processing chains containing several applications in an intuitive way
- how to interact easily with NumPy and TensorFlow
- some pythonic features made for user convenience
- some functions written to mimic the behavior of well-known NumPy functions: pyotb.any... and counting!
OTB has an amazing pool of applications and can run on all types of computers, from resource-limited laptops to high-performance clusters. With pyotb, unleash the power of OTB in Python!
We will make you love the way you can use OTB in Python. You can find more info on the project on the pyotb repository: https://gitlab.orfeo-toolbox.org/nicolasnn/pyotb
Tables are a great way to store data, and this format is often used to make data available to the public on websites. While these tables technically meet their goal of sharing data, they do not make it easy to understand the spatial and temporal patterns in the data they contain. In this talk, I will demonstrate how an automated toolchain of web scraping and text processing in R and interactive visualization in Leaflet, automated with GitHub Actions, is applied to aid data interpretation and generate new insights from a daily-updated online tabular dataset, using the University of California Davis' Potential Worksite Exposure Reporting data for COVID-19 as a case study.
In the United States, California Assembly Bill 685 (AB685) requires employers in the state of California to notify employees of potential worksite exposures to COVID-19 at the geographic scale of individual buildings. The University of California Davis meets this requirement by listing any potential exposures on a website, giving the date reported, the dates of the potential exposure, and the building name as reported by the employee. To make a map from this data, the dates and building names had to be standardized and joined to a vector layer of campus buildings before they could be added to an interactive Leaflet map. Because the data updates daily, the whole process needed to be automated so that no one has to run the scripts every day to update the map. The result is a map that gives users a much clearer understanding of the spatial and temporal distribution of potential exposures to COVID-19 on campus.
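The standardization step is the heart of this kind of pipeline. The talk's toolchain is written in R; as a language-neutral sketch (in Python, with hypothetical formats and names, not the actual UC Davis scripts), cleaning a reported building name and date might look like:

```python
import re
from datetime import datetime

DATE_FORMATS = ("%m/%d/%Y", "%B %d, %Y", "%Y-%m-%d")  # formats assumed for illustration

def clean_building(name):
    """Normalise a free-text building name into a consistent join key."""
    return re.sub(r"\s+", " ", name).strip().title()

def parse_date(text):
    """Parse a reported date, trying a few common formats."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    return None  # leave unparseable dates for manual review

key = clean_building("  shields   LIBRARY ")   # join key for the buildings layer
when = parse_date("August 23, 2022")
```

Once every row yields a clean key and date, joining to the campus-buildings vector layer becomes a plain attribute join.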
The primary mission of the Data Operations Systems and Analytics team at NYC DOT is to support the Agency's data analysis and data product needs relating to transportation safety. The team's work producing safety analyses for projects and programs typically involves merging data from a variety of sources with collision data, asset data, and/or program data. The bulk of the analysis is performed in PostgreSQL databases, all with a geospatial component. The work necessitates ingesting input data from other databases, CSV/Excel files, and various geospatial data formats. It is critical that the analysis be documented and repeatable.
Moving data around (getting external data into the database, transforming it, geocoding it, etc.) previously occupied the bulk of the team's time, reducing capacity for the actual analysis. Additionally, the volume of one-off and exploratory analyses resulted in a cluttered database environment with multiple versions of datasets of unclear lineage and state of completeness.
Modeled on the infrastructure-as-code idea, we began building a Python library that allows us to preserve the entire analysis workflow, from data ingestion through analysis to output generation, in a single Python file or Jupyter notebook. The library began as a way to reduce friction and standardize the process of ingesting external data into the various database environments we use. It has since grown into our primary method for facilitating reproducible data analysis processes covering data ingestion, transformation, analysis, and output generation.
The library includes basic database connections and facilitates quick and easy import and export of flat files, geospatial data files, and other databases. It provides both inferred and defined schemas, allowing both quick exploration and more thoroughly defined data pipelines. It also standardizes column naming, comments, and permissions. There are built-in database cleaning and geocoding processes, and we have started building simple geospatial display functions for exploratory analysis. The code relies heavily on numpy, pandas, GDAL/ogr2ogr, pyodbc, psycopg2, shapely, and basic SQL and Python. The library is not an ORM, but it occupies a similar role geared towards analytic workflows.
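As a rough sketch of the ingestion pattern such a library standardizes (the function names and columns here are hypothetical, not the NYC DOT library's actual API), using pandas with an in-memory SQLite database for illustration:

```python
import re
import sqlite3
import pandas as pd

def standardize_columns(df):
    """Lower-case column names and replace non-alphanumerics with underscores."""
    df = df.copy()
    df.columns = [re.sub(r"\W+", "_", c.strip().lower()) for c in df.columns]
    return df

def ingest(df, table, conn):
    """Standardize a DataFrame and load it into the database with an inferred schema."""
    standardize_columns(df).to_sql(table, conn, if_exists="replace", index=False)

conn = sqlite3.connect(":memory:")
raw = pd.DataFrame({"Crash Date": ["2022-08-23"], "Borough Name": ["Queens"]})
ingest(raw, "collisions", conn)
result = pd.read_sql("SELECT crash_date, borough_name FROM collisions", conn)
```

Capturing this whole round trip in one script or notebook is what makes the analysis documented and repeatable end to end.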
The talk will discuss how the library has evolved over time, its functionality and use cases in the team's daily workflows, and where we would like to extend the functionality and open it up for contributions. While the library is not currently open source, we are actively working on creating an open version and migrating to Python 3.x. This library has greatly improved the speed and simplicity of conducting exploratory analysis and enhanced the quality and completeness of the documentation of our more substantial data analytics and research.
The library should be of interest and utility for anyone working with data without the support of a dedicated data engineering team to facilitate the collection of multiple datasets from a variety of formats, as well as anyone looking to standardize their data analysis workflows from beginning to end.
Work started at the end of 2019 to integrate MapStore as a WebGIS viewer for the geOrchestra SDI (a free, open source, modular, and interoperable Spatial Data Infrastructure born in 2009 to meet the requirements of the INSPIRE directive in Europe). The work, led by GeoSolutions, was funded by Rennes Métropole with the goal of meeting the expectations of the large geOrchestra community for a new, more ergonomic, modular, and customizable WebGIS based on updated technologies.
The project also triggered a significant evolution of the MapStore product itself, with several interesting new tools and enhancements to the MapStore framework. Thanks to this powerful integration, MapStore has significantly increased its strengths, opening the door to further, more advanced developments and evolutions. Below is a list of the main enhancements and new features that were part of the integration:
- Application Context Manager: an administrative tool designed to build and configure MapStore's viewers
- General evolutions of common existing tools in MapStore to enrich the user experience: Map viewport enhancements, CRSs management, TOC, translations, styling of layers, advanced measure tool, layer metadata, various catalog tool extensions to support additional data sources (like TMS, WFS etc), Attribute Table enhancements for the editing mode and more
- Enhancements to the MapStore security tier aimed at the integration
- Extension Manager: extensions are plugins that can be distributed as a separate package (a zip file), and be installed, activated and used at runtime in an existing MapStore installation
- MapStore Data Directory: makes the MapStore configuration and installed extensions more portable and manageable
The first integration received positive feedback from the geOrchestra community, as well as from those already using MapStore as a standalone application and from GeoNode users. Thanks to MapStore, some application flows in geOrchestra have also been strengthened and consolidated; further development after the first integration also aimed to migrate other custom tools (such as Cadastrapp and Urbanisme) to the MapStore Extensions system, and the geOrchestra community has taken its own steps in this direction with the inclusion of further custom extensions for MapStore.
This fruitful collaboration continues today, with further evolutions and developments expected for 2022, including new features and functionality for MapStore.
Publicly available data tends to be spatially aggregated to administrative units, limiting the feasibility of nuanced analyses that reflect the natural state of communities and provide actionable insights for a wide range of stakeholders. While higher resolution data is generally available within government agencies, access for external researchers is limited due to well-established privacy concerns. Inspired by our own use case of developing a regional quality of life metric for neighborhoods in Denmark, our team at Aalborg University’s Department of the Built Environment, in collaboration with data.org’s Growth and Recovery Challenge, and Data Clinic, set out to develop and open source not only foundational granular spatial units and data that adhere to privacy laws, but also the accompanying methodology that has the potential for broad applicability in other countries.
In this presentation, we will demonstrate the methodology’s generalizability, particularly across common European land use and geographical features, and show how the resulting high-resolution shape files and community data can become crucial tools for government decision-makers, community organizations, and researchers in their efforts to increase transparency and engage in practical, actionable research.
Focused initially on our Denmark use case, we algorithmically create spatial units with minimum household and population counts from country-wide hectare cell level data. Our approach uses data on road networks and administrative boundaries to create socially meaningful component polygons. This is achieved by developing tools based on already existing open source packages available in R and Python. The hectare cells are then mapped onto the polygons and clustered using the max-p regionalization algorithm with constraints on the minimum population and household counts to arrive at the final set of spatial units.
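As a much-simplified illustration of clustering under a minimum-population constraint (a toy 1D sketch of the idea, not the max-p algorithm or our actual pipeline), consider aggregating a row of hectare cells left to right until each region reaches a population threshold:

```python
def aggregate_cells(populations, min_population):
    """Greedily merge consecutive cells into regions of at least min_population.

    A toy 1D stand-in for regionalization: the real max-p algorithm works on
    2D adjacency graphs and optimises the number of regions."""
    regions, current, total = [], [], 0
    for i, pop in enumerate(populations):
        current.append(i)
        total += pop
        if total >= min_population:
            regions.append(current)
            current, total = [], 0
    if current:  # fold any underfilled remainder into the last region
        if regions:
            regions[-1].extend(current)
        else:
            regions.append(current)
    return regions

regions = aggregate_cells([40, 70, 20, 90, 10], min_population=100)
```

The constraint guarantees every output region clears the privacy threshold, which is exactly why minimum household and population counts appear in the max-p formulation above.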
To improve the accessibility of this data to not just researchers but also administrative decision-makers, community organizations, and the general public, we are developing an online tool to explore and visualize indicators within the resulting fine-grained regions such as disposable income, educational level, housing prices, migration rates, distances to public institutions, and labor market attachments in Denmark. Regional inequality in Denmark has increased over time, and with the help of this tool, we hope to provide the ability to study these key metrics both within and across municipal regions. In the development of the tool, we prioritize user feedback and common use cases to ensure both applicability and longevity.
This project has been developed with an open-source mindset by: 1) creating flexible open data resources that can adapt to a wide range of public use cases; 2) open sourcing the methodology for use in other countries/regions; and 3) enabling the use of existing open data and tools, such as OpenStreetMap, R, and Python, in the pipeline.
We firmly believe that the project has the potential to improve knowledge sharing and collaboration between GIS experts, decision-makers, researchers and the general public not only in Denmark, but also in Europe and beyond.
This is a presentation about the mapping of electricity poles in Sierra Leone using open source geospatial tools. It highlights how community members create and use data for social and economic purposes, and delves into how YouthMappers share and use their acquired skills within their chapters and communities as a whole.
The project started with a skills assessment of existing YouthMappers and the development of strategies to train new mappers in the required skills. It entailed community engagement and working with stakeholders and relevant authorities during project implementation. The presentation highlights the workflow built on open source geospatial tools and its outcomes, and discusses the levels of collaboration between different organizations working towards a common goal. It will be engaging, with questions and answers and a live showcase of tools to make it clear to participants.
Natural Earth is a public domain map dataset available at 1:10m, 1:50m, and 1:110 million scales. Featuring tightly integrated vector and raster data, with Natural Earth one can build a variety of visually pleasing, well-crafted maps with cartography or GIS software.
GeoServer GeoCSS is a CSS inspired language allowing you to build maps without consuming fingertips in the process, while providing all the same abilities as SLD.
In this presentation we'll show how we have built a world political map and a world geographic map based on Natural Earth, using CSS, and shared the results on GitHub. We'll share with you how simple, compact styles can be used to prepare a multiscale map, including:
- leveraging CSS cascading
- building styles that respond to scales in ways that go beyond simple scale dependencies
- various types of labeling tricks (conflict resolution and label priority, controlling label density, label placement, typography, labels in various scripts, label shields and more)
- quickly controlling colors with LessCSS-inspired functions
- building symbology using GeoServer's large set of well-known marks
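For a flavour of what GeoCSS looks like, here is a hedged sketch in the GeoServer CSS syntax (the colours and the assumption of a `name` attribute are illustrative, not our actual Natural Earth styles):

```
/* Countries: cascade a base polygon style, then add labels only at closer scales */
* {
  fill: #f5f0e6;
  stroke: #999999;
}

[@scale < 25000000] {
  label: [name];
  font-family: "Noto Sans";
  font-size: 11;
  font-fill: #333333;
}
```

The cascading means the scale rule only adds labelling on top of the base rule, rather than restating the whole style as SLD would require.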
Join this presentation for a relaxing introduction to simple and informative maps.
pgRouting not only lets you compute routes for any kind of transportation, from 0 wheels to 18-wheelers.
A road is closed? Calculate how traffic can be diverted.
Don't get a result? Check whether your graph is connected.
Need to deliver to several clients? The Traveling Salesperson Problem helps determine the route.
Find out what is planned for version 4.0, which is a work in progress.
Learn about the spin-off for vehicle routing problems.
pgRouting extends the PostGIS / PostgreSQL geospatial database to provide geospatial routing functionality.
Advantages of the database routing approach are:
- Data and attributes can be modified by many clients, like QGIS, through JDBC, ODBC, or directly using PL/pgSQL. The clients can be either PCs or mobile devices.
- Data changes can be reflected instantaneously through the routing engine.
- The “cost” parameter can be dynamically calculated through SQL, and its value can come from multiple fields or tables.
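A minimal sketch of what a shortest-path query looks like (`pgr_dijkstra` is the standard pgRouting function, but the `edges` table and its columns here are hypothetical). It is shown as a Python string, as it might be sent through psycopg2 or any other SQL client:

```python
# Shortest path from vertex 1 to vertex 5 over a hypothetical `edges` table
# with the columns pgr_dijkstra expects: id, source, target, cost.
SHORTEST_PATH = """
SELECT seq, node, edge, cost, agg_cost
FROM pgr_dijkstra(
    'SELECT id, source, target, cost FROM edges',
    1,    -- start vertex
    5,    -- end vertex
    directed := true
);
"""

# The inner SQL string is re-evaluated on every call, so the cost column can be
# computed dynamically, e.g. from live traffic or road-closure tables.
```

That dynamically evaluated inner query is what makes the "cost calculated through SQL" advantage above possible.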
Climate change is here. Heating, cooling, and construction are estimated to contribute 30% of the CO2 emissions of France. And yet, we don't really have a database of those buildings. We have footprints from the French National Geographic Institute, tax datasets on cadastral parcels, and many derived datasets for energy consumption and performance certificates, but all of them are far from a usable, centralized reference dataset.
The national address geolocation project (BAN) provided the key pivot database between all of them. The Scientific and Technical Center for Building (CSTB), a public industrial and commercial company, decided to dedicate effort to building a permanent reference dataset and publishing it as an open database.
The full stack uses open source technologies (pandas/GeoPandas, PostGIS, Apache Spark, MLflow, QGIS, MapLibre, ...) on massive datasets (21 million buildings, >400 descriptors). It allows analyses and predictions to be run for all climate change related indicators, such as the relation between housing prices and energy performance, heat wave impact, solar potential, etc.
As the first versions are now published, the next challenges are:
- make the data easier to reuse
- push towards an official common identifier for each building, housing unit, and parcel, through the BatID project and the Etalab open government initiatives
- enrich the dataset with new statistics and predictions twice a year
- consolidate its economic rationale to make it viable in the long run
This talk will also show cool dataviz and geoviz material for a geo-nerd audience :)
When natural disasters such as floods and earthquakes occur, much work is required for recovery and reconstruction. Depending on the scale of the disaster, it is often not enough for the government and local governments to do the work alone. Disaster relief through volunteer activities by many individuals and organizations is also necessary.
In this kind of work with diverse participants, it is very important to collect and analyze information, plan the work, and divide the work among the various members.
It would be very beneficial to have computer technology available to survey disaster situations and to collect, consolidate, and share data.
It would also be useful to be able to use the smartphones that many people own for gathering and sharing information.
Japan's coordinating body for volunteer organizations in disasters (Japan Voluntary Organizations Active in Disaster, JVOAD) and its member organization, the IT Disaster Assistance and Response Team, developed a smartphone-based disaster information collection system in 2019 that has since been used to collect information at many flood sites.
This system transmits disaster information collected on smartphones and consolidates it into a Google spreadsheet.
An API has also been created to distribute the information collected in the spreadsheet in GeoJSON format. Using this API, the disaster information can also be used on web maps and in QGIS.
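The general pattern of serving spreadsheet rows as GeoJSON can be sketched roughly as follows; the field names (`lat`, `lon`, `damage`) are hypothetical and not the system's actual schema:

```python
import json

def rows_to_geojson(rows):
    """Convert spreadsheet-like rows (dicts with lat/lon) to a GeoJSON FeatureCollection."""
    features = []
    for row in rows:
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON coordinate order is [longitude, latitude]
                "coordinates": [row["lon"], row["lat"]],
            },
            # Everything except the coordinates becomes feature properties
            "properties": {k: v for k, v in row.items() if k not in ("lat", "lon")},
        })
    return {"type": "FeatureCollection", "features": features}

rows = [{"lat": 35.68, "lon": 139.76, "damage": "flooded road"}]
doc = json.dumps(rows_to_geojson(rows))
```

A document produced this way can be loaded directly by web map libraries or dragged into QGIS as a GeoJSON layer.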
In this presentation, I will describe the status of the creation and use of this system.
The Covid-19 pandemic has had many impacts beyond health - economic, social, etc. The Cities Covid Mitigation and Mapping (C2M2) project, from the US Department of State's MapGive initiative, sought to map and help direct policy around these secondary impacts of Covid in several countries globally. Map Kibera and GroundTruth Initiative worked to track these impacts in Nairobi, focusing on the themes of education, water and sanitation.
This talk will present the outcomes of the project, which focused on the mapping in OSM of schools, water points, and toilet facilities in the informal settlements of Kibera and Mathare. These updates to existing OSM data help show how the pandemic affected these sectors by looking historically at changes. Additionally, individual surveys about access to water during shortages and impacts of school closings and disruptions help paint a picture of how Nairobi's lower income residents have been particularly impacted by the pandemic. There is also a strong gender component to the impacts which will be highlighted.
The project used a combination of tools, which will also be presented: Kobo Toolbox for mapping and individual survey collection, OSM for map data, and data analysis in QGIS. The Kenya team was supported by many other team members from the C2M2 project for data analysis. Additionally, participants in Africa included Bukavu in the DRC and Pemba in Mozambique; we will briefly share their map outcomes as well. The "Africa Hub" which included Nairobi, Pemba and Bukavu showed that across the continent, economic and social impacts of Covid-19 on vulnerable groups were particularly challenging.
GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping, as well as to process data, either in batch or on the fly.
GeoServer powers a number of open source projects like GeoNode and geOrchestra and it is widely used throughout the world by organizations to manage, disseminate and analyze data at scale.
This presentation provides an update on our community as well as reviews of the new and noteworthy features for the latest releases. In particular, we will showcase new features landed in 2.20 and 2.21, as well as a preview of what we have in store for 2.22 (to be released in September 2022).
Attend this talk for a cheerful update on what is happening with this popular OSGeo project, whether you are an expert user, a developer, or simply curious what GeoServer can do for you.
Lizmap is an open source server application to publish QGIS projects on the web, with no coding skills needed.
It uses QGIS Server in the backend, so users get the same rendering on the web as in QGIS Desktop.
QGIS Server and Lizmap read the QGIS project to publish layers with their legend, forms, print layouts, layer relationships... Additional Lizmap configuration can be added to enable dataviz capabilities, decide whether or not to publish the attribute table, or configure the feature filter form. No coding skills are required; all the configuration is done through the QGIS Desktop user interface.
The QGIS project is adapted for web browsers and has a responsive UI. Lizmap includes access control lists at different levels, such as project, layer or even individual features.
The goal of this presentation is to show the state of this open source project hosted on GitHub and to explain the roadmap.
Globally, the population living in urban areas is increasing with a strong impact on land use patterns, particularly on the availability and use of green spaces. The impact of green spaces is beneficial to health, for example, by reducing mortality or improving mental health. These effects are also related to different ecosystem services provided by green spaces, such as regulating temperature, modifying air pollution and noise levels, and offering more opportunities for physical activity.
GreenUr is a plugin for QGIS that aims at putting together knowledge and information on the impacts of green space on health. It is developed as a prototype representing a work in progress coordinated by the World Health Organization (WHO) to provide an educational tool that introduces the relation between green spaces, health, and well-being and raises awareness of the importance of green spaces in cities globally. The tool can also be used as a ‘quickscan’ for urban spatial planners who would like to explore the possible effects of current and new green space designs. The plugin has been tested with different experts and locations, and it will be downloadable via the QGIS Plugin Manager and from the project website.
The GreenUr tool allows the users to estimate the impacts of green spaces on health in a given population. The main questions addressed by the current version of the GreenUr prototype are the following:
- How much green space is available for the population of a specific city?
- What are the pathways through which green spaces relate to health?
- Where within a city are health-related benefits of green spaces the largest?
- What are hypothetical alternative land-use scenarios for green spaces?
- What would be the magnitude of the change in health impacts if urban green spaces were changed in the future?
All calculations performed by GreenUr are based on methodologies established by social, environmental, and epidemiological studies identified by WHO. The computational backend is GRASS GIS, together with other processing methods available in QGIS. The plugin runs on any common operating system and offers a demo database.
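As a toy illustration of the first question above (green space availability per inhabitant), here is a minimal sketch; the numbers are invented and this is not GreenUr's actual methodology, which relies on GRASS GIS spatial analysis:

```python
# Green space available per inhabitant (toy numbers, not GreenUr's method)
green_area_m2 = 1_200_000   # total green space in the city
population = 150_000

green_per_capita = green_area_m2 / population  # m2 per inhabitant

# Urban-greening literature often cites a minimum on the order of 9 m2
# per inhabitant; here we simply compare against that commonly cited figure.
meets_threshold = green_per_capita >= 9.0
```

The plugin performs this kind of comparison spatially, so the answer varies across neighbourhoods rather than being a single city-wide number.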
Giswater (www.giswater.org) is an open source software aimed at being a corporate tool for water utilities, with which to manage network assets in an excellent way while keeping those assets ready for hydraulic simulation, a capability known today as having a digital twin of the network assets.
Technologically, it uses a set of Open Source technologies such as EPANET, SWMM, PostgreSQL, PostGIS or QGIS, all of them mature and proven, which give it a very powerful base for growth and consolidation.
Its 'database centric' architecture gives it enormous potential: maintenance operations (network outages) can be managed in an integrated way, longitudinal profiles can be generated, and events can be inventoried, among other features.
It has a data model with a dual-face architecture, which allows full integration of inventory and hydraulic model data, both for drinking water networks (https://github.com/Giswater/giswater_dbmodel/wiki/epanet-dual-dbmodel) and for urban drainage and sanitation networks (https://github.com/Giswater/giswater_dbmodel/wiki/swmm-dual-dbmodel), giving the modeler full flexibility to work with hydraulic capacities without any impact on the inventory data of each asset item.
The EPA file export module applies certain "on the fly" transformations to make the two different geometries (remember the dual face) of the inventory elements compatible, both for EPANET (https://github.com/Giswater/giswater_dbmodel/wiki/epanet-on-the-fly-transformations) and for SWMM (https://github.com/Giswater/giswater_dbmodel/wiki/swmm-on-the-fly-transformations).
It allows you to work with different scenarios to create different modeling conditions, in order to check the worst case or how the network will respond in future situations. For water supply networks it is possible to work with demand scenarios (https://github.com/Giswater/giswater_dbmodel/wiki/epa-demand-scenarios), and for urban drainage projects with DWF scenarios (https://github.com/Giswater/giswater_dbmodel/wiki/epa-dwf-scenarios) and hydrological scenarios (https://github.com/Giswater/giswater_dbmodel/wiki/epa-hydrology-scenarios).
Additionally, it allows working with alternatives to plan new network elements without interfering with the asset inventory. By creating an alternative (https://github.com/Giswater/giswater_dbmodel/wiki/masterplan-capabilities) you can modify the physical reality of the network without affecting the real assets at all.
Another especially interesting feature is collaborative work. Until now, large hydraulic engineering projects have been worked on sequentially or in fragments, but not collaboratively. Thanks to Giswater it is now possible to work on projects in a truly collaborative way, given the inherent multi-user nature of the databases on which the project pivots.
The project was born seven years ago, and version 3.5 incorporates interesting novelties, among which the following stand out: a complete refactor of the Python code, new hydraulic model capabilities with multi-scenario management, and usability improvements to numerous tools such as dynamic zoning or the info tool.
The open data movement has been very active and trendy lately. Many solutions have brought fresh air into the metadata ecosystem. Nevertheless, no one has really pushed forward the confluence of the open data world and the geo metadata world (often powered by ISO or INSPIRE standards).
In practice, many organizations still use both systems, which leads to confusion for end users: data is duplicated, metadata is harvested in both directions, and several websites serve the same goal. Overall, this split does not make it easy to find your data.
It can also give headaches to platform administrators, developers and architects who try hard to keep all the catalogs synchronized.
Based on this analysis, we are convinced that a definitive solution could take advantage of both ecosystems. Complex ISO standards, INSPIRE rules and lightweight open data schemas can co-exist in the same catalog. Great new ideas like quick data visualization or dataviz widgets can be supplied for any kind of data. The datahub literally came out of the need to centralize any kind of public dataset within the same platform.
Conceived as a backend-API-agnostic solution, the datahub's first implementation is based on the GeoNetwork 4 API, with an Elasticsearch backend.
The search is fast, accurate, multilingual and customizable. The solution has been designed from use cases: how do you want to help end users find, use and value their data? It brings a new experience to old-fashioned INSPIRE catalogs and aims to embrace modern challenges such as community, votes, favorites, publishers, dataset usage, dataviz and so on.
Leveraging the technical challenges of merging open data and metadata, the datahub emphasizes a pure, intuitive and fluent user experience.
GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping. It powers a number of open source projects like GeoNode and geOrchestra and it is widely used throughout the world by organizations to manage and disseminate data at scale.
What can you do with GeoServer? This visual guide introduces some of the best features of GeoServer, to help you publish geospatial data and make it look great!
GeoServer has grown into an amazing, capable and diverse program - attend this presentation for:
A whirl-wind tour of GeoServer and everything it can do today;
A visual guide to some of the best features of GeoServer;
Our favourite tricks we are proud of!
New to GeoServer - attend this talk and prioritize what you want to look into first. Expert users - attend this talk and see what tricks and optimizations you have been missing out on.
Forest disturbance can have a significant impact on the hydrologic regime and health of watersheds, aquatic habitats, and their overall ecological functions. Although these impacts can vary as a result of physical and hydroclimatic conditions in watersheds, over the past decades a simple metric known as equivalent clearcut area (ECA) has emerged to quantify the cumulative disturbance at any point in time in a watershed, accounting for the temporal dynamics, including recovery, of historical disturbance.
ECA is widely used as an indicator to quantify forest disturbances, as it not only covers all disturbance types but also considers the subsequent recovery of the disturbed areas through space and time. An ECA coefficient of 100% means the replanted area shows no hydrological recovery yet; conversely, an ECA of 0% means the disturbed area has reached its full recovery potential.
The prime objective of this project was to generate an annual time series of ECA for every watershed within the area covered by the Nadina Natural Resource District in British Columbia, Canada. The disturbance types identified in this project were forest fire, timber harvest, pest infestation, and permanent infrastructure development. The time and location of these disturbances were integrated with the BC Freshwater Atlas, in order to provide quantitative estimates of ECA for the watershed associated with stream reaches in the study area.
Due to the cumulative nature of ECA over space and time, all types of forest disturbances have to be combined and recovery factors must be applied to account for vegetation and hydrological recovery over time. Previous field research was used to find the relationship between the tree stand height and the age following logging.
In order to apply the methodology, open source data and tools were used. We used two publicly available global satellite image derived raster products of forest change for our initial disturbance data source. We updated the forest change raster products using GDAL with local public geospatial datasets to assign change types for each year from 1985-2020, creating a unified multiband raster of change. The multiband raster was processed within Python, using GDAL and NumPy libraries, and recovery factors were applied to each pixel dynamically based on the year and type of change. The result of this processing was a collection of time series for watersheds broken down by year and type of change present.
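The per-pixel recovery step can be sketched in NumPy roughly as follows; the disturbance codes, the linear recovery curve, and the pixel size are invented placeholders, not the study's calibrated factors:

```python
import numpy as np

# Toy 3-band "change" stack: value = disturbance type code (0 = none).
# Bands correspond to years 2018, 2019, 2020 for a 2x2 pixel window.
change = np.array([
    [[1, 0], [0, 0]],   # 2018: harvest at pixel (0, 0)
    [[0, 2], [0, 0]],   # 2019: fire at pixel (0, 1)
    [[0, 0], [0, 0]],   # 2020: no new disturbance
])
years = np.array([2018, 2019, 2020])
current_year = 2020

# Invented linear recovery: the ECA coefficient drops 10 %/year from 100 %
def eca_coefficient(age_years):
    return np.clip(1.0 - 0.1 * age_years, 0.0, 1.0)

# Accumulate the ECA contribution of every disturbed pixel across all years
pixel_area_ha = 0.09  # e.g. a 30 m Landsat pixel
eca_ha = 0.0
for band, year in zip(change, years):
    disturbed = band > 0
    eca_ha += disturbed.sum() * pixel_area_ha * eca_coefficient(current_year - year)
```

In the actual workflow the recovery factor additionally depends on the disturbance type, and the per-pixel totals are aggregated by watershed.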
The ECA results for the study area were loaded into PostgreSQL for delivery to a web application that allows a user to compare a location’s ECA with the forest disturbance input. The application relies on Mapbox GL to interact with a web map and the spatial display of the forest disturbance locations. The D3 and Crossfilter JS libraries were used to interact with the disturbance data, allowing a user to filter histograms based on a variety of attributes, which are reflected in the histograms and web map in real time. ECA time series data is displayed in interactive histograms and line charts.
Water and Sanitation Corporation (WASAC) has been mapping the rural water supply systems of Rwanda since 2018. WASAC conducted the data collection using QGIS, QField and PostGIS across the whole country, and all GIS data is now available as open data and visualized on a public website using Mapbox Vector Tiles. By continuously updating and utilizing this GIS data, WASAC is working toward the universal access to water targeted by SDG Goal 6.
We are developing the GIS system as open source, and all source code was developed on GitHub in the WASAC organization repositories, in collaboration with the United Nations Vector Tile Toolkit team. Our approach uses low-cost technologies, which are more sustainable in low- and middle-income countries. All our data is also available on OpenAFRICA.
Our achievements were presented at the previous FOSS4G global conferences in Bucharest (2019) and Buenos Aires (2021). At FOSS4G 2022, we would like to update the community on the current status of the GIS system.
GeoCat Bridge for QGIS is a Python plugin that enables users to publish map layers as OGC data services (WM(T)S/WFS/WCS) to GeoServer or MapServer. It can also publish layer metadata to the GeoNetwork spatial catalog (CSW), linking service to metadata and vice versa, so that users can easily bind to a service from a catalog search result or find the relevant metadata for an exposed dataset.
Bridge can also export metadata, symbology and geodata to local files, so you can modify them and/or upload them manually.
Since its first official release at FOSS4G in Bucharest (2019), GeoCat has been gradually improving the plugin. One of the most requested and anticipated changes to Bridge for 2022 relates to GeoServer workspace publication. The upcoming major release involves some significant UX changes, which will allow more control over a workspace. For example, users will soon be able to add (or overwrite) single layers in an existing workspace, whereas in older versions all workspace data was removed prior to publication. We would like to take the opportunity to discuss the upcoming release, highlighting this and other new features and improvements.
Hey everyone, my name is Saheel Ahmed. I work as a senior data scientist at Blue Sky Analytics. We are a climate tech startup primarily focused on creating environmental datasets for better monitoring and climate risk assessment for various stakeholders across the globe. To achieve this, we are leveraging the potential of geospatial analytics by creating a catalogue of comprehensive and accurate climate data to drive sustainable decision-making. And all this is only possible because of the open-source tools & knowledge made publicly available by the good folks organising this event.
Greenhouse gas (GHG) emissions from biomass burning (which includes the combustion of forests, savannas, and croplands) play an important role in regional air quality, global climate change, and human health. In 2021, every continent except Antarctica witnessed major wildfires. These enormous blazes, some the size of a small country, aren’t just destroying native forests and vulnerable animal species. They’re also releasing billions of tons of greenhouse gases into the atmosphere, potentially accelerating global warming and leading to even more fires. Accurate assessment of biomass burning emissions is paramount to understanding and modelling global climate change.
By combining open-source tools with geospatial data, we present a global dataset that estimates the total GHG emissions due to biomass burning globally. We achieved this by linking satellite-based fire observations, aerosol optical depth (AOD), and vegetation type (based on land cover classification) to directly estimate how much carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) were emitted from each fire. We conducted further analysis of the estimated emissions by comparing our estimates with existing datasets from NASA's global fire emissions database and ESA's Copernicus global fire assimilation system. Overall, our estimates agree well with both of these sources, with R2 scores of 0.91 and 0.71 and MAE scores of 9 and 14 MtCO2e/yr against GFEDv4.1s and GFASv1.2 respectively, across 245 nations between 2015-2020. The dataset includes country-level estimates of gross GHG emissions across different vegetation types such as forest, cropland, shrubland, and grassland for the last 5 years.
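The agreement metrics quoted above can be computed along these lines; the country-level numbers below are made up for illustration and are not our actual estimates:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

# Made-up country-level emissions (MtCO2e/yr) vs a reference inventory
reference = np.array([10.0, 50.0, 120.0, 300.0])
estimate  = np.array([12.0, 48.0, 115.0, 310.0])

r2 = r2_score(reference, estimate)
err = mae(reference, estimate)
```

In the actual validation, these metrics were computed per reference inventory (GFEDv4.1s and GFASv1.2) over 245 nations and the 2015-2020 period.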
The dataset is currently a work in progress, as we aim to add more features such as coverage of other landcover types and ground-truth alternatives. The dataset and its documentation are available at https://github.com/blueskyanalytics/get-started. The dataset is also our contribution to the global coalition Climate TRACE (https://www.climatetrace.org/), an independent group monitoring & publishing GHG emissions across different sectors.
Due to its SQL-based engine, the OpenSource Software pgRouting is the most flexible routing engine available.
A common misconception is that pgRouting is the least performant routing engine.
So how do you keep both performance and flexibility with pgRouting?
Many factors should be taken into account.
To begin with, the use cases.
What level of precision do you need?
Will you be computing short or long routes?
Will many routes be computed at the same time?
Simplification is the most common way to deal with performance issues.
However, when accuracy is at the core of decision making, a minimum level of precision must be kept.
Reducing the number of rows the routing engine has to process is the number one tip to improve performance.
But there are many other technical and functional optimizations that can make pgRouting run much faster.
We will look at some choices we had to make in various projects.
We will see how the data itself is the first key to optimization, but also how to help the routing engine make the best of it: which algorithms are best suited to which use cases, and how fine-tuning the database can help too.
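As one concrete example of the row-reduction tip, the edge query handed to pgr_dijkstra can be restricted to a bounding box around the area of interest, so the graph built in memory stays small. The table name, columns, coordinates and vertex ids below are illustrative, not from a specific project:

```sql
-- Route between two vertices, loading only edges whose geometry
-- intersects an expanded bounding box (illustrative names and values)
SELECT * FROM pgr_dijkstra(
  'SELECT id, source, target, cost, reverse_cost
     FROM ways
    WHERE the_geom && ST_Expand(
            ST_MakeEnvelope(2.25, 48.80, 2.45, 48.95, 4326), 0.05)',
  1234,   -- start vertex id
  5678    -- end vertex id
);
```

The trade-off is that a too-tight box can cut the optimal path, so the expansion margin must be chosen according to the expected route length.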
Geological data usually suffer from very low visibility because they are specialized data that are accessible to only a few people and can only be visualized and processed using special software. The swissgeol.ch portal, newly launched by swisstopo (Swiss Federal Office of Topography), aims to change this by making the data accessible on the Internet in a low-threshold, simple way through web-based 3D visualization, built on and promoted with open-source technology and code.
swissgeol.ch is a web application for the visualization and analysis of geological sub-surface data. It has been publicly available at https://viewer.swissgeol.ch since 2020, and the open source code can be downloaded at https://github.com/swissgeol/ngm.
In addition to the geo-portal of the Swiss Confederation (https://map.geo.admin.ch), which focusses on 2D spatial data, swissgeol.ch extends its functionality to 3D data above, on and below the surface. For this, it relies on 3D visualization on the web, which is based on CesiumJS, and offers numerous expert tools.
CesiumJS is the most widespread open source 3D globe library and is used worldwide in many different applications. It not only visualizes large-scale global data, but also very detailed data at the local scale, such as buildings in the 3D view of map.geo.admin.ch.
With the development of swissgeol.ch, an underground navigation option was developed in CesiumJS for the first time, which allows the visualization of 3D objects below the terrain. In addition to navigating underground, it is also possible to see through the earth's surface using transparency settings, as well as to slice the 3D-scene vertically.
With the use of 3D tiles and precise terrain (2m precision), the data is delivered in an optimized format for the web. At the same time, the download of original data of entire layers or individual objects in the layer is offered.
After the positive echo of last year’s talk, this year we will present selected features and data in greater detail and introduce recent developments: a new user experience and a dedicated user space for projects, as well as new data and formats, e.g. voxel data using the new Cesium 3D Tiles Next specs.
I am the Founder and CTO of Blue Sky Analytics, a Climate-Tech Startup using satellite-derived climate intelligence to power financial decisions. We provide datasets through an API spanning flood, drought, wildfire and heat risk, for monitoring, measuring and mitigating climate risk, which can be leveraged for various use-cases.
In 2 years, we have analyzed TBs of data, delivered 5 datasets & built platforms for data visualisation and distribution from scratch using FOSS technology. This has been a rocket-ship of a journey, chasing our mission of building a ‘Bloomberg for environmental data’.
However, the not-so-secret sauce to achieving these milestones has been FOSS. We are often asked how we procure raw geospatial data and how much we spend on it. Thanks to the abundance of open data, our data acquisition cost has been 0. Due to the generous open data policy of amazing organisations like NASA & ESA, we have been able to build a business collecting TBs of data daily & crunching them into useful insights.
This helped us scale our vision to build a global environmental data stack for tracking climate change in real-time. Moreover, before this data can be applied to climate mitigation, it needs to be analysed. This is true for any big data and today, less than 1% of global data is analysed.
Given that satellite data is the most significant source of tracking climate variables, it became imperative to tap this source. We discovered that the path to providing environmental datasets was by building a powerful geospatial data refinery along with SpaceTime™ and our dev portal.
There was limited infrastructure available to support the delivery of geospatial datasets, so we built it, leveraging open-source tools like Postgres, QGIS, GDAL, k8s etc.
While we have proprietary layers to our models, as a team of young developers, data scientists and designers, many self-taught, our cultural ethos stands firmly with FOSS and we plan to be a leading contributor to FOSS for climate action. The most significant step for us in that direction has been providing annual, country-wise data on biomass emissions to the Al Gore-led Climate TRACE inventory that can be used by the public via CC-BY-4.0 framework.
As our business expands, we aim to open-source tools & innovations we have internally developed while building our infrastructure and have started that journey with the Raster Playground.
Climate change is the most pressing challenge of our times, throwing at us various questions that need to be answered. This is not possible without data. Data helps us understand the problem and quantify the risk to various assets.
Open source tools and data have made it possible for 20-year-old data scientists to access sophisticated satellite data to understand the changing planet and answer these questions. BSA is a testament to the fact that fighting climate change is not possible without FOSS.
The OGC APIs are a fresh take on geospatial APIs, based on Web API concepts and modern formats, including:
Small core with basic functionality, extra functionality provided by extensions
JSON first, while still allowing data to be provided in other formats
No mandate to publish schemas for data
Improved support for data tiles (e.g., vector tiles)
Specialized APIs in addition to general ones (e.g., DAPA vs OGC API - Processes)
Full blown services, building blocks, and ease of extensibility
This presentation will provide an introduction to various OGC APIs and extensions, such as Features, Styles, Maps and Tiles, DAPA, STAC and CQL2 filtering.
While some have reached a final release, most are still drafts: we will discuss their trajectory toward official status, review how well the GeoServer implementation is tracking them, and show examples based on the GeoServer HTML representation of the various resources.
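For orientation, the resource layout shared by several of these APIs looks like this (shown here for OGC API - Features; `{collectionId}` is a placeholder):

```
GET /                                   landing page with links
GET /conformance                        conformance classes implemented
GET /collections                        list of available datasets
GET /collections/{collectionId}         description of one dataset
GET /collections/{collectionId}/items   the features, e.g. ?limit=10&bbox=...
```

The other APIs (Styles, Maps, Tiles, Processes) reuse the same small-core, link-driven pattern with their own resource types.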
Mapillary is the platform that makes street-level images and map data available to scale and automate mapping. There are many tools available within Mapillary’s ecosystem, as well as many real world use cases where Mapillary can have an impact. In this talk, we will give an overview of the state of the Mapillary platform in 2022. This will include a look at compatible camera devices, upload methods, data and imagery management, download methods, integrations, and stories about users who apply Mapillary to solve a challenge.
You should walk away from this talk knowing how you want to use Mapillary to improve maps important to you, and what tools you need to get started.
If you are interested in improving OpenStreetMap, contributing to open data, capturing imagery in your community, or leveraging Mapillary street-level imagery and GIS data into your professional work, this talk is for you. No coding or technical experience is necessary, and the tools and features available can be adapted to any skill level. Join us!
Geospatial datacubes--large, complex, interrelated multidimensional arrays with rich metadata--arise in analysis-ready geospatial imagery, level 3/4 satellite products, and especially in ocean / weather / climate simulations and [re]analyses, where they can reach Petabytes in size. The scientific python community has developed a powerful stack for flexible, high-performance analytics of datacubes in the cloud. Xarray provides a core data model and API for analysis of such multidimensional array data. Combined with Zarr or TileDB for efficient storage in object stores (e.g. S3) and Dask for scaling out compute, these tools allow organizations to deploy analytics and machine learning solutions for both exploratory research and production in any cloud platform. Within the geosciences, the Pangeo open science community has advanced this architecture as the “Pangeo platform” (http://pangeo.io/).
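The core of this data model can be illustrated in a few lines of Xarray; the array below is synthetic, not an actual Pangeo dataset:

```python
import numpy as np
import xarray as xr

# Synthetic "datacube": sea surface temperature over time, lat, lon
sst = xr.DataArray(
    np.random.default_rng(0).normal(15.0, 2.0, size=(12, 4, 5)),
    dims=("time", "lat", "lon"),
    name="sst",
)
ds = xr.Dataset({"sst": sst})

# Label-aware reduction: climatological mean over the time dimension
clim = ds["sst"].mean(dim="time")

# A pipeline would typically end by writing the result to cloud storage
# in Zarr format, e.g. (illustrative target, not run here):
# ds.to_zarr("s3://bucket/sst.zarr")
```

Because reductions are expressed by dimension name rather than axis index, the same code scales unchanged from this toy array to petabyte-scale cubes chunked with Dask.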
However, there is a major barrier preventing the community from easily transitioning to this cloud-native way of working: the difficulty of bringing existing data into the cloud in analysis-ready, cloud-optimized (ARCO) format. Typical workflows for moving data to the cloud currently consist of either bulk transfers of files into object storage (with a major performance penalty on subsequent analytics) or bespoke, case-by-case conversions to cloud optimized formats such as TileDB or Zarr. The high cost of this toil is preventing the scientific community from realizing the full benefits of cloud computing. More generally, the outputs of the toil of preparing scientific data for efficient analysis are rarely shared in an open, collaborative way.
To address these challenges, we are building Pangeo Forge ( https://pangeo-forge.org/), the first open-source cloud-native ETL (extract / transform / load) platform focused on multidimensional scientific data. Pangeo Forge consists of two main elements. An open-source python package--pangeo_forge_recipes--makes it simple for users to define “recipes” for extracting many individual files, combining them along arbitrary dimensions, and depositing ARCO datasets into object storage. These recipes can be “compiled” to run on many different distributed execution engines, including Dask, Prefect, and Apache Beam. The second element of Pangeo Forge is an orchestration backend which integrates tightly with GitHub as a continuous-integration-style service.
We are using Pangeo Forge to populate a multi-petabyte-scale shared library of open-access, analysis-ready, cloud-optimized ocean, weather, and climate data spread across a global federation of public cloud storage–not a “data lake” but a “data ocean”. Inspired directly by the success of Conda Forge, we aim to leverage the enthusiasm of the open science community to turn data preparation and cleaning from a private chore into a shared, collaborative activity. By only creating ARCO datasets via version-controlled recipe feedstocks (GitHub repos), we also maintain perfect provenance tracking for all data in the library.
You will leave this talk with a clear understanding of how to access this data library, craft your own Pangeo Forge recipe, and become a contributor to our growing collection of community-sourced recipes.
QGIS and PostGIS are now renowned as one of the best combinations to set up a GIS application.
The support for PostgreSQL and PostGIS in QGIS has grown very mature and offers great features for working with geographic data stored in your database. It allows you to create powerful business applications easily without any advanced programming skills: only plain SQL and a well-configured QGIS.
This presentation will showcase some of the interesting features you should be aware of in order to build the perfect GIS user application. I'll explain how to:
- Properly configure relations between layers/tables,
- Enable transactions (and whether or not you should do it),
- Communicate from PostgreSQL to QGIS using notifications,
- Deal with triggers using data dependencies,
- Store your QGIS project and style information in a PostGIS database,
- Output your processing result directly to your database,
- Manage your database directly from the browser.
These key features will be demonstrated along with a concrete use case.
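As a sketch of the notification mechanism mentioned above: a database trigger can call `pg_notify`, and QGIS layers configured to listen on the channel can refresh automatically. The table, function and channel names below are hypothetical; this is plain PostgreSQL, not project code.

```sql
-- Fire a notification whenever the (hypothetical) roads table changes;
-- a QGIS layer set to refresh on notification picks it up.
CREATE OR REPLACE FUNCTION notify_qgis() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('qgis', 'roads_updated');
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER roads_changed
AFTER INSERT OR UPDATE OR DELETE ON roads
FOR EACH STATEMENT EXECUTE FUNCTION notify_qgis();
```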
Access to long-term biodiversity datasets is vital for monitoring, managing, and protecting freshwater ecosystems. Detecting critical ecosystem changes, such as losing unique biodiversity and ecosystem services, is dependent on access to data. A wealth of biodiversity data exists for river ecosystems in South Africa, but an operational information system to access these data is currently not available. To address this knowledge gap, the Freshwater Biodiversity Information System (FBIS) has been developed. FBIS is a platform for hosting, visualizing, and sharing freshwater biodiversity information for South African rivers. The project seeks to mobilize baseline biodiversity data and import it into the system, identify strategic long-term monitoring sites, and train key organizations in how to use the information system. Using map-based visualizations, user-friendly dashboards and rapid data extraction capabilities, the system will improve knowledge of freshwater biodiversity and long-term river health trends, thereby supporting better-informed river management decisions and conservation planning projects.
We discuss the "Diet Hadrade" codebase, which provides an open-source, lightweight mechanism for leveraging remote sensing imagery and machine learning techniques to aid in humanitarian assistance and disaster response (HADR) in austere environments.
In a disaster scenario (be it an earthquake or an invasion) where communications are unreliable, overhead imagery often provides the first glimpse into what is happening on the ground. The rapid identification of both vehicles and road networks directly from overhead imagery allows a host of problems to be tackled, such as congestion mitigation, optimized logistics, evacuation routing, etc. Such challenges often arise in the aftermath of natural disasters, but are also present in crises like the current invasion of Ukraine where roads are choked with civilians fleeing the fighting.
Automobiles provide an attractive proxy for human population due to their mobile nature and the necessity of population movement in many disaster scenarios. In this project, we deploy the YOLTv5 computer vision object detection codebase to rapidly identify and geolocate vehicles over large areas. Vehicle detections yield significantly greater utility when combined with road network data. We use the CRESI computer vision framework to extract up-to-date road networks with travel time estimates, thus permitting optimized routing. The CRESI codebase is able to extract roads using only overhead imagery, so flooded areas or obstructed roadways will sever the CRESI road graph; this is crucial for post-disaster scenarios where existing road maps may be out of date and the route suggested by cloud navigation services may be impassable or hazardous.
Diet Hadrade provides a number of graph theory analytics that combine the CRESI road graph with YOLTv5 locations of vehicles. We combine the car detections with the road network to infer how congested certain areas are. Congestion information is important for everyday life, but also crucially important in disaster response scenarios when roads may become impassable due to both natural phenomena as well as traffic.
We leverage the detailed road graph and vehicle location information to illustrate a number of scenarios, such as: bulk evacuation, optimal aid disbursement locations, critical intersections, and detection and automated avoidance of dangerous locales. These capabilities are presented in an interactive dashboard that computes optimal routes on the fly based on user inputs.
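As a sketch of the kind of routing analytic described (not the CRESI/Diet Hadrade code itself), shortest-time routing over a travel-time-weighted road graph can be done with Dijkstra's algorithm; node names and times below are made up:

```python
import heapq

def dijkstra(graph, src, dst):
    """Minimum travel time on a directed weighted graph.

    graph: {node: [(neighbor, travel_time_seconds), ...]}
    Assumes dst is reachable from src.
    """
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # reconstruct the route back from the destination
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Toy road graph: severing edge B->C (e.g. a flooded road) would
# force the longer detour via D.
g = {"A": [("B", 60)], "B": [("C", 60), ("D", 30)], "D": [("C", 90)]}
```

Congestion from the vehicle detections can then be folded in simply by scaling each edge's travel time before running the search.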
Chestnut stands in Piemonte are presently suffering a severe decline due to the concurrence of climatic and silvicultural factors. A project funded by the Piemonte region, involving the Regional Chestnut Centre and IPLA S.p.A., started in 2018 with the aim of defining technical guidelines for proper interventions in declining stands. The present contribution deals with the activities of spatial monitoring of declining areas through satellite image interpretation and GIS analysis, making use of QGIS and GRASS tools. The methodological approach was based on the selection of Sentinel-2A and 2B images taken at the beginning and at the end of the summer season in 2017, 2018 and 2019 over a test area. Those images were then processed by calculating several indexes with raster functions implemented in QGIS. The NDWI (Normalized Difference Water Index, (B8 - B12) / (B8 + B12)) proved the most sensitive to the presence of declining stands.
Accurate mapping of areas suffering different degrees of damage over the whole region was then carried out, starting from a preliminary analysis of experimental parcels surveyed in the field.
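The index computation itself is straightforward; a minimal sketch using the formula given in the abstract, with illustrative reflectance values rather than project data:

```python
def ndwi(b8: float, b12: float) -> float:
    """Normalized Difference Water Index from Sentinel-2 bands:
    NDWI = (B8 - B12) / (B8 + B12).
    Drier, declining canopies tend toward lower values."""
    return (b8 - b12) / (b8 + b12)

healthy = ndwi(0.40, 0.15)    # well-watered canopy (made-up reflectances)
declining = ndwi(0.30, 0.25)  # water-stressed canopy (made-up reflectances)
```

In QGIS the same expression is applied per-pixel with the raster calculator over the B8 and B12 bands.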
As climate change progresses, we are experiencing an increase in the frequency and severity of extreme weather events in many parts of the world. Climate models predict the frequency and severity of these weather events to continue to increase in the future as surface air temperatures rise.
In 2021, the Canadian province of British Columbia (BC) experienced one of the most severe fire seasons on record which destroyed communities and ecosystems across the province. In the same year, an “atmospheric river” precipitation event led to widespread flooding causing severe damage to roads and communities across BC. There is a correlation between severe wildfires and increased runoff following precipitation events in some regions.
There is a need for better prediction, monitoring, and management of fire and flood events to mitigate the damages caused by post-wildfire flooding. Remote sensing data and analysis techniques play a key role in monitoring climate-related natural disasters and helping understand and mitigate risks to communities, ecosystems, and infrastructure in areas that may be exposed to flooding. Free remote sensing datasets along with free and open source software can greatly reduce the costs and increase availability of this monitoring capability, increasing stakeholder access to geospatial intelligence.
This talk presents a tool developed at Sparkgeo for automated mapping of burn severity and extent within watersheds of interest. The tool uses multi-source public remote sensing data in a cloud-based workflow, taking advantage of recent open source initiatives including the SpatioTemporal Asset Catalog (STAC). The tool can help assess flood risk from significant rainfall events and may offer essential flood mitigation and risk management knowledge. We present the tool’s deployment to map 2021 wildfires in several British Columbia watersheds.
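The abstract does not spell out the burn-severity computation; a common approach, assumed here purely for illustration, is the differenced Normalized Burn Ratio (dNBR) computed from pre- and post-fire NIR and SWIR reflectances:

```python
def nbr(nir: float, swir: float) -> float:
    """Normalized Burn Ratio; burning lowers NIR and raises SWIR
    reflectance, so NBR drops after a fire."""
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir) -> float:
    """Differenced NBR (pre-fire minus post-fire);
    larger values indicate more severe burns."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Made-up reflectances for a severely burned pixel
severe = dnbr(0.5, 0.2, 0.2, 0.4)
```

In a cloud-based workflow the pre/post scenes would be discovered via a STAC search and the computation applied per pixel across the watershed.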
GeoNode, according to the project's website, is a platform for managing and publishing geospatial data. It brings together mature and stable open source software projects into a consistent, easy-to-use interface, allowing non-specialist users to share data and create interactive maps. In Brazil there is growing use of GeoNode, observed mainly in governmental institutions and universities. One of the main ways of installing and configuring GeoNode is the so-called GeoNode Project. It consists of a custom Django project template, which contains, in addition to the main project files, a set of Dockerfiles for GeoNode components such as GeoServer, Nginx (reverse proxy) and PostGIS. A detailed analysis of the components of a generated GeoNode Project found that the original Dockerfiles contain a series of security holes, as well as packages unnecessary for running the stack, which is not recommended for production environments. A Dockerfile that follows best practices eliminates the need to run privileged containers (as root), the use of unnecessary packages, leaked credentials such as mail passwords or database DSNs, and anything else that could be used for an attack. Removing known risks in advance will reduce security management work and service overhead. The objective of this talk is to discuss the security holes found in the GeoNode Project and, by applying best practices in Dockerfiles, to make it leaner and safer for production environments. For demonstration purposes, an example project will be used, hosted at https://github.com/geonode-br/hardening-geonode-docker.
This presentation examines the global impact of COVID-19 on traffic across 40 cities in Europe, South East Asia, Australia, and North America. I analyzed monthly rush hour traffic from 2019 to 2021 with GeoPandas, leafmap, and HERE's Traffic Analytics data. In addition, I correlated traffic volume with COVID positivity, mortality, and vaccination rates to examine how these factors influence the resumption of pre-pandemic traffic patterns.
Because of the volume of the traffic data (billions of records), desktop GIS software, including spatially enabled databases, could not reliably process it on a desktop computer. Online solutions, such as Google Colab, would have been a costly alternative given the amount of data. However, GeoPandas and a Jupyter notebook running on a laptop were able to process the traffic data, enhance the spatial road data, and join the result to the road network for visualization. The method for processing the data will be discussed in detail. In summary, open source tools give researchers unprecedented ability to process large amounts of data.
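The abstract leaves the processing method to the talk; one standard pattern for datasets too large for memory, sketched here as an assumption, is streamed chunk-wise aggregation that keeps only running statistics per road segment (GeoPandas/pandas support this style via chunked readers):

```python
from collections import defaultdict

def mean_volume_by_road(rows, chunk_size=100_000):
    """Stream (road_id, volume) records in chunks and keep only running
    sums/counts per segment, so memory stays bounded regardless of the
    number of input records."""
    sums, counts = defaultdict(float), defaultdict(int)

    def flush(chunk):
        for road_id, volume in chunk:
            sums[road_id] += volume
            counts[road_id] += 1

    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) >= chunk_size:
            flush(chunk)
            chunk = []
    flush(chunk)  # remainder
    return {k: sums[k] / counts[k] for k in sums}
```

The per-segment aggregates are small enough to join onto the road network geometry for visualization afterwards.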
The Open Geospatial Consortium API family of standards (OGC API) are being developed to make it easy for anyone to provide geospatial data to the web, and are the next generation of geospatial web API standards designed with resource-oriented architecture, RESTful principles and OpenAPI. In addition, OGC APIs are being built for cloud capability and agility.
pygeoapi is a Python server implementation of the OGC API suite of standards. The project emerged as part of the OGC API efforts started in 2018 and provides the capability for organizations to deploy OGC API endpoints using OpenAPI, GeoJSON, and HTML. pygeoapi is open source and released under an MIT license. pygeoapi is built on an extensible plugin framework in support of clean, adaptive data integration (called "providers").
Elasticsearch (ES) is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
The Elasticsearch data provider for pygeoapi is one of the most complete in terms of functionalities and it also includes CQL support with the CQL-JSON dialect, which allows you to take extra advantage of the ES backend.
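A minimal, illustrative pygeoapi resource configuration wiring a feature collection to an Elasticsearch index might look like the fragment below; the resource name, index URL and `id_field` are placeholders, not a real deployment:

```yaml
# Illustrative fragment of a pygeoapi configuration (resources section)
my-features:
  type: collection
  title: Example features
  providers:
    - type: feature
      name: Elasticsearch
      data: http://localhost:9200/my_index
      id_field: id
```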
This presentation will provide an overview of OGC APIs, pygeoapi and Elasticsearch integration, and demonstrate usage in a real-world data dissemination environment.
MapFish Print is a mature Java-based open source software package (BSD-2 license) for printing maps. As opposed to frontend solutions such as inkmap (https://github.com/camptocamp/inkmap), MapFish Print runs server-side and is integrated into several open source GIS frameworks such as GeoMapFish (for creating geoportal applications) and geOrchestra (a spatial data infrastructure).
The classic approach to deploy MapFish Print is using a WAR-file in a Servlet Container (for example Tomcat), while it can also be integrated into cloud environments with prebuilt Docker images. Alternatively, MapFish Print’s core printing library can also be integrated into other projects programmatically.
MapFish Print supports the common data formats and standards (WMS, WFS, WMTS, GeoJSON, etc.) and provides access to rich cartographic features such as rotations, grids, north arrows, legends and multi-page printing. The layout is defined by a JasperReports template and a YAML configuration file. The template allows users to define the layout and include elements for maps, legends, grids and alphanumeric tables. Clients request a concrete print-out with a JSON request, providing information such as the bounding box, map layers and other data. The final report is rendered by MapFish Print either as a PDF or as a raster image and returned to the client.
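A print request of the kind described above might look roughly like this; the layout name, URL, coordinates and layer names are illustrative placeholders, not taken from a real configuration:

```json
{
  "layout": "A4 portrait",
  "outputFormat": "pdf",
  "attributes": {
    "map": {
      "projection": "EPSG:3857",
      "dpi": 254,
      "bbox": [950000, 6000000, 960000, 6010000],
      "layers": [
        {
          "type": "wms",
          "baseURL": "https://example.org/wms",
          "layers": ["roads"],
          "imageFormat": "image/png"
        }
      ]
    }
  }
}
```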
We will present a summary of existing features as well as new (e.g. tiled WMS with buffered tiles for rendering large areas without label conflicts) and planned features of the MapFish Print open source project.
Micronutrient deficiencies (MNDs), so-called ‘hidden-hunger’, can have serious ramifications for the health of individuals affected and the economy of the country in which they live. MNDs are a global problem but disproportionately affect populations in low-income countries. Work to alleviate these deficiencies aligns with the UN’s Sustainable Development Goals (SDG), especially SDG2 – access to adequate safe and nutritious food. Data which can support the understanding of the scale and location of these deficiencies can be fragmented in their availability and accessibility, creating a barrier to their use in planning interventions by stakeholders in the very nations where the impacts of MNDs are most severe.
The Micronutrient Action Policy Support (MAPS) tool is a web-hosted, open-access platform providing a unique enabling environment for the wider agriculture-nutrition community and beyond, which allows users to view and explore MND risks at various spatial and temporal scales. The tool can provide users with dietary micronutrient supply estimates for all nations in sub-Saharan Africa using national-scale and subnational-scale data. Preprocessing steps to clean these data in the R language are made available through the open GitHub repository, so that any user can replicate the data used in the tool.
Priorities for the data and functionality have been co-designed with key users from project proposal stage. Stakeholder feedback is used in continued iteration as richer content, supporting material, and functionality is planned, developed and released.
The platform is built on open source technologies, utilising Postgres and PostGIS to store, combine and interrogate a range of heterogeneous datasets to calculate micronutrient supply estimates, Node.js for data APIs, and web map services using GeoServer. Further data processing is conducted using R, with the front-end interface utilising Angular, leaflet.js and chart.js. Metadata is managed and served via GeoNetwork. The code for the platform, as well as data processing scripts, methods and processes, is open source at https://github.com/micronutrientsupport.
This talk will provide an overview of the platform along with the datasets and open-source technologies that underpin its functionality, and the UX approaches taken to ensure that this tool meets the currently unmet needs of priority users.
The objective of ARCOS is to design and implement an early-warning system providing continuous monitoring of the Arctic Region. Designed to generate actionable products in the security domain by processing and fusing multi-sensor data, the system integrates available information from space, non-space sources and products available from multiple Copernicus services.
ARCOS generates information at three different levels of scale and user interaction:
1) Level 1: Automatic Early-Warning System. Integration of space and non-space data sources to trigger alarms on the region when certain conditions occur. Automatic early warnings are generated when anomalous behaviours are detected. For this wide-area monitoring, automatic extraction of analytics and AI techniques are applied.
2) Level 2: User-Driven Alert System, where space and non-space data are processed for specific locations provided by the user. The alarms can be configured with contextual information based on the user's input.
3) Level 3: Geospatial Intelligence Products. Following early-warnings generated in Level 1 or 2, geospatial intelligence products requiring human intervention are provided upon user request.
The system is developed using GeoNode (https://geonode.org/) as a Core component and integrates OpenEO services (https://openeo.org/) for the generation of innovative contents from open datasets like Copernicus Sentinels data, Copernicus Services data, AIS data and Social media.
Over the last few years, new architectures have arisen with the aim of easing the deployment and usage of geospatial data and software in the cloud. Large organizations are starting to leverage the power of Kubernetes in public clouds such as AWS or Azure, associated with traditional OSGeo software such as GeoNetwork and GeoServer.
In this talk we'll present our experience and results from working with large institutions in the public sector (civil defence, judiciary) and the private sector (insurance, telecoms). We'll demonstrate how working the agile way with open source software and high-level contributors allows us to successfully tackle even the most ambitious challenges.
As a result of these efforts (several hundred person-days contributed to the communities), we were able to participate significantly in developing GeoServer Cloud as well as the GeoNetwork microservices. Both projects aim to solve the cloud-native challenge and are well on their way to succeeding.
After such an initial effort, we encourage every party using open source software to participate in its maintenance and contribute to open source development. Only with real open source engagement can we as a community produce sustainable, best-of-breed open source software. Finally, we recommend that customers make sure their service providers have a positive impact on the communities.
Sensor data (IoT) are widespread in both the private and public sectors. However, making use of sensor data across different sectors and applications is challenging, in particular with respect to geospatial applications across different use cases. This encompasses everything from environment/climate sensors, such as water-level sensors, to smart-building monitoring and water-pipe sensors. An interdisciplinary team from diverse sectors is working towards building national standards and an open architecture, and implementing proofs of concept of a national sensor hub for sharing streams and archives of sensor data in Norway. The team builds upon the very successful open data ecosystems (SDI) that exist in Norway for standardized geospatial data. The project is funded by a range of partners including municipalities, the mapping authority and the maritime ports of Norway. The working group includes open source technical expertise on sensor technology alongside user and demand expertise from the different sectors.
This talk will focus on the technological advances made by the team on both software and architecture. There will be a particular focus on the open architecture and software prototyping developed in the working group, both of which will be available under an open license.
The Open Geospatial Consortium (OGC) and the Open Source Geospatial Foundation (OSGeo) have a long and natural tradition of collaborating. In 2022, the Memorandum of Understanding between both organizations was updated - to pay tribute to ongoing and future activities.
In the initial MoU (2008), OGC and OSGeo agreed to work closely to coordinate with each other’s memberships regarding new standards developments and standards changes that may be required as a result of open source programs. Another important aspect of the relationship is to keep each other well informed of the respective activities and directions. Both aspects have proven to be of great importance. One goal was and is to coordinate activities in such a way as to maximize the achievement of both organizations’ mission and goals.
That includes identifying open source technologies that can be used as reference implementations for, and to validate compliance tests developed for, OGC-adopted standards.
Since the first MoU, there has been an increase in OGC's focus on developers and engagement with software communities and activities. Increased collaboration has also occurred by way of the OGC API code sprints. In addition, key opportunities for cross-pollination have evolved given shared missions (FAIR data) and the viewpoint that FOSS4G software is beneficial for all software.
The development of the OGC API suite of standards is an excellent example of how the MoU works in practical terms. The OGC APIs are a family of Web APIs that have been created as extensible specifications designed as modular building blocks that enable access to spatial data that can be used in data APIs. These revolutionary APIs make location information more accessible than ever before through the use of RESTful principles, and the OpenAPI specification for describing interfaces. OGC APIs have been tested in close collaboration with the global developer and end user communities through hackathons, sprints, and workshops to provide a modern solution to tomorrow’s location sharing issues. For example, the 2021 Joint Code Sprint organized by OGC, OSGeo and the Apache Software Foundation (ASF) included open source implementations of OGC APIs - and became a standing sprint activity that was repeated in 2022.
This presentation provides a deeper dive into the new Memorandum of Understanding and how both open standards and free and open source software can benefit from one another.
ETF is an open source testing framework for validating data and APIs in Spatial Data Infrastructures (SDIs). It is used by software solutions and data providers to validate the conformity of geospatial data sets, metadata and APIs.
For example, ETF is the underlying framework used by the INSPIRE Reference Validator to validate INSPIRE metadata, datasets and services against the requirements of the INSPIRE Technical Guidelines. ETF is also used extensively in Germany by the Surveying Authorities of the Laender to validate their datasets. This includes Real Estate Cadastral data, Topographic data, Control Points, 3D Building Models, House Coordinates and Building Polygons. In the test environments of the German Laender, a comprehensive series of attributive, relational, geometric, and topological tests are performed on the data, in addition to interacting with APIs and checking for errors in the interface contracts. Other European Union (EU) Member States are also reusing ETF to allow their data providers to test resources against national requirements. Finally, some software tools such as GeoNetwork open source include validation based on the ETF API in their workflow.
Goals in designing the ETF software were to create test reports that are user-friendly and self-explanatory as well as to be able to validate large amounts of data, which can be several hundred GB in size. In order to cover different validation tasks and present them in a unified report, the architecture is modular and different Test Engines can be used. Currently the following Test Engines are supported: SoapUI for testing web services, BaseX database for testing XML data, Team Engine to validate WFS and OGC Web APIs using the OGC CITE tests, NeoTL Engine for testing WFS, OGC Web APIs and datasets.
The SDI landscape is changing through the emerging OGC API standards. We are addressing this development in ETF with NeoTL, a new domain specific language. Most recently, the language and its implementation were improved during the OGC Testbed 17 where ETF was used to test OGC Web APIs for compliance with the new OGC API - Processes standard.
As a horizontal and reusable tool, which could be extended to satisfy the needs of different communities and domains, ETF is currently considered as a component of the so-called Common Services Platform under the new Digital Europe Programme of the European Commission. Within this context, activities are planned in 2022 to: i) include the ETF in the OSGeo Live v.15, and ii) submit the ETF as an OSGeo Community Project.
In this talk, we will introduce the ETF testing framework, the deployment scenarios and address the current and future technical developments.
For several years Gter has been involved in the development and maintenance of the webGIS for managing the road network of the Province of Piacenza. Recently, the client requested the integration of images from the photographic survey of about 520 km of routes into the existing webGIS (an instance of Lizmap Web Client whose public version can be found at https://catastostrade.provincia.pc.it/lizmap/lizmap/www/index.php/view/map/?repository=progettipubblici&project=catasto_strade_pub).
In this case, the Public Administration needed the photographic survey in order to update a set of old images sparsely distributed along the network and to have a customized tool similar to services like Google Street View. Therefore, an integration of the Mapillary viewer into Lizmap Web Client was proposed and developed; the survey was performed with a camera that uses front and back lenses to produce 360-degree photos.
The workflow consisted of four main tasks. The first step involved the photographic survey of the road network using a GoPro Fusion 360 camera mounted on a car that took photos of the surrounding environment. The next step consisted of processing the images, stitching the front and back photos together to obtain a 360-degree panoramic image. This step was automated through the development of a Python script together with the camera's available software run from the command line. About 50,000 photos were uploaded to the Mapillary platform. The images were integrated into the Lizmap Web Client webGIS through the Mapillary viewer and utilities. The integration was achieved by developing a new feature for Lizmap Web Client based on MapillaryJS, an open source library provided by Mapillary that helps developers interact with the Mapillary API.
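A minimal sketch of how such a batch-stitching driver could be structured: pair front and back frames by filename and emit one command-line invocation per pair. The CLI tool name and flags are placeholders, not the actual camera vendor's software:

```python
from pathlib import Path

def stitch_commands(front_dir, back_dir, out_dir, tool="fusion-stitcher"):
    """Pair front/back frames by filename and build one stitching
    command per pair (placeholder CLI name and flags)."""
    cmds = []
    for front in sorted(Path(front_dir).glob("*.jpg")):
        back = Path(back_dir) / front.name   # matching back-lens frame
        out = Path(out_dir) / front.name     # stitched panorama output
        cmds.append([tool, "--front", str(front),
                     "--back", str(back), "--out", str(out)])
    return cmds
```

Each command list can then be handed to `subprocess.run`, which is how a script like the one described could drive the stitcher over tens of thousands of frames unattended.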
The final result is a tool that makes the Public Administration able to navigate and reach the photos uploaded on Mapillary directly from a window in the webGIS.
In this presentation, we show how GeoServer-Cloud has matured into a production-ready, cloud-native micro-services application. It has already been successfully deployed in production by three major organizations in their respective Kubernetes environments.
GeoServer-Cloud is a spring-boot/spring-cloud based micro services application built on top of GeoServer. The main goal of this implementation is to have an effective and easy way to scale the different services horizontally, splitting GeoServer geospatial services and API offerings into individually deployable components.
All the services communicate with each other via a messaging queue. There’s no wait-time between configuration changes and their reflection across all services in the cluster, nor the need to reload the applications.
The last year has not only been spent hardening the code; a lot of emphasis has also been put on the deployment procedures. In this presentation we will explain how to deploy GeoServer-Cloud in a Kubernetes environment. We will showcase the official Helm chart that can be used to install it anywhere.
GeoServer-Cloud allows per-service auto scaling and server resource dimensioning, hence optimizing each service based on its performance characteristics. We will discuss how to achieve good load balancing based on service metrics.
The routing engine Valhalla has been extended with a solution to the Chinese Postman Problem (CPP). This means that the most efficient route travelling all roads and paths within a defined polygon can now be calculated.
The CPP is a well-known problem from graph theory, where the goal is to visit every edge in a graph while minimizing the distance traveled. In theory, a graph can be either directed, undirected, or mixed.
Here, the CPP has been implemented for directed graphs, as this corresponds to the representation of graphs in Valhalla and the data structure of OpenStreetMap (OSM). The latter forms the data basis for the calculation of the CPP route.
The CPP is solved using the following set of algorithms: the Floyd-Warshall algorithm, the Hungarian method, and the Hierholzer method. After successfully implementing the theoretical code base of the CPP, the main challenge was to make the route calculation executable using real-world road networks (OSM).
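Of the algorithms listed, Hierholzer's method performs the final tour extraction. A compact sketch on a toy directed graph, assuming the graph has already been balanced by the shortest-path/matching preprocessing (this is an illustration, not the Valhalla implementation):

```python
def hierholzer(graph, start):
    """Eulerian circuit in a directed graph {node: [successors]}.

    Assumes the graph is connected and every node has equal in- and
    out-degree, i.e. the CPP preprocessing (Floyd-Warshall shortest
    paths + minimum-cost matching) has already duplicated edges to
    balance it. Each edge is traversed exactly once.
    """
    succ = {u: list(vs) for u, vs in graph.items()}  # mutable copies
    stack, circuit = [start], []
    while stack:
        u = stack[-1]
        if succ.get(u):
            stack.append(succ[u].pop())  # follow an unused edge
        else:
            circuit.append(stack.pop())  # dead end: backtrack into tour
    return circuit[::-1]

# Toy balanced graph: in-degree equals out-degree at every node.
g = {"A": ["B"], "B": ["C", "D"], "C": ["A"], "D": ["B"]}
```

On real OSM extracts the hard part is exactly what the abstract describes next: the raw subgraph inside the polygon is rarely balanced or even mutually reachable, which is why the preprocessing steps dominate the engineering effort.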
A key problem with the implementation of the theoretical CPP is that in real-world graphs, not every edge is always reachable by all other edges. Therefore, various extensions had to be made to allow the computation of a CPP route using OSM data. For example, within a larger area, rarely all road segments are accessible exclusively via the roads located in the area. It is often necessary to leave the area to access these otherwise inaccessible parts of the road network.
Eventually, we were able to create a working prototype of the CPP in Valhalla. In addition to the ability to freely select the area to be traveled, restricted zones, so-called no-go areas, can also be defined. After selecting the vehicle type (car, bicycle, pedestrian, etc.), the CPP route can be calculated, which also includes turn-by-turn navigation.
OpenMapTiles is an open-source set of tools for processing OpenStreetMap data into zoomable and web-compatible vector tiles to use as high-detailed basemaps. These vector tiles are ready to use in MapLibre, Mapbox GL, Leaflet, OpenLayers, QGIS as well as in mobile applications.
Dockerized OpenMapTiles tools and the OpenMapTiles schema are being continuously upgraded by the community (simplification, performance, robustness). The presentation will demonstrate the latest changes in OpenMapTiles. The last release of OpenMapTiles greatly enhanced cartography and map-styling possibilities, especially for roads, by adding new tags (expressway, access, toll) as well as concurrent route labels and motorway junctions. Improvements were also made in the countryside by adding important tracks and paths (displayed from zoom 12), cliffs, aretes, and ridges. Another enhancement is the possibility to show mountain heights in customary units (feet in the US). OpenMapTiles is also used for generating vector tiles from government open data secured by Swisstopo.
We will give a status report on the GDAL software, focusing on recent developments and achievements in the 3.4 and 3.5 GDAL versions released during the last year, but also on the general health of the project. In particular, we will present new drivers such as the one handling Zarr datasets (a format for the storage of chunked, compressed, N-dimensional arrays) or the Spatio-Temporal Asset Catalog Items driver to create virtual mosaics from STAC items, and potential future additions such as a new JPEG-2000 driver based on the Grok library, a driver for the SAP HANA database, or drivers for columnar storage formats such as Apache Parquet and Arrow. The topic of coordinate epochs in geospatial datasets and how we’ve addressed it in various formats (GeoTIFF, GeoPackage, FlatGeobuf) will also be mentioned, as well as other improvements such as the JPEG-XL codec for the GeoTIFF format, or support for 64-bit integer data types in rasters. We will present the new CMake build system, the roadmap for its implementation, and its advantages for users and developers.
UN Maps is a program led by the United Nations Department of Operational Support in support of several peacekeeping and political missions such as UNSOS, MONUSCO, MINUSCA, MINUSMA and UNISFA.
By leveraging internal and crowdsourcing capabilities, UN Maps aims not only to enrich topographic and operational data in UN mission areas but also to provide peacekeeping and humanitarian actors with topographic maps, operational geo-information, search and navigation tools, and imagery and street-level base maps, leveraging OpenStreetMap, the Wikipedia of maps.
In order to achieve its goals, the UN Maps Initiative is building a thriving community around the collection, validation, usage, and dissemination of open geospatial data. This community is called UN Mappers.
It benefits from established crowdsourcing activities, such as mapathons, training opportunities and other collaborative events involving several stakeholders: UN staff in the field (peacekeeping operations and agencies, funds and programmes), academia (high schools and universities in Africa, the EU and the US), local communities and remote volunteers.
Together, the UN Mappers community gives substantial support not only to the production of maps and web services but also to the development of innovative applications using virtual reality and data analytics. Some of the results obtained by applying open data in these applications will be presented during the talk.
Furthermore, UN Mappers are working on translating and updating OSM documentation material in all six UN official languages, distributed under an open license.
No elevation data exists for Molokini Crater, an island off the coast of Maui that gets 300,000 visitors annually. To make my own, I created a photogrammetric model using a ten-year-old YouTube video fed into an open source photogrammetry package. I exported the resulting mesh as a point cloud, filtered it using the open source point-cloud processing program CloudCompare, and exported it as a raster. I then georeferenced the raster and scaled the model vertically with QGIS. The National Geodetic Survey reports that my model, downsampled to 1 m resolution and made with OSS, is the most comprehensive survey of Molokini Crater in existence, and it was done in less than a day with a zero-dollar budget. Photogrammetry is how Google Maps produces its 3D imagery, and it is an old technique, but a proliferation of tourist and drone videos could open up new avenues for creating maps of small extents. This process is easily automated, and it is only a matter of time before photogrammetry software is capable of integrating images from different times of day, which would massively increase the amount of usable media for modeling. This could be the beginning of internet-archived archaeology, where environments captured as video can be reconstructed and viewed in a new way.
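The final georeferencing and vertical-scaling step can be sketched in code. The snippet below is a minimal NumPy illustration (all coordinates and transform parameters are hypothetical, not the actual Molokini values): X/Y get a similarity transform into map coordinates while Z gets an independent scale factor.

```python
import numpy as np

def georeference(points, scale_xy, rotation_deg, translation, scale_z):
    """Apply a 2D similarity transform (scale, rotate, translate) to X/Y
    and an independent scale to Z, as when fitting a photogrammetric
    model in an arbitrary frame to known map coordinates and elevations."""
    theta = np.radians(rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    out = points.astype(float).copy()
    out[:, :2] = scale_xy * (points[:, :2] @ rot.T) + translation
    out[:, 2] = scale_z * points[:, 2]
    return out

# Hypothetical model-space points (x, y, z) and transform parameters
model = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 2.0]])
world = georeference(model, scale_xy=100.0, rotation_deg=90.0,
                     translation=np.array([700000.0, 2300000.0]),
                     scale_z=50.0)
```

In practice the parameters would be solved from control points with known positions, but the structure of the transform is the same.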
QGIS turned twenty this year. The first lines of code were written in mid-February of 2002 and the first time the code compiled and ran, it could do one thing:
Connect to a PostGIS database and draw a vector layer.
Quoting Gary Sherman - "The mythical man of QGIS that no one has ever met":
This was the humble beginning of one of the most popular open-source GIS applications. GRASS GIS is of course the granddaddy of open source GIS, but the 20th birthday of QGIS is a testament to its longevity and to the commitment of all those who have made it what it is today.
In this talk I'll share a walkthrough of the most game-changing features and events that shaped QGIS and its community over the past 20 years, making it one of the top ten most important C++ open-source projects and an overall amazing project to represent :)
Happy Birthday QGIS!
Connectivity of roads in a map is essential for many use cases, including navigation. We present a graph-based solution to the road conflation problem which takes into account the connectivity of the road network. First, we generate a road network graph in both sources based on bifurcation points. Second, we carry out node and edge matching between the graphs, using shortest distance as the matching criterion. This is followed by the merging stage, where graph edges with matching end nodes get conflated. Newly added roads are connected with the graph based on node and edge matching. We carry out experiments conflating open source footway datasets from multiple cities with OSM. The resulting conflated map contains up to 16x map feature improvements per city, with geometrically accurate and smooth results around road junctions. Future work involves using different graph matching criteria to improve the conflated output.
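As a minimal illustration of the matching stage, the sketch below implements greedy nearest-distance node matching between two tiny synthetic networks in plain Python; the actual system also matches edges and merges geometries, which is omitted here.

```python
import math

def match_nodes(nodes_a, nodes_b, max_dist=10.0):
    """Greedy nearest-distance node matching between two road networks.
    nodes_* map node id -> (x, y); each node in A is matched to its
    closest still-unmatched node in B within max_dist (an illustrative
    stand-in for the shortest-distance matching criterion)."""
    unmatched = set(nodes_b)
    matches = {}
    for a, (ax, ay) in nodes_a.items():
        best, best_d = None, max_dist
        for b in unmatched:
            bx, by = nodes_b[b]
            d = math.hypot(ax - bx, ay - by)
            if d < best_d:
                best, best_d = b, d
        if best is not None:
            matches[a] = best
            unmatched.discard(best)
    return matches

# Bifurcation points of two synthetic networks that nearly coincide
osm = {"a": (0.0, 0.0), "b": (100.0, 0.0)}
footways = {"p": (1.0, 1.0), "q": (99.0, 2.0), "r": (500.0, 500.0)}
pairs = match_nodes(osm, footways)
# 'r' stays unmatched: a newly added road to be connected to the graph
```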
The National Land Survey (NLS) of Finland decided in the fall of 2020 to develop a national topographic data production system based on open source technologies, especially the QGIS client. Since then, many significant steps have been taken so that we will be able to deliver the MVP of the application to the mappers of the NLS at the start of 2024. Later on, we plan to move our digital elevation model production to similar technologies, which would enable us to replace the current mapping system by 2026. In this presentation I will talk about our system's architecture, the tools we are developing and some insights that we have gained during this project.
The application's architecture is based heavily on Postgres, where we run the main database from which the national topographic maps are made. The operators can access the main database via the job management plugin and modify the database objects on the client (QGIS). These modifications (inserts, updates, deletes) are saved in the operator's work database, from which they can register the changes to the main database.
Our first challenge was to design how more than 150 operators of the NLS can work against the same main database. We decided to use a client/web-service architecture. The benefit is that we can use QGIS as it is and build all of the functions required for job management into the backend application. The job management plugin communicates with the web service, and conflicts between the different work databases are handled by a separate tool. With the tool, an operator can resolve conflicts created by another operator editing the same objects.
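A minimal sketch of such a conflict check, assuming a hypothetical change-set format in which each pending change records the main-database version it was based on (the real tool's data model is not described in the talk):

```python
def find_conflicts(main_versions, changeset):
    """Detect stale edits: a change conflicts when the object's version
    in the main database no longer matches the version the operator's
    work database was based on (hypothetical change-set format)."""
    conflicts = []
    for change in changeset:
        current = main_versions.get(change["id"])
        if current is not None and current != change["base_version"]:
            conflicts.append(change["id"])
    return conflicts

# Main-database object versions and one operator's pending changes
main = {"road:1": 3, "building:7": 1}
pending = [
    {"id": "road:1", "base_version": 2, "op": "update"},     # edited by
    {"id": "building:7", "base_version": 1, "op": "delete"}, # someone else?
]
stale = find_conflicts(main, pending)  # only road:1 needs resolution
```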
Currently, we are integrating stereo compilation with QGIS, which will enable the operators to measure objects from 3D aerial stereoscopic photos. The next steps are to develop comprehensive tools for quality assurance and to improve basic QGIS tools for selecting, editing and digitizing objects. The problem is that we have over a hundred different layers, therefore the default way to choose the layer is not sufficient for the operator. We also want to develop tools for real time quality checking so that the mapper would immediately know if a quality error occurs, making the general workflow smoother.
Although some of the components are custom-made, our purpose is to publish those components that we recognize to be useful for other QGIS users. In addition, we have seen the value of multiple premade plugins to which we are planning to contribute as well. As a whole, NLS is currently looking for new ways to be a part of the open-source community.
Converting OpenStreetMap planet data into vector tiles has been a complex and costly process, but now, thanks to the Planetiler project, it has become possible to do on a single powerful machine in just a few hours – over two orders of magnitude speed up!
OpenMapTiles is a mature, customizable tile generation framework and layer specification that can be tailored to specific tile generation needs. It has existed for many years and has allowed users to generate their own layers, optimizing for size or completeness. Over the years it moved to a PostGIS-based ST_AsMVT approach and made numerous small improvements. The biggest downside of OMT has been its extensive hardware requirements.
Recently Mike Barry rewrote the core functionality of the OMT stack as a single monolithic app, making it possible to generate tiles for the entire planet in just a few hours on a single machine. Now the OMT community is actively adopting this new approach, researching whether Rust would be an even better fit, and experimenting with how to make the process customizable and support real-time updates.
Open source software has grown rapidly in recent years in many developed countries. Developing countries, especially low-income states, have also seen this development. The emergence of open source software, especially in the field of geographic information, has created practitioners and communities around these digital tools and commons. However, this free and easy access is seen by many technicians and decision-makers as less reliable and less efficient than proprietary software. In order to understand the perceptions and the level of adoption of free software in Togo, we conducted a small survey among students, governmental and civil society actors and non-professional cartographers. This presentation will help to highlight the importance of free software in Togo. It will clarify the types of projects in which open source geospatial software is used and its interest for users; in short, it will give a clear idea of the state of open source geospatial software in Togo.
The amount of data we have to process and publish keeps growing every day; fortunately, the infrastructure, technologies, and methodologies to handle such streams of data keep improving and maturing. GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster, and mapping. It powers a number of open source projects like GeoNode and geOrchestra and is widely used throughout the world by organizations to manage and disseminate data at scale. We integrated GeoServer with some well-known big data technologies like Kafka and Databricks, and deployed the systems in the Azure cloud, to handle use cases that required near-real-time display of the latest received data on a map as well as background batch processing of historical data.
This presentation will describe the architecture put in place, and the challenges that GeoSolutions had to overcome to publish big data through GeoServer OGC services (WMS, WFS, and WPS), finding the correct balance that maximized ingestion performance and visualization performance. We had to integrate with a streaming processing platform that took care of most of the processing and storing of the data in an Azure data lake that allows GeoServer to efficiently query for the latest available features, respecting all the authorization policies that were put in place. A few custom GeoServer extensions were implemented to handle the authorization complexity, the advanced styling needs, and big data integration needs.
The Re3gistry is an open source software for creating, managing and sharing reference codes in a consistent way. Released under the European Union Public License (EUPL) v.1.2 (https://github.com/ec-jrc/re3gistry), it is a key component ensuring interoperability in data infrastructures.
The Re3gistry supports organizations in managing and updating “reference codes” through unique identifiers. Reference codes can be used for example to define sets of permissible values for a data field or to provide a reference or context for the data being exchanged. Examples are enumerations, controlled vocabularies, taxonomies, thesauri or simply ‘lists of things’. The Re3gistry provides a means to assign identifiers to such items and their labels, definitions and descriptions in different languages. It provides a user-friendly interface where labels and descriptions for reference codes can be easily browsed by humans and retrieved by machines, including the possibility of downloading them in different formats and exploiting the information using a REST API.
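As a rough illustration of the concept (this is not the actual Re3gistry JSON schema, just a minimal in-memory stand-in), a reference code couples one identifier with labels and definitions in several languages:

```python
# Illustrative in-memory model of a reference code item; the real
# Re3gistry REST API returns richer JSON. This mirrors only the idea
# of one identifier carrying labels/definitions in multiple languages.
reference_code = {
    "id": "http://example.org/registry/theme/ad",  # hypothetical URI
    "labels": {"en": "Addresses", "it": "Indirizzi", "fi": "Osoitteet"},
    "definitions": {"en": "Location of properties based on address identifiers."},
}

def label(item, lang, fallback="en"):
    """Return the label in the requested language, falling back to English."""
    return item["labels"].get(lang, item["labels"][fallback])

print(label(reference_code, "it"))  # Indirizzi
print(label(reference_code, "de"))  # no German label -> Addresses
```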
The European Commission’s Joint Research Centre (JRC) started the development of the Re3gistry in 2014 to satisfy the interoperability requirements set by the INSPIRE Directive. In 2020, considering the high reusability of the tool beyond INSPIRE, the JRC released the code as open source. So far, the development of the Re3gistry has been supported by the European Commission under the interoperability actions ARE3NA and ELISE and, more recently, by the National Land Survey of Finland.
In 2021, a second version of the Re3gistry software was released. Version 2.0, complemented by subsequent minor releases, introduced several new features such as a management interface to add and modify the status of items, and the capability to trace the full lifecycle of items following the workflow defined by the ISO 19135 standard. The current stable release is v2.3.0, released in March 2022. The INSPIRE registry (https://inspire.ec.europa.eu/registry) is currently the most popular implementation of the Re3gistry software. It is the central point of access to (currently) more than 7000 reference codes, available in 23 languages and several formats (HTML, ISO 19135 XML, JSON, RDF/XML, Re3gistry XML, CSV), grouped into different centrally managed INSPIRE registers. The content of these registers is based on the INSPIRE Directive, Implementing Rules and Technical Guidelines (used, for illustration, to reference INSPIRE themes, code lists and application schemas).
The Re3gistry is currently used by many organizations across European countries (Austria, Finland, France, Italy, Slovakia, Spain, the Netherlands and North Macedonia), different EU-funded projects and private organizations even outside Europe. Since 2021, Re3gistry v2.x has been included in OSGeo Live to facilitate discovery and use by the open source geospatial community. In 2022 the software will also be submitted as an OSGeo Community Project.
Summing up, the Re3gistry is more relevant than ever to ensure semantic and organisational interoperability across any kind of system, including Spatial Data Infrastructures (SDIs).
Spatial stratification of landscapes allows for the development of efficient sampling surveys, the inclusion of domain knowledge in data-driven modeling frameworks, and the production of information relating the spatial variability of response phenomena to that of landscape processes. This work presents the rassta package as a collection of algorithms for spatial stratification developed in the R environment. The core ideas implemented in the rassta package include the multi-scale, hierarchical landscape stratification based on spatial intersection, the application of non-parametric distribution estimators to define the typical landscape configuration of stratification units, and the use of spatially explicit landscape correspondence metrics for non-probability sampling and predictive modeling. The theoretical background of rassta is presented through references to several studies which have benefited from landscape stratification routines. The functionality of rassta is presented through code examples which are complemented with the geographic visualization of their outputs. Moreover, domain-specific applications are presented to demonstrate the applicability of rassta for the spatial modeling of diverse environmental phenomena.
ECMWF is a research institute and a 24/7 operational service, producing global numerical weather predictions and other data for a broad community of users. To achieve this, the centre operates one of the largest supercomputer facilities and data archives within the meteorological community. ECMWF also operates several services for the EU Copernicus programme to provide data for Climate Change, Atmospheric monitoring and Emergency services.
As part of ECMWF's open data initiative, more and more meteorological data and web services are freely available to a wider community. ECMWF's web services include an interactive web application to explore and visualize its forecast data, a Web Map Service (WMS) server and many graphical products, including geospatial weather diagrams: so-called Ensemble (ENS) meteograms and vertical profiles.
ENS meteograms and vertical profile diagrams are among ECMWF's most popular web products and present ECMWF's multi-dimensional real-time ensemble forecast data for any given position globally. They are freely available through various ECMWF web services and integrated in ECMWF's GIS-based interactive web application. The datasets powering the dynamically generated diagrams are formed from a rolling archive of 10 days of data, updated twice a day, with each update consisting of around half a terabyte of data. An upcoming upgrade of ECMWF's forecasting system will increase the data size by a factor of 3-4 in the near future. In addition to ECMWF's forecast data, similar services are requested as part of various Copernicus projects producing different datasets.
This talk presents the migration of the legacy data structure used for ENS meteogram datasets to a more flexible, extensible, and high-performing one fit to be used by GIS systems, using Free and Open Source Software (FOSS). The new data structure builds on the Python ecosystem. The data preparation workflow, as well as the challenges encountered and the solutions adopted when dealing with large and frequently updated geospatial datasets, are presented. The talk will also include early experiments and experiences in offering these datasets as part of OGC's Environmental Data Retrieval (EDR) API.
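The core reduction behind an ENS meteogram can be sketched with NumPy on synthetic data: the ensemble-member dimension is collapsed into percentiles per forecast step, which is what the box-and-whisker diagrams display. This is an illustrative sketch, not ECMWF's actual data structure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ensemble forecast: 51 members x 40 forecast steps of a
# single variable (e.g. 2 m temperature in degrees C) at one location
members, steps = 51, 40
temperature = 15.0 + rng.normal(0.0, 2.0, size=(members, steps))

# A meteogram box plot collapses the member axis to percentiles,
# giving one (percentile, step) array ready for plotting
percentiles = np.percentile(temperature, [10, 25, 50, 75, 90], axis=0)
```

A real pipeline would do this for every grid point, variable and forecast run in the rolling archive, which is where the flexible, chunked data structure matters.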
Cloud computing is revolutionizing the way companies develop, deploy and operate software, and geospatial software is no exception. With benefits of cloud-based deployments ranging from cost savings to simplified management, flexibility, lower downtime and scalability of dynamic environments, it is easy to understand why more and more companies are migrating their on-premise systems to the cloud. But cloud-based setups have their own set of hurdles and challenges.
The migration itself can be challenging, and monitoring, debugging and scaling applications in the cloud are very different from what you may be used to.
In this presentation we will share with you the lessons we have learned at GeoSolutions to tackle these problems and share some common patterns for the migration of on premise GeoServer clusters to the cloud. We'll share with you tips on:
- best practices for migrating your existing GeoServer cluster to the cloud
- gaining insights into your GeoServer cluster using centralized logging and the Monitor plugin
- avoiding common bottlenecks to best set up a distributed, scalable GeoServer cluster
- working with containers and container orchestrators like Kubernetes
TerriaJS is an open-source framework for web-based geospatial catalogue explorers.
It uses Cesium and Leaflet to visualise 2D and 3D geospatial data, and it supports over 50 different Web APIs, file formats and open data portals.
TerriaJS is used across the globe to create next-generation digital twin platforms for open geospatial data discovery, visualisation and sharing. It drives
- National Map (Australian Gov)
- Digital Earth Australia Map
- Digital Earth Africa Map
- Pacific Map
- NSW Spatial Digital Twin (Australian State Gov)
- and many others
Supported formats/protocols include:
- Imagery services like WMS/WMTS and ArcGIS Imagery/Map Services
- Feature/vector services like WFS, ArcGIS Feature Service, Mapbox vector tiles, GeoJSON, Shapefile, KMZ, GPX, GeoRSS
- 3D sources like Cesium 3D Tiles, glTF and CZML
- Tabular/sensor data: CSV, SDMX, GTFS, SOS, Socrata and Opendatasoft
- Open data portals: CKAN, CSW, Socrata, Opendatasoft, Magda, THREDDS, WMS/WFS servers, ArcGIS Portal and SDMX
- Geoprocessing with WPS
- With plans to support new and upcoming services like OGC APIs and STAC in the future.
In this talk, I will show how TerriaJS can connect to Open Data Portals to
- Discover open datasets
- Visualise datasets in 2D and 3D
- Perform aggregation/analysis on datasets
- Create/share maps with the world!
Geo Engine is a cloud-ready geospatial analysis platform that provides intuitive and low-threshold access to geospatial data, its processing, interfaces, and visualization. Users can access the engine in a browser-based user interface as well as with Jupyter notebooks in Python. An important element is the homogenized "Datacube"-like view of heterogeneous data, which allows research groups and companies easy access and low-threshold analyses. At the same time, it is a framework for the creation and operation of geodata portals.
The development is based on research results from the field of spatio-temporal data processing from the database systems group at the University of Marburg, Germany. It is currently used in scientific projects focused on environmental and biodiversity monitoring, where it provides native time series processing, the combination of raster and vector data, and a user interface that enables linked views between maps, tables, and plots. In addition, it is used for the provision of customized apps, for example for web-based remote-sensing learning and project portals.
The presentation gives an overview of the system and its features. The processing backend will be discussed, which allows tile-based (for raster data) or chunk-based (for vector data) processing, taking into account the time semantics of the data. In addition, we show a use case demonstration where we exemplify the seamless transition from Geo Engine’s UI to Python notebooks and also the step back. Finally, we give an overview of future development goals.
The challenges posed to the current urban mobility model by pollution and urbanisation have significantly increased the importance of urban resilience. Mobility management, pandemic spread, equal access to services and the climate crisis are just some of the crucial issues that fall within the definition of urban resilience.
One very promising solution aiming to solve many of these issues was presented in 2016 by Professor Carlos Moreno under the name of the “15-minute city”. The paradigm is based on the idea that every citizen should be able to reach essential services (supermarkets, shops, parks, etc.) by walking no more than 15 minutes from their home. The model is being tested in some metropolitan cities around the world (e.g. Paris).
However, reorganizing a city so that it follows the 15-minute structure is not an easy task. It requires large resources and careful, data-driven planning to make sure that the projects undertaken will actually have a positive effect on urban mobility and that no assets are wasted on useless projects.
The Business Innovation team of Dedagroup Public Services used OpenStreetMap data to develop an index that detects the local level of proximity within a city, showing both the areas that already conform to the 15-minute model and those that do not, where taking action would improve the quality of life of the citizens living there.
The presentation will focus on this proximity index, describing the assumptions behind its definition, such as the choice of city services considered essential, the nature of the road network used to compute walking distances and the area tiling chosen for the task.
The index will then be showcased for the city of Florence, together with an analysis of the city from a proximity point of view and a what-if scenario: how would the index change if the municipality (and other relevant stakeholders) decided to intervene in low-proximity areas?
The case of Ferrara will also be presented to show that the proximity index can be the basis for further analyses: coupling the index with resident population counts can help to spot areas that are both under-served and highly populated, i.e. the ones where most people would benefit from improvements.
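A minimal sketch of the aggregation behind such an index, with hypothetical service categories and walking times (the actual index definition presented in the talk may differ):

```python
# Hypothetical set of essential service categories; the real index is
# computed from walking distances over the OSM road network per tile.
ESSENTIAL = {"supermarket", "pharmacy", "school", "park"}

def proximity_index(nearest_minutes, threshold=15.0):
    """Fraction of essential categories reachable on foot within the
    threshold: 1.0 means the tile fully conforms to the 15-minute model.
    nearest_minutes maps category -> walking time to nearest instance."""
    reachable = sum(1 for cat in ESSENTIAL
                    if nearest_minutes.get(cat, float("inf")) <= threshold)
    return reachable / len(ESSENTIAL)

tile_a = {"supermarket": 6.0, "pharmacy": 11.0, "school": 9.0, "park": 4.0}
tile_b = {"supermarket": 22.0, "pharmacy": 14.0, "school": 30.0}  # no park
print(proximity_index(tile_a))  # 1.0
print(proximity_index(tile_b))  # 0.25
```

The what-if scenario then amounts to editing a tile's walking times (e.g. adding a nearby park) and recomputing the index.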
The story and the future of the MapLibre community: the project that has continued to develop browser and native technologies for map tile visualization ever since Mapbox changed the licensing of the amazing Mapbox GL JS technology, which sadly became proprietary and restricted to Mapbox's own service.
This talk will cover the existing library's capabilities and how the project grew to include native, navigation, routing, 3D, and other features; how the project was able to quickly migrate to TypeScript with lots of additional testing and stabilization effort; how we became a large, decentralized collective of mapping technologies covering web, Android and iOS devices; and how hundreds of small and large donations from developers and companies have helped with extra incentives.
Some possible future projects and ideas will be presented by individual feature owners, including the possibility of uniting all library efforts through cross-platform compilation from common Rust code (WebAssembly + native libs) and additional styling features.
QGIS releases three new versions per year and each spring a new long-term release (LTR) is designated. Each version comes with a long list of new features. This rapid development pace can be difficult to keep up with, and many new features go unnoticed. This presentation will give a visual overview of some of the best new features released over the last calendar year. This will be a mixture of important/popular features along with those which are easily overlooked or missed. Each highlighted feature will not simply be described, but will be demonstrated with real data. The version number for each feature will also be provided. This will let you know which new features are included in the LTR. If you want to learn about the current capabilities of QGIS this talk is for you! Potential topics include: Annotation layers * GUI enhancements * New Expressions * Point cloud support * Print layout enhancements * New renderers and symbology improvements * Mesh data algorithms * 3D * Editing
The amount of data available from drones, earth observation, and machinery itself (i.e. telemetry data), plus the advent of cloud infrastructure, has given a huge impulse to innovating the way we support farmers and farming in general, democratizing access to data and capabilities like never before through precision (or digital) farming solutions.
Precision farming (or digital farming) has therefore become one of the main use cases for GeoServer deployments over the past years, and at GeoSolutions we have worked with many clients, from NGOs to large private companies (like Bayer), from startups to organizations like DLR, helping them support their clients in making sense of data and information through GeoServer and other geospatial open source technologies, at scale, in the cloud.
This presentation will condense 10 years of GeoSolutions experience in ingesting, managing and disseminating data at scale in the cloud for the precision farming industry, covering items like:
- Proper optimizations and organization of raster data
- Proper optimizations and organization of vector data
- Modeling data for performance & scalability in GeoServer and PostGIS
- Deployment guidelines for performance and scaling GeoServer
- Styling to create NDVI and other visualizations on the fly
At the end of the presentation, attendees will be able to properly design and plan a GeoServer deployment to serve precision farming data at scale.
Compass Informatics is pleased to announce the open sourcing of its routing library, Wayfarer.
Wayfarer is a pure Python library that allows spatial features to be loaded into a NetworkX network format. Once in this format, the data can be manipulated and analysed using the huge range of graph algorithms in NetworkX.
The Wayfarer library provides a number of helper functions, for example to calculate routes, split edges, find the ends of paths, and retrieve features by keys.
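Wayfarer's own API is not shown here, but the underlying idea can be sketched with plain NetworkX: spatial features become weighted edges, after which NetworkX's graph algorithms, such as shortest-path routing, apply directly (the field names below are illustrative, not Wayfarer's actual loading interface).

```python
import networkx as nx

# Spatial features as (from_node, to_node, attributes) records;
# attribute names are illustrative only
road_segments = [
    ("A", "B", {"length": 120.0, "road_id": "R1"}),
    ("B", "C", {"length": 80.0, "road_id": "R1"}),
    ("A", "C", {"length": 250.0, "road_id": "R2"}),
]

g = nx.Graph()
g.add_edges_from(road_segments)

# With the features in NetworkX form, routing is a one-liner
route = nx.shortest_path(g, "A", "C", weight="length")
cost = nx.shortest_path_length(g, "A", "C", weight="length")
print(route, cost)  # ['A', 'B', 'C'] 200.0
```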
The talk will outline the use cases for the library, and when it may be suitable to use rather than alternatives such as pgRouting. Case studies will be presented including Wayfarer’s use in Ireland's Pavement Management System to help designate works and surveys on the road network. Wayfarer is also currently used for an Environmental Protection Agency project to create fully connected river networks in Ireland.
This presentation will be a real story about the process of building web mapping portals with useful public transport and air quality information.
Two portals will be shown, one for Warsaw and one for Cracow (https://gdziejestautobus.pl/mapa/, https://www.mapakrakow.pl/).
This is a use case of open source software and open data for building web mapping portals. The challenges of constructing layers with live positions of public transport vehicles and the state of air quality in Poland will be presented.
The technical details will be presented along with the logistical and business aspects. The following points will be covered: the development software used, user experience challenges, project design, project organization, effort, cost and legal issues.
The sources of data from open public API services in Poland will be shown. One is the open API with vehicle location data from the public transport office in Warsaw. The second is data from the public office responsible for environmental protection and monitoring.
Both presented portals show the live position of public transport vehicles. The portal for Cracow also shows the live state of air pollution in the city. The pollution data come from sensors located across Poland collecting air quality measurements.
Both portals are examples of how to connect open source Web GIS tools and open public data to build an interesting web mapping site showing useful data in a convenient spatial way.
Street-level photographs of New York City from the early 1900s show how people used to live, from their clothes and vehicles to their stores and advertisements. Several open source projects have mapped archival “street view” images of New York, relying on various collections of photos with locations. These interactives, primarily built with Mapbox GL JS, are instructive when visualizing a newly-digitized archive, in this case a set of over 60,000 photos from the construction of the NYC subway between 1900 and 1950 with approximate coordinates.
“Street View, Then & Now: New York City's Fifth Avenue” compares 1911 wide-angle photographs from the New York Public Library to 2015 Google Street View imagery. A mini-map shows each photo’s location and field of view, and a visitor to the site can “go south”, “go north”, or “cross the street” using the arrow keys. The project came out of the NYC Space/Time Directory, an initiative to communicate the history of the city using historical maps, geodata, and open source tools. Code: https://github.com/nypl-publicdomain/fifth-avenue.
“1940s.NYC” places digitized photos of most buildings in the five boroughs of New York City, collected from 1939 to 1941 by the Tax Department with help from the Works Progress Administration, on a map. Zooming in loads georeferenced scans of historical maps, and clicking on a marker opens a panel displaying the historical photos. “80s.NYC” remixed the site, using more recent images from the Department of Finance. Code: https://github.com/jboolean/1940s.nyc, https://github.com/bdon/80s.nyc.
“A Stroll Down Flatbush Avenue circa 1914” strings together 65 photographs, captured approximately every 50 feet, from the New-York Historical Society’s “Subway construction photograph collection, 1900-1950.” Geometries for the photos, which are often set at nearby intersections, were manually modified, and a mini-map showing those points is navigable with the up and down keys. Code: https://github.com/chriswhong/stroll-down-flatbush.
The entire subway construction photograph collection contains nearly 100 times as many photos as shown along Flatbush Avenue with associated latitudes and longitudes across New York City. What are the best practices for mapping these geotagged archival photos, with imprecise and duplicate locations, as well as rich text metadata like titles, topics, dates, and descriptions?
The use of GIS to support energy planning is now widespread and well established, as evidenced by the numerous studies available in the international literature. Many companies and governmental institutions have transferred their data and results into open source web platforms or tools for public access.
Within the broad topic of the interaction between renewable energy and the environment, in recent years RSE S.p.A. has needed to develop and maintain WebGIS and online platforms related to various aspects of the energy system, in order to characterize the territory and its possible influence on the integration of renewable energy sources into the energy system, thus supporting decision-making towards the energy transition.
Besides standard WebGIS functionalities, the Integrated Atlas provides access to TOTEM (Territory Overview Tool for Energy Models), an advanced open source tool for the energy characterization of the territory, essential for supporting multi-energy modelling. Starting from spatial and energy data, the TOTEM tool estimates electricity and heat demand, wind and solar resources, and other significant energy variables at an hourly, provincial scale. On the technical side, the tool and its web interface are developed in Python and use libraries such as Pandas, Flask and Bokeh. The tool is open source and will be released under the MIT License; however, only a portion of the input data is currently publicly available due to data providers' restrictions.
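As a purely illustrative sketch of the kind of estimate such a tool produces (not TOTEM's actual code or its real load profiles), an annual provincial demand can be distributed over hours using a normalized load profile:

```python
def hourly_demand(annual_demand_mwh, profile):
    """Distribute an annual provincial demand over hours using a
    normalized load profile (weights that sum to 1)."""
    total = sum(profile)
    return [annual_demand_mwh * w / total for w in profile]

# toy 4-hour profile instead of a realistic 8760-hour one
profile = [0.1, 0.3, 0.4, 0.2]
series = hourly_demand(1000.0, profile)
```

The real tool derives such profiles from measured spatial and energy data rather than hard-coded weights.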
The need to harmonize data and analyses on the relations between energy and environment, and to provide an access point to the developed tools, inspired the creation of the Energy and Environment Geoportal, based on the MapStore open source web platform developed by GeoSolutions. This platform allows viewing and querying published geospatial data, integrating different remote resources into interactive and immediate representations such as maps, dashboards or geostories.
As an example, detailed results of the above-cited multi-energy models, which take spatialized energy data as inputs, have been synthesized in a geostory, an immersive narrative explaining how a multi-energy analysis based on a detailed territorial characterization of a region can support the evaluation of the best alternatives for its energy development. The geostory is accessible from the Energy and Environment Geoportal.
Note that these products are still under development and subject to continuous updates of both data and technology. More details about contents and tools are left to the final presentation.
Building footprint extraction is a popular and booming research field. Every year, several papers are published presenting deep learning semantic segmentation methods for this kind of automated feature extraction. Unfortunately, many of those papers do not have open-source implementations available for public use, making it difficult for other researchers to build on them.
With that in mind, we present DeepLearningTools and pytorch_segmentation_models_trainer. Both are openly available implementations of deep learning-based semantic segmentation. This way, we seek to strengthen the scientific community by sharing our implementations.
DeepLearningTools is a QGIS plugin that enables building and visualizing masks from vector data. Moreover, it allows using the inference web services published by pytorch_segmentation_models_trainer, making it more feasible for QGIS users to train and use deep learning models.
pytorch_segmentation_models_trainer (pytorch-smt) is a Python framework built with PyTorch, PyTorch-Lightning, Hydra, segmentation_models.pytorch, rasterio, and shapely. This implementation enables using YAML files to perform segmentation mask building, model training, and inference. In particular, it ships pre-trained models for building footprint extraction and post-processing implementations to obtain clean geometries. In addition, one can deploy an inference service built using FastAPI and use it in either web-based applications or a QGIS plugin like DeepLearningTools.
ResNet-101 U-Net Frame Field, ResNet-101 DeepLabV3+ Frame Field, HRNet W48 OCR Frame Field, Modified PolyMapper (ModPolyMapper), and PolygonRNN are some of the models available in pytorch-smt. These models were trained using the Brazilian Army Geographic Service Building Dataset (BAGS Dataset), a newly available dataset built using aerial imagery from the Brazilian States of Rio Grande do Sul and Santa Catarina. Pytorch-smt also enables training object detection and instance segmentation tasks using concise training configuration.
This talk presents a usage overview of both technologies along with some demonstrations. Using metrics like precision, recall, and F1, we assess the results achieved by the implementations developed as a product of our research, showing that they have the potential to produce vector data more efficiently than manual acquisition methods.
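For reference, the metrics mentioned above are simple functions of true positives, false positives and false negatives; a minimal sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# e.g. 80 correctly extracted footprints, 20 spurious, 20 missed
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
```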
DeepLearningTools is available in the QGIS plugin repository, while pytorch_segmentation_models_trainer is available on the Python Package Index (installable with pip). The Brazilian Army Geographic Service develops both solutions, making their code available at https://github.com/phborba/DeepLearningTools and https://github.com/phborba/pytorch_segmentation_models_trainer.
By 2020, the National Land Survey of Finland had scanned and digitized over 100,000 historical orthophotos dating from 1931 to 2020. This unique dataset had been open for a couple of years via a WMS-T API service, but was not well known to the public at the time and not truly available to less technically oriented users. Hence, there was a need to make the images findable and easily accessible with a web browser.
Instead of building a completely new service, the images were published through Finland's open source based national geoportal, Paikkatietoikkuna. The geoportal is built with the Oskari Map Application Platform, which was enhanced for this use case to support time series data. The historical orthophotos are scattered both in time and in geography. To improve the end-user experience and make discovering the data easier, an OGC API - Features service was used for metadata.
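As a hedged illustration of how such a metadata service is queried (the endpoint and collection id below are hypothetical, not the actual Paikkatietoikkuna service), an OGC API - Features items request filtered by bounding box and time interval can be assembled like this:

```python
from urllib.parse import urlencode

def items_url(base, collection, bbox, start, end, limit=100):
    """Build an OGC API - Features /items request filtered by
    bounding box and time interval."""
    params = {
        "bbox": ",".join(str(c) for c in bbox),
        "datetime": f"{start}/{end}",
        "limit": limit,
    }
    return f"{base}/collections/{collection}/items?{urlencode(params)}"

url = items_url(
    "https://example.org/ogcapi",          # hypothetical endpoint
    "historical_orthophotos",              # hypothetical collection id
    bbox=(24.5, 60.1, 25.2, 60.4),
    start="1950-01-01T00:00:00Z",
    end="1959-12-31T23:59:59Z",
)
```

The response is plain GeoJSON, which is what makes the metadata easy to consume from a browser-based map client.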
The final result was received with quite a bit of enthusiasm: after publishing the data in Paikkatietoikkuna geoportal in June 2021, the visitor numbers soared to new records. Nearly all of the feedback has been positive, and now it is possible for anybody to benefit from this extremely valuable and fascinating historical data.
The code is fully open source and can be easily used in any Oskari instance – all you need is the data, which, obviously, isn’t the easiest part of this all.
QWC2 is modular and extensible, and provides both an off-the-shelf web application and a development framework: you can start simple and easy with the demo application, and then customize your application at will, based on your needs and development capabilities.
This talk introduces the application and shows how easy it is to publish your own QGIS projects on the web. An overview of the QWC2 architecture will also be given, along with the latest features developed in the past year and ideas for future improvements.
Neither the open (geo)data initiative nor, needless to say, the open source for geospatial movement is the new kid on the block anymore. Crucial steps have been taken all over the world to establish a framework of openness, collaborative development and transparency, ranging from hands-on events - such as code sprints or collaborative data-gathering mapathons - to funding opportunities and legislative measures. In this context, we present Geo-harmonizer, a three-year-old EU-supported initiative founded on open source and open data that has now reached maturity. In our talk, we present the potential of such initiatives to support the wider framework of environmental monitoring and reporting across Europe.
Geo-harmonizer stands for EU-wide automated mapping system for harmonization of Open Data based on FOSS4G and Machine Learning. The project unfolded between 2019 and 2022 and was co-financed by the European Union under the Connecting Europe Facility (CEF) Telecom Grant Agreement 2018-EU-IA-0095.
Since landing on Mars in February of 2021, the Mars 2020 Perseverance Rover has been exploring Jezero crater to investigate an ancient delta, looking for evidence of past microbial life and to better understand the geologic history of the region. In support of Terrain Relative Navigation (TRN), which enables the Mars 2020 spacecraft to autonomously avoid hazards (e.g., rock fields, crater rims) while landing, the USGS Astrogeology Science Center generated two precision mosaics: 1) the Lander Vision System (LVS) map, generated from three Context Camera (CTX) orthorectified images, which was used onboard the spacecraft as the "truth" dataset that TRN used to orient itself relative to the surface during Entry, Descent, and Landing; and 2) a High Resolution Imaging Science Experiment (HiRISE) orthomosaic, which was used as the basemap onto which surface hazards were mapped. The hazard map was also onboard the spacecraft and used by TRN to help identify the final, hazard-free landing location.
This talk will present the workflow used by the USGS Astrogeology Science Center to generate these critical data products including the use of FOSS4G tools like GDAL. Other open-source packages used will also be shared.
The Scientific and Technical Center for Building (CSTB) built the first French database of buildings and houses to address climate change challenges, supporting knowledge and decision-making for massive retrofits.
The pipeline factory intersects massive datasets (21 million buildings, >400 descriptors) and keeps adding new predictions and external datasets all the time. It allows running analyses and predictions for all the climate change related indicators, such as the relation between housing prices and energy performance, heat wave impact, solar potential, etc.
While the first versions were a direct image of the classical data scientist's approach - i.e. a massive dataframe driven by massive YAML config files and cryptic meta-templated scripts - ease of use and access performance soon became limiting factors. This is a major concern, since this dataset will be a long-term foundation of derived information systems.
Between the brute force approach of scaling resources up and the old fashioned « data diet » of normalization and optimization, the truth is not easy to find.
With a liberal dose of cartoonish humor, this talk will explore the benefits of normalizing back hugely redundant geographic datasets and exposing public interfaces (a public SQL model, APIs, vector tiles, OGC APIs) so that end users can analyze this dataset efficiently and the data manager team can rely on more stability using those good old database constraints.
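As a toy illustration of the normalization idea (the field names below are made up, not CSTB's actual schema), descriptors repeated on millions of rows can be factored into a lookup table referenced by a small foreign key:

```python
def normalize(rows, shared_keys):
    """Factor columns that repeat across many rows (e.g. the same
    municipality descriptors on every building) into a lookup table,
    keeping only a small foreign key on each row."""
    lookup, lookup_ids = {}, {}
    slim_rows = []
    for row in rows:
        shared = tuple(row[k] for k in shared_keys)
        if shared not in lookup_ids:
            lookup_ids[shared] = len(lookup_ids)
            lookup[lookup_ids[shared]] = dict(zip(shared_keys, shared))
        slim = {k: v for k, v in row.items() if k not in shared_keys}
        slim["shared_id"] = lookup_ids[shared]
        slim_rows.append(slim)
    return slim_rows, lookup

rows = [
    {"building": "A", "city": "Paris", "climate_zone": "H1"},
    {"building": "B", "city": "Paris", "climate_zone": "H1"},
    {"building": "C", "city": "Lyon", "climate_zone": "H1"},
]
slim, lookup = normalize(rows, ["city", "climate_zone"])
```

In a relational database the same factoring is expressed as a foreign key to a reference table, which is exactly where the "good old database constraints" come in.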
The SagtaMapdownloader is an online tool that allows teachers to teach the GIS concepts that are part of the curriculum in South Africa. This tool bridges the gap for students and teachers in their journey to become GIS specialists. This session will walk attendees through how QGIS and QGIS Server are used to give an in-depth understanding of all the map work that students are tested on during national exams. The map downloader allows geography students to explore and interact with different types of maps, namely hybrid, topographic and orthophoto maps. Other tools available for exploration include the ability to print maps with the magnetic declination for each map series and the ability to create profiles using the elevation data available on the system. The session will also walk attendees through the architecture behind the map downloader and how it will be improved in future iterations as the needs of teachers and students change.
Vector tiles are changing the way we create maps. Client-side rendering offers endless possibilities to the cartographer and has introduced new map design tools and techniques. Let’s explore an innovative approach to modern cartography based on simplicity and a comprehensive vector tiles schema.
After a short introduction to (or useful refresher on) vector tiles, take a tour of their graphic capabilities through a series of original map design compositions. A variety of cartographic examples will be illustrated during this talk, with a particular focus on map display and performance. Rendering issues and technical limitations will also be put in perspective with pragmatic solutions or design alternatives.
Get an overview of best practices for vector tiles cartography and learn about simple open-source recipes, towards advanced combinations of fills, patterns, fonts, and symbols. Selected layer parameters and style expressions will be discussed in a visual way and explained with basic syntax that you can take away.
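As a flavour of what style expressions do (a simplified Python re-implementation for illustration only, not any renderer's actual code), a linear interpolate-on-zoom expression maps the current zoom level to a style value between stops:

```python
def eval_interpolate(stops, zoom):
    """Evaluate a linear 'interpolate on zoom' expression, like the
    common ["interpolate", ["linear"], ["zoom"], z1, v1, z2, v2]
    pattern, simplified here to a list of (zoom, value) stops."""
    if zoom <= stops[0][0]:
        return stops[0][1]
    if zoom >= stops[-1][0]:
        return stops[-1][1]
    for (z0, v0), (z1, v1) in zip(stops, stops[1:]):
        if z0 <= zoom <= z1:
            t = (zoom - z0) / (z1 - z0)
            return v0 + t * (v1 - v0)

# a line width growing from 1 px at zoom 5 to 4 px at zoom 10
width = eval_interpolate([(5, 1.0), (10, 4.0)], 7.5)
```

Client-side renderers evaluate such expressions per frame, which is what makes smooth zoom-dependent styling possible.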
For over a decade now osm2pgsql has been the standard tool for importing OpenStreetMap data into a PostgreSQL/PostGIS database for rendering of raster tiles and many other use cases. Thanks to several improvements in the last years centered around a flexible configuration language and new geoprocessing capabilities, osm2pgsql is now also a great base for creating a vector tile toolchain. It can easily handle imports of a small country in a few minutes as well as scale to a planet-sized database with minutely updates from OSM.
The talk will introduce some of the new features that allow you to customize the database table layout and contents. It will outline the few steps needed to create your very own custom vector tiles based on OpenStreetMap data. We'll see how you can use the configuration language to clean up OSM data on import and prepare it for fast access with a vector tile server like T-Rex.
This talk provides concrete tips on how to improve your open data accessibility and discovery. We use real world analysis of what Europe has today, rather than specifications, guidelines, or theory.
We recently investigated the linkage between Metadata (CSW Dataset and Service Metadata records) and actual downloadable/viewable data (WFS, WMS, WMTS, and Atom). We also looked at other linkages between the documents (for example, metadata document links, "operatesOn" links, Inspire "ExtendedCapabilities", and other MetadataURL links).
Following links isn't as simple as just taking the given URL and resolving it - we will look at "fixing" the URL as well as setting request headers. We will also investigate comparing two different metadata documents (from different URLs) to see if they are "the same" even if they aren't really equivalent.
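As a minimal sketch of the URL-fixing idea (assuming a WFS endpoint; the example URL is hypothetical), missing GetCapabilities parameters can be appended while preserving whatever the URL already carries:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def fix_capabilities_url(url, service="WFS"):
    """Ensure a service endpoint URL carries the query parameters
    needed for a GetCapabilities request, preserving existing ones."""
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    # case-insensitive check so an existing SERVICE=wfs is not duplicated
    lower = {k.lower() for k in params}
    if "service" not in lower:
        params["service"] = service
    if "request" not in lower:
        params["request"] = "GetCapabilities"
    return urlunsplit(parts._replace(query=urlencode(params)))

fixed = fix_capabilities_url("https://example.org/geoserver/ows?map=test")
```

Real-world fixing also has to handle trailing `?`/`&` debris and servers that require specific Accept headers, which a URL rewrite alone cannot cover.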
If you are responsible for an INSPIRE catalogue or web service, attend this talk to learn what works (and does not work) based on real world analysis rather than theory. Or just attend to be sure you did not show up in the examples.
G3W-SUITE is a modular, client-server application (based on QGIS-Server) for managing and publishing interactive QGIS cartographic projects of various kinds in a totally independent, simple and fast way.
These components communicate through a set of REST APIs.
The application is compatible with QGIS 3.22 LTR and is based on strong integration with the QGIS API.
It is released on GitHub under the Mozilla Public License 2.0.
Many graphic and functional aspects of the WebGIS publication derive directly from the QGIS projects, first of all the general and OGC service capabilities.
The suite automatically inherits aspects related to the project (themes, 1:N relations, simple and atlas print layouts, legend filtering based on map content, layer display order and activation status, ...) and to individual layers (activation scale, queryability, published attribute fields, joined attributes, attribute forms, editing widgets, ...).
Of particular interest is the strong integration with the QGIS DataPlotly plugin.
QGIS projects can be published as WebGis services via direct upload (no plugins needed) on the Administration component.
The granular permission system and the subdivision of users into roles (individuals or groups) allow the management of services to be delegated to second- and third-level administrative users.
It is also possible to define consultation permissions on individual WebGis services and editing permissions on individual layers with different editing powers per user.
Finally, it is possible to define geographic and alphanumeric constraints (both in consultation and in editing) differentiated by individuals or groups of users. Alphanumeric constraints can be based on SQL language or QGIS expressions.
It is also possible to define for each layer aspects relating to the preparation of predefined searches, caches and downloads in various formats.
A particularly advanced function is related to online editing and to the possibility of easily creating web cartographic management systems by defining the various aspects at the level of the QGIS project.
This function (operating directly on the data through the QGIS API) allows multi-user editing thanks to a feature-lock system.
Editing works both at the geometry level (with intra- and inter-layer snapping) and at the attribute level (editing forms and the included widgets), also for layers connected by joins or 1:N relationships.
The talk will illustrate the innovations of the current and upcoming versions.
These include improved editing functions, user-based filters linked to the visibility of layers and attributes, the possibility of using QGIS projects based on embedded base projects, and the integration of QGIS's vector and raster Temporal Controller.
Online geographic analysis is made possible by the integration of Processing algorithms through dedicated APIs.
At the beginning of 2022, HOT_tech started a collaboration with Kathmandu Living Labs on the Tasking Manager. This followed our ambition to facilitate an open collaborative process in building and improving our open source technology by forming a ‘collective’. The idea of a ‘collective’ is to bring people with shared purpose together on shared ground to achieve a shared goal. A lot to be shared! In this context the shared goal is the development of the Tasking Manager.
Our vision is to create with, for, and by the community, making the product more impact-driven and user-friendly. The Tasking Manager has proven itself over the years to be not just software but a platform that brings together the individuals, communities and organizations who share a common goal: not only humanitarian effort and crisis response, but also identifying local resources and needs through mapping and data. So whether you are a mapper, a validator, a designer or an open source developer with an interest in the Tasking Manager, you can join the collective.
During this talk we will share our journey in building the collective. You will hear an open and honest reflection on what worked, what didn’t work, what we have learned and what we hope to do going forward. We want this talk at FOSS4G to open a conversation with other open source and geospatial communities on best ways for designing, creating and implementing open, diverse and inclusive spaces for thriving and healthy collectives and communities.
In the city of Ferrara (Italy) Dedagroup Public Services and other partners are involved in AIR-BREAK project (https://airbreakferrara.net/) to implement a set of geo-ICT tools for supporting an improved identification and monitoring of urban air quality.
Different datasets from heterogeneous sources have already been interconnected and integrated into the Spatial Data Infrastructure of the Municipality of Ferrara, based on standard (geo)protocols for data interchange, sourced from:
• 173 authoritative AQ monitoring stations from 3 regional environmental agencies, ARPAE Emilia-Romagna (52), ARPA Veneto (33) and ARPA Lombardia (88), for their own whole regional areas;
• 2 private AQ monitoring stations managed by private companies located in Ferrara;
• 14 new AQ monitoring stations installed by Lab Service Analytica (project partner) in the territory of Ferrara
For integrating and sharing dynamic hourly data about air quality and other themes, we adopted the OGC SensorThings API (STA) as the reference standard protocol.
STA is based on OGC/ISO 19156:2011 and provides an open and unified framework to interconnect IoT sensing devices, data, and applications over the Web. It is an open standard addressing both the syntactic and semantic interoperability of the Internet of Things. It complements existing IoT networking protocols such as CoAP, MQTT, HTTP and 6LowPAN. While those networking protocols address the ability of different IoT systems to exchange information, STA addresses their ability to use and understand the exchanged information.
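A SensorThings request is just a REST URL with OData-style query options; a hedged sketch of building one (the base URL and Datastream id below are hypothetical, not Ferrara's actual deployment):

```python
from urllib.parse import quote

def sta_observations_url(base, datastream_id, start, end, top=100):
    """Build a SensorThings API request for the Observations of one
    Datastream within a time interval, newest first."""
    flt = f"phenomenonTime ge {start} and phenomenonTime le {end}"
    return (
        f"{base}/Datastreams({datastream_id})/Observations"
        f"?$filter={quote(flt)}&$orderby=phenomenonTime%20desc&$top={top}"
    )

url = sta_observations_url(
    "https://example.org/FROST-Server/v1.1",   # hypothetical endpoint
    42,                                        # hypothetical Datastream id
    "2022-06-01T00:00:00Z",
    "2022-06-02T00:00:00Z",
)
```

The same entity model (Thing, Datastream, Observation) works identically whether the sensor is an authoritative ARPA station or a citizen's low-cost device, which is what makes STA a good fit for a multi-stakeholder infrastructure.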
In the AIR-BREAK project, the FROST solution (FRaunhofer Opensource SensorThings-Server) has been deployed in the GIS server farm of the Municipality of Ferrara to complement GeoServer and other technologies already providing services for viewing and accessing data based on OGC standards.
Indeed, among the final objectives of the project, the implementation of a standard-based Air Quality Data Infrastructure focuses on:
1. creating a bi-lateral and cooperative communication system between authorities and citizens about air quality and its perception;
2. defining and implementing a multi-stakeholder Data Infrastructure on Air Quality to integrate existing/heterogeneous/dynamic data from both authoritative sources and crowdsourced by citizens (Rete di Monitoraggio Ambientale Partecipativo );
3. providing such dynamic air quality data to the local authorities involved in monitoring (i.e. Municipality of Ferrara, ARPAE, Regione Emilia-Romagna) and to citizens, through standard APIs (STA) and under open licenses through the upcoming open data portal of Ferrara (June 2022);
4. testing and validating innovative solutions for air quality monitoring using in-situ IoT sensors and satellite remote sensing (e.g. Copernicus)
GeoRasterLayer is a LeafletJS Plugin for visualizing GeoTIFFs. This presentation will show live demos of new features and discuss the roadmap for the next couple of years.
- Support for nearly all projections, thanks to proj4-fully-loaded and epsg.io
- Super fast rendering thanks to simple nearest neighbor interpolation
- Use of web workers means seamless integration that doesn't block main thread
- Loads large GeoTIFFs, including files greater than a hundred megabytes
- Supports custom rendering including custom colors, directional arrows, and context drawing
- Doesn't depend on WebGL
- Edge Compute: Cool Stuff You Can Do With COGs in the Browser
- 2019 - Algorithm Walk-through: How to Visualize a Large GeoTIFF on Your Web Map
- Loading the georaster-layer-for-leaflet library along with GeoBlaze via a script tag. You can view the source code here and the live demo here.
- Combining two Cloud Optimized GeoTIFFs together to create an NDVI map. You can view the source code here and the live demo here.
- Identifying Wildfires from a Landsat 8 Scene. You can view the source code here and the live demo here.
- Visualizing Population COG. You can view the source code here and the live demo here.
- Display a COG that represents only one band of a Landsat scene. You can view the source code here and the live demo here.
- Display a COG with YCbCr Photometric Interpretation. You can view the source code here and the live demo here.
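The nearest neighbor interpolation mentioned among the features above is simple enough to sketch in a few lines of Python (an illustration of the idea only, not GeoRasterLayer's actual JavaScript implementation):

```python
def nearest_neighbor_resample(grid, out_w, out_h):
    """Resample a 2D grid to a new size by picking, for each output
    pixel, the nearest source pixel (no blending, which is why it
    is fast)."""
    in_h, in_w = len(grid), len(grid[0])
    out = []
    for y in range(out_h):
        src_y = min(int(y * in_h / out_h), in_h - 1)
        row = []
        for x in range(out_w):
            src_x = min(int(x * in_w / out_w), in_w - 1)
            row.append(grid[src_y][src_x])
        out.append(row)
    return out

small = [[1, 2], [3, 4]]
big = nearest_neighbor_resample(small, 4, 4)
```

Because each output pixel is a single lookup, this scales well even for very large rasters, at the cost of blocky upscaling.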
The project “Geopaparazzi Survey Server (GSS) and SMASH for Mobile Data Collection in a UN Peacekeeping mission (MONUSCO)” aimed to operationalize the use of GSS and SMASH to support field data collection in MONUSCO. This talk will cover the endeavours of this project, introducing the background, user requirements and use cases, the implementation of the online GSS at a UN central data centre as well as the project outcomes and recommendations.
MONUSCO GIS: The MONUSCO GIS Unit uses mobile devices, GPS, mini UAVs and satellite imagery from commercial providers to ensure the availability of detailed topographic and up-to-date mission operational data for its various GIS end products (cartographic maps, imagery, infographics, interactive web map applications, geodatabases). Having a central server for mobile data collection will ensure the constant availability of standardized, quality-controlled and up-to-date operational data, which is key to the provision of quality GIS products and services and, consequently, to data-driven decisions and actions for the Mission. Until this project, there was no centralised infrastructure for field mobile data collection readily available to all the geographically dispersed UN Mission users in the DR Congo; this project marked the genesis of it.
UN Open GIS Initiative: Established in March 2016, the initiative aims to identify and develop an open source GIS bundle that meets the requirements of UN operations, taking full advantage of the expertise of contributing partners (Member States, international organizations, academia, NGOs and the private sector). Geospatial Information Systems (GIS) have played a substantial role in providing timely and effective geospatial information products (maps and dynamic tools) to ensure that United Nations operations are equipped with suitable information to support UN mandates through informed planning and decision-making processes. The UN has been using proprietary GIS software for the past two decades. The rapid growth and development of open source GIS solutions presents technological potential, operational flexibility and financial benefits, as well as ease of access for UN operational partners and host nations.
WebAssembly's adoption is gaining traction, yet its potential is not fully utilized, especially for the processing and visualization of geodata in and outside of browsers. In this session I will give a technical introduction to WebAssembly, show its current state and adoption in FOSS4G projects, and talk about the ongoing advancement of the technology and possible future scenarios.
This will also be a hands-on session, where after showing how to get up and running, I will share my experience, tips and tricks collected while porting the latest versions of GEOS, PROJ, GDAL, SpatiaLite and osgEarth to the web platform.
The composition of existing OSGeo/FOSS C/C++ libraries in a portable and sandboxed form also brings many advantages outside of browsers. The talk will close with some demos about how WebAssembly enables us to build for the web, as well as for any other platform.
"3D Tiles Next" is a major update of the "3D Tiles" OGC Community Standard 1.0. 3D Tiles are designed by Cesium GS, Inc. for streaming massive heterogeneous 3D geospatial datasets. 3D Tiles Next is a set of extensions in the following areas:
- direct use of glTF models
- use of glTF for point clouds, plus glTF extensions for texture compression and additional 3D Tiles functionality
- semantic metadata stored per tileset, feature, vertex and more
- implicit spatial indexes (quadtree, octree, S2 subdivision)
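Implicit quadtree indexes typically address tiles by Morton (Z-order) index, obtained by interleaving the bits of the tile's x and y coordinates; a minimal sketch:

```python
def morton_index(level, x, y):
    """Interleave the bits of (x, y) to get the Morton (Z-order)
    index of a quadtree tile at the given level."""
    index = 0
    for bit in range(level):
        index |= ((x >> bit) & 1) << (2 * bit)
        index |= ((y >> bit) & 1) << (2 * bit + 1)
    return index
```

Because the index is computable from the coordinates alone, a client can request any tile without downloading an explicit tile tree, which is the point of implicit tiling.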
This presentation gives an overview of the current "3D Tiles" format and shows the new features in the "3D Tiles Next" specification. It also covers other existing 3D OGC (community) standards such as CityJSON and Indexed 3D Scene Layers (I3S).
Important further topics are:
- overview of viewers for 3D Tiles on the web and in native and mobile applications using game engines
- data processing tools for producing data in these formats
- building a community for creating 3D tiles for GIS and OSM data
Developing a plugin for QGIS is both as simple as one of the countless tutorials and as complicated as a software engineering job, given the dynamism of the project (maintenance requirements), the size of the APIs and the constraints that need to be taken into account (Windows, etc.).
At Oslandia, we create and maintain many plugins for our clients, which has led us to streamline their development... and especially their maintenance! Historically, many extensions were created using the amazing Plugin Builder and the underlying tool (pb_tool), but it no longer fits our needs.
We present here our QGIS Plugin Templater, based on Cookiecutter and the related work on developer tools (tests, documentation, code structure, formatting, linter...). We will also mention the other tools we are using or following closely (the 3Liz toolbelt, the other template from Gispo Coding...).
Yet Another Plugin Generator? Probably but we think it's worth it!
The introduction of technology in the education sphere has brought about improvement regarding the quality of education young and old individuals receive. A country’s development plays a huge factor in the quality of services its people receive, therefore not every country will receive the same quality of services.
Classroom GIS has changed how Geography as an academic subject is taught. It has sparked interest in the practical component of the subject and gives more understanding to the strong relationship between theory and map work. This leads to the concept of spatial thinking and how it has allowed geography educators around the world, some without basic GIS education, to see the importance of including more GIS concepts in the high school geography curriculum.
Several GIS software packages are available that educators can use to teach their students. But taking into account the availability of resources on the African continent, free software and hardware likely play a key role in getting GIS concepts included in the geography curriculum: they are affordable, and learning resources are readily available in the form of tutorials, documentation and more.
“This inclination towards GIS textbook lecturing has largely jeopardized the quality of GIS education”. - (Fleischmann and van der Westhuizen, 2020 found in The Journal of Geography Education in Africa)
Advocacy for including GIS practices and strategies in geography education across Africa has been documented, but has not received the necessary exposure from governments. The majority of GIS teaching has been textbook-based, making the introduction of GIS technology and education a frightening phase that educators may not want to engage in.
To overcome the fears behind understanding and grasping basic GIS concepts in the classroom, interactive GIS tutorials may help to remove these fears and make the adoption of GIS simple, especially within countries where service delivery (education services) is poor.
QGIS, for example, is open-source GIS software that has been around for 20 years and has shown tremendous improvement and upgrades throughout that time. Its user-friendly capabilities have improved, making it ideal for introducing more geography educators and learners to the software. It has tutorials and material suited to individuals from different walks of the profession.
The tutorials are interactive and allow first-time users to engage readily with the material, so that both learner and educator can understand it without getting overwhelmed. The key points are these: GIS can be implemented in the African geography school curriculum; open-source software is key to overcoming limitations such as a lack of resources; and geography educators are willing to take on GIS with sufficient training. More urgent research is needed on reliable and sustainable methods and practices for teaching GIS in a secondary school classroom.
Digital Earth Africa (DE Africa) is an operational platform with a mission to produce decision-ready products and to harness and increase the capacity of Earth observation users across the African continent. DE Africa’s mission is supported by a platform that involves delivery of data and services hosted in the public cloud. More than three Petabytes of Earth observation (EO) data covering the African continent are routinely updated and made available for free.
This talk will explore how cloud native geospatial technologies, such as the Cloud Optimised GeoTIFF data storage format and the Spatio-Temporal Asset Catalog (STAC) metadata standard, enable us to more easily organise, share and analyse these petabytes of data. We'll discuss how we work with the global EO community to develop standards that enable federation and interoperability. And we'll demonstrate how DE Africa has been able to build capacity across Africa, from enabling individuals to run scientific analyses through to assisting national space agencies in setting up their own platforms and supporting industry to deliver innovative products on top of our service.
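As a hedged illustration of why STAC makes petabytes of imagery easy to organise (the identifiers and URLs below are made up, not DE Africa's actual catalog), a STAC item is just a GeoJSON feature with standardized metadata and asset links:

```python
# a minimal STAC item sketch with assumed identifiers
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "scene_20220101_example",          # hypothetical id
    "geometry": {"type": "Point", "coordinates": [36.8, -1.3]},
    "bbox": [36.7, -1.4, 36.9, -1.2],
    "properties": {"datetime": "2022-01-01T08:15:00Z"},
    "assets": {
        "red": {
            "href": "https://example.org/data/red.tif",  # hypothetical COG
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
    "links": [],
}
```

Because every item follows the same schema, clients can search by space and time across federated catalogs and then stream only the needed byte ranges from each Cloud Optimised GeoTIFF asset.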
We will conclude with covering key lessons learnt in building the DE Africa platform, and look to the future, as we transition to African-led operations. The authors would like to acknowledge the support of the Digital Earth Australia team, international partners and many individuals who have helped the platform become realised.
The National Land Survey (NLS) of Finland maintains a registry of aerial imagery in Finland containing metadata of imagery since 1932. As part of the NLS' strategy of moving towards FOSS software, a novel registry management tool is being developed as a QGIS plugin. This talk describes the process of designing and implementing the new registry management software and explores the suitability of QGIS as a platform for creating highly customised spatial data management tools. While the registry management tool is developed for QGIS, the registry is migrated from Oracle to a PostGIS database, following a redesign of the data model.
In 2020, the NLS announced it aims to build its technological environment based on open source technologies. As a result, there is an ongoing effort of re-designing and implementing various existing processes and systems using open-source technologies. One such system is the national Aerial Image Registry. The registry is managed by a group of NLS employees and metadata is used in various workflows for publishing new data products from captured images and planning new aerial imaging missions.
The current registry management software is a technically dated solution based on Visual Basic 6 and an Oracle database. Key features of the new registry management tool include the ability to query the image registry and show the search results on a map, editing and archiving existing data in the registry, importing new data to the registry, creating data extracts to PDF maps and spatial formats, and validating plans for aerial imaging missions. QGIS provides a user-friendly platform that is already familiar to many GIS experts and that can be easily extended with plugins providing custom functionality and features.
The NLS has prior experience of designing and developing tailored QGIS plugins to support their unique workflows, including plugins for maintaining the topographic database of Finland and the national point cloud registry. These projects have been well received by users and developers alike, and the positive feedback has encouraged the NLS to continue developing tailored QGIS plugins for its specific needs.
Over the past several decades a significant number of geospatial datasets have been published on the Web. Many of those datasets were published through implementations of classic OGC Web Service standards. As time has passed, the architecture of web applications has evolved, propelled by new Web and Internet standards. This evolution of web application architecture has led to a revolution in how geospatial datasets are published on the Web. To ensure that the revolution in geospatial data publication has interoperability at its core, the Open Geospatial Consortium (OGC) has developed a series of Web Application Programming Interface (API) standards.
The OGC API suite of standards is a family of specifications that have been designed as modular building blocks that spatially enable Web APIs that offer access to spatial data and implementations of geospatial algorithms. These revolutionary APIs make location information more accessible than ever before through the use of the OpenAPI specification for describing interfaces. The use of the OpenAPI specification means that implementations of OGC API Standards can describe themselves to levels of detail previously unachievable through the classic OGC Web Service standards. Such an ability to self-describe is significant because it has enabled software developers from a variety of disciplines to implement OGC API Standards to address the needs of their communities.
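A small sketch of the self-description idea: an OGC API landing page is a plain JSON document whose `links` array tells a client where to find the OpenAPI definition, the conformance declaration, and the data itself. The link relations (`service-desc`, `conformance`, `data`) come from OGC API - Features Part 1; the service title and URLs below are placeholders.

```python
import json

# A minimal OGC API - Features landing page (link relations per
# OGC API - Features Part 1; URLs are placeholders).
landing_page = json.loads("""
{
  "title": "Demo feature service",
  "links": [
    {"rel": "self", "type": "application/json", "href": "https://example.org/"},
    {"rel": "service-desc",
     "type": "application/vnd.oai.openapi+json;version=3.0",
     "href": "https://example.org/api"},
    {"rel": "conformance", "type": "application/json",
     "href": "https://example.org/conformance"},
    {"rel": "data", "type": "application/json",
     "href": "https://example.org/collections"}
  ]
}
""")

def link_by_rel(document, rel):
    """Return the href of the first link with the given relation, if any."""
    for link in document.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

# The "service-desc" link points at the OpenAPI definition through which
# the API describes itself to clients and tooling.
print(link_by_rel(landing_page, "service-desc"))  # https://example.org/api
```

Following that `service-desc` link yields a full OpenAPI document, which is what lets generic tooling (code generators, interactive API consoles, validators) work against any conforming implementation without service-specific knowledge.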
This presentation will provide an overview of the background, current status, and future plans for the development of OGC API Standards. The presentation will describe plans for the development of resources that improve the ability of developers to implement OGC API Standards. The presentation will also present a selection of case studies of open source software that has been implemented or enhanced during OGC Innovation activities such as testbeds, hackathons, and sprints (including the 2022 Joint Code Sprint organised by OGC, OSGeo and the Apache Software Foundation (ASF)).
The Humanitarian OpenStreetMap Team (HOT) administers several free-software applications with varied deployment architectures on multiple cloud platforms. As an organization that values openness and transparency, we actively seek out open source tools that help us enact our principles of open participation and collaboration. In that vein, we chose Terraform as the tool for managing infrastructure at HOT.
Drawing on HOT's experience managing OSM Galaxy infrastructure with Terraform, this talk describes how we use Terraform to manage infrastructure at scale and to improve DevOps processes around infrastructure reproducibility, security, cost, and change management.
We will present these advantages in the context of our own team's experiences and the challenges we faced trying to build a scaling technology stack and compare Terraform with popular Infrastructure as Code (IaC) alternatives.
The talk will use OSM Galaxy API (galaxy.hotosm.org) as a case study to describe the process of porting infrastructure to Terraform in order to manage infrastructure continuously at enterprise scale - which is particularly relevant for non-profits and organizations that develop compute-intensive technology.
geotiff.js is a reusable library to abstract remote (Geo)TIFF files. With it, both rich visualization frontends and statistical or data-access services can be implemented, as it exposes the geospatial metadata and the full-spectrum raster values of the original data, instead of only 8-bit RGB(A).
Due to its file abstractions, it is possible to only read the relevant portions of a file, thus greatly reducing bandwidth and response times. This effect can be further increased when reading Cloud Optimized GeoTIFF (COG) files.
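The reason partial reads work so well with COGs can be sketched with a little arithmetic: the file carries an internal tile index (a byte offset and length per tile), so a client can translate a pixel window into a handful of byte-range requests instead of downloading the whole file. The image size, tile size, and byte offsets below are invented for illustration; geotiff.js performs the equivalent bookkeeping from the real TIFF tags.

```python
# Hypothetical 1024 x 1024 image stored as 256-pixel tiles.
TILE = 256            # tile width/height in pixels
TILES_PER_ROW = 4

# Hypothetical tile index as stored in the TIFF tags: (byte offset, length).
tile_index = {i: (1000 + i * 5000, 5000) for i in range(TILES_PER_ROW ** 2)}

def ranges_for_window(x0, y0, x1, y1):
    """Byte ranges needed to read the pixel window [x0:x1) x [y0:y1)."""
    ranges = []
    for ty in range(y0 // TILE, (y1 - 1) // TILE + 1):
        for tx in range(x0 // TILE, (x1 - 1) // TILE + 1):
            offset, length = tile_index[ty * TILES_PER_ROW + tx]
            ranges.append((offset, offset + length - 1))
    return ranges

# A 300 x 300 pixel window touches only 4 of the 16 tiles, so only
# 4 small range requests are needed rather than the full file.
print(ranges_for_window(100, 100, 400, 400))
```

In practice each of those ranges maps onto an HTTP `Range` request, which is where the bandwidth and latency savings of the COG layout come from.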
The library tries to be as feature complete as possible in terms of file layout, raster cell values, RGB transformation, image data compression and metadata.
This talk will detail the features of geotiff.js, as well as its most recent additions. Additionally, it will shed light on the greater ecosystem of geospatial libraries and applications in which geotiff.js is embedded or whose foundation it provides.
A team of experienced mappers and language experts at Meta has reviewed a dataset of major map features from OpenStreetMap (OSM) and used the curated results in one of their validation processes to check for quality issues in the Daylight and OSM maps. The curated dataset serves as a reference library of major map features and their key information. During validation, this library is compared against Daylight and OSM data to look for suspicious changes to features included in the library. With such a reference library, the Meta Basemap Team is able to maintain stable quality on major map features in an efficient manner. To ensure the data in the library is up to date and comprehensive, systematic approaches for continuously improving it have also been developed. In our talk, we will share more details about this curated library and these processes, and how we maintain the freshness of the library.
Traffic Safety Application:
In the data preprocessing phase, data quality was assessed and dirty records in the accident data were identified using desktop GIS applications such as QGIS, SAGA GIS, JOSM, and GeoDa. Open-source GIS applications and programming languages such as R and Python were used for data cleaning, exploratory analysis, and data processing, and statistics for the accident data were extracted. An open-source map server, GeoServer, is used for sharing, editing, and organizing the accident maps and base maps produced in the GIS applications.
The Python programming language was used on the server side of the project. For geospatial data analysis, accident points were verified using the GeoPandas and Shapely libraries. A PostgreSQL database with the PostGIS extension was used to store geo-referenced accident data; PostGIS adds spatial capabilities to PostgreSQL so it can store, query, and manipulate spatial data. For server-side scripting, GeoAlchemy (an extension of SQLAlchemy) is used for working with spatial databases and geospatial queries.
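A minimal sketch of the kind of point verification described above, using Shapely directly. The district polygon and accident coordinates are invented; in the project these would be loaded from the PostGIS database via GeoPandas rather than hard-coded.

```python
from shapely.geometry import Point, Polygon

# Hypothetical analysis region and recorded accident coordinates.
district = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
accidents = [Point(2, 3), Point(5, 5), Point(12, 1), Point(-1, 4)]

# Keep only points that fall inside the district boundary; points outside
# the boundary are flagged as suspect records for manual review.
inside = [p for p in accidents if district.contains(p)]
suspect = [p for p in accidents if not district.contains(p)]

print(len(inside), len(suspect))  # 2 2
```

The same containment test can be pushed down into the database with PostGIS (e.g. an `ST_Contains` filter), which is preferable once the accident table grows large.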
On the client side, Turf was used for spatial operations: it is a geospatial engine that includes spatial operations and helper functions. Mapbox GL, a WebGL-powered library, is used for interactive vector maps in the web application. To render more than 100k accidents with high performance, the WebGL-powered geospatial visualization framework deck.gl was used. nebula.gl provides geospatial drawing and editing tools for lines, polygons, etc., and was added for selecting analysis regions on the map. Nominatim, a geocoding tool for OSM data, allows users to find accident locations from an address.
OpenSource GIS Tools:
● GIS software for data visualization, processing and analysis:
QGIS, GRASS GIS, JOSM, SAGA GIS, OrbisGIS
● GIS Servers: GeoServer, MapServer, Mapnik, MapGuide, QGIS Server
● Backend (Python): GeoPandas, Shapely, PostGIS
Asset Management Application:
The “Image-Based Information Management System” application aims to build an information management system and a digital 3D road inventory map for the 91,126 km road network under the responsibility of the General Directorate of Highways. With this project, images of the highways (approximately 21 million) are captured, and the collection, digitization, storage, and presentation of road-object information are carried out through these images, enabling the asset management, maintenance, planning, project design, measurement, and evaluation processes determined by the administration.
OpenSource GIS Tools:
● GIS software for data visualization, processing and analysis: QGIS
● GIS Servers: GeoServer
● Backend: PostGIS, Turf, @swimlane/ngx-charts
● Client :OpenLayers
• Data of semi-HD map quality was produced.
• The entire road network can be monitored and analyzed with panoramic images.
• The vertical and horizontal profile of the entire highway network has been created.
• 2,995,845 point inventories and 991,820 km of line inventories across 42 inventory categories were produced.
• Detection of inventory deficiencies in the field and maintenance processes can be followed.
• Road inventory data needed can be shared with other public institutions.
The Africa Knowledge Platform is a one-stop-shop for the European Commission (EC)’s scientific knowledge on Africa. Developed by the Joint Research Centre, this open, visual, and interactive platform brings together a wealth of geospatial scientific information on Africa’s social, economic, territorial and environmental development. The Africa Knowledge Platform is developed using open-source geospatial technologies and the whole platform is open to the public, so that its potential value extends to academia and to stakeholders from the public, private and non-profit sectors in both Europe and Africa. The platform brings together information across 62 topics within 10 broad themes: natural resources, sustainable growth and jobs, food and agriculture, climate change, human demography, health, security, economy, energy and digital transformation. This covers all 17 of the United Nations Sustainable Development Goals. We will be taking a tour and in-depth look at the workflow used to develop and publish the platform and the technologies behind it.
It is essential to understand what future epidemic trends will be, as well as the effectiveness and potential impact of public health intervention measures. The goal of this research is to provide insight that would support public health officials towards informed, data-driven decision making. We present spatialEpisim, an R Shiny app (https://github.com/ashokkrish/spatialEpisim) that integrates mathematical modelling and open-source tools for tracking the spatial spread of COVID-19 in low- and middle-income countries (LMICs).
We present spatial compartmental models of epidemiology (e.g. SEIR, SEIRD, SVEIRD) to capture the transmission dynamics of the spread of COVID-19. Our interactive app can be used to output and visualize how COVID-19 spreads across a large geographical area. The rate of spread of the disease can be explored by changing the model parameters and human mobility patterns.
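The compartmental idea behind these models can be sketched as a minimal, non-spatial SEIR simulation: individuals move from Susceptible to Exposed to Infectious to Recovered at rates governed by three parameters. This is a forward-Euler sketch with illustrative, uncalibrated parameter values, not the spatialEpisim implementation, which additionally models spread across a geographic grid.

```python
def seir(S, E, I, R, beta, sigma, gamma, days):
    """Forward-Euler SEIR integration with daily time steps.

    beta:  transmission rate (S -> E)
    sigma: 1 / mean incubation period (E -> I)
    gamma: 1 / mean infectious period (I -> R)
    """
    N = S + E + I + R
    history = [(S, E, I, R)]
    for _ in range(days):
        new_exposed = beta * S * I / N   # new infections
        new_infectious = sigma * E       # end of incubation
        new_recovered = gamma * I        # recoveries
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((S, E, I, R))
    return history

# Illustrative run: 10 initial cases in a population of one million.
run = seir(S=999_990, E=0, I=10, R=0,
           beta=0.4, sigma=1 / 5, gamma=1 / 10, days=120)

# Each step only moves people between compartments, so the total
# population is conserved throughout the simulation.
assert all(abs(sum(state) - 1_000_000) < 1e-3 for state in run)
print(f"peak infectious: {max(state[2] for state in run):.0f}")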
First, we run the spatial simulations under the worst-case scenario, in which there are no major public health interventions. Next, we account for mitigation efforts including strict mask wearing and social distancing mandates, targeted lockdowns, and widespread vaccine rollout to vaccinate priority groups.
As a test case, Nigeria is selected, and the projected numbers of new infections and deaths are estimated and presented. Projections of disease prevalence with and without mitigation efforts are presented via time-series graphs for the epidemic compartments.
Predicting the transmission dynamics of COVID-19 is challenging and comes with a lot of uncertainty. In this research we seek primarily to clarify mathematical ideas, rather than to offer definitive medical answers. Our analyses may shed light more broadly on how COVID-19 spreads in a large geographical area with places where no empirical data is recorded or observed.
The growth of OpenStreetMap (OSM) communities in Tanzania is taking shape as most organizations, institutions, and communities in general are recognizing the importance of using and contributing to OpenStreetMap data. To support the growth of OSM communities in Tanzania, OpenMap Development Tanzania, with its partner the Humanitarian OpenStreetMap Team (HOT), awarded microgrants to seven OSM communities in Tanzania.
The grants provided are supporting these communities to leverage the use of OSM and mapping to help solve different community challenges by facilitating training/workshops, purchasing tools and equipment, supporting staff, and other logistics. Most communities work in peripheral regions with a minimal understanding and use of open data and mapping technologies like OpenStreetMap, OpenDataKit, QGIS, etc.
The first phase of project implementation ended with great successes and lessons learned from these communities. The general successes of the microgrants so far include the following:
Transitioning communities from traditional data collection to digital open-source tools such as ODK and KoboCollect has greatly improved data management and analysis. For instance, Agri Thamani Foundation and LAVISEHA are among the microgrant recipients who were new to OpenStreetMap and other open mapping technologies for generating open data; they now use different tools, i.e. KoboToolbox, OpenDataKit, OSM, and Tasking Manager, to collect data for their interventions in nutrition and gender-based violence.
Connecting OSM communities in Tanzania and encouraging collaboration in various tasks and opportunities. The grant provided an opportunity for local OSM communities in Tanzania to work together; a good example is Hope for Girls and Women in Tanzania training LAVISEHA on how communities can use open-source tools like ODK to report GBV cases for rescue.
Creating awareness about OSM and other communities through participating and presenting in conferences and events such as the State of the Map Africa, Community webinars, etc.
Over 16,000 building footprints were mapped and 380 km of roads were uploaded to OpenStreetMap by grantees.
Supporting the growth of YouthMappers chapters: three grant recipients are YouthMappers from three different universities who are using the grants to solidify their chapters and gain exposure to projects while also applying for other funding to expand their work.
Although the microgrants are expected to end in June 2022, OMDTZ is committed to supporting these communities through training and different engagements to ensure they achieve their goal of using open data for decision-making. Together we can put more people on the map by supporting all communities in mapping.
OSGeo has existed in Oceania in various forms for quite a while now. Some of the major contributors to projects such as QGIS are based in Oceania and open geo events have been organised in the region for many years. It is only in more recent times however that we have started to support these efforts through the creation of a local chapter. When a group of us came together to organise Oceania’s first regional FOSS4G SotM conference in Melbourne, 2018, it became clear structure was needed to sustain the momentum we had created.
Structure was established by forming an entity and completing all the tasks that go along with that. This included creating a constitution, financial policies, forming a board of directors, establishing a membership policy, consulting with the community, working out what the entity’s primary purpose is and so much more. We’ve made plenty of mistakes along the way, but we’ve also learned a lot. There are many successes too, such as the establishment of a Microgrant program to support initiatives across the region, the continuation of annual regional conferences, funding travel so that people without the financial means to do so could attend conferences, and the welcoming of new members from far and wide.
This talk is an insight into the journey of OSGeo Oceania. It is not meant to be a how-to guide or a pat on the back, but rather a chance to facilitate discussion among the FOSS4G community so that we can find ways to support the use and understanding of open geospatial software in our respective regions.
Closing session with the Sol Katz Award 2022 announcement, the FOSS4G 2023 presentation, and much more
The OSGeo Annual General Meeting
This year too, after the online conference, there will be an OSGeo community sprint.
Participation is free of charge and anyone involved or interested in getting involved in OSGeo community projects is welcome.
This year we will try to offer an "hour with the developer" format, in which we ask seasoned community members to donate time to welcome new members, introduce them to different projects, or guide them through setting up the development environment or translation tools so they can get started on their FOSS4G journey.
You can choose to contribute to one or more projects. The sky is the limit. There’s always plenty to do – and it’s not all about programming. Translation, documentation, feedback, discussions, and testing are very important for projects! Conference registration is not a prerequisite for participation in the code sprint.
Please register here