BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//talks.osgeo.org//
BEGIN:VTIMEZONE
TZID:-03
BEGIN:STANDARD
DTSTART:20000101T000000
RRULE:FREQ=YEARLY;BYMONTH=1
TZNAME:-03
TZOFFSETFROM:-0300
TZOFFSETTO:-0300
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-RVNL7M@talks.osgeo.org
DTSTART;TZID=-03:20241204T103000
DTEND;TZID=-03:20241204T110000
DESCRIPTION:# Poster Session I\n\n## Analyzing Randomness in Point Patterns
 : An Algorithmic Approach\n\n- Tony Sampaio\, Earth Science Department\, D
 epartment of Geography\, Federal University of Paraná\, Brazil\; Spatial 
 Pattern Analysis and Thematic Cartography Lab\, Federal University of Para
 ná\, Brazil\n- Cláudia M. Viana\, Centre of Geographical Studies\, Insti
 tute of Geography and Spatial Planning\, University of Lisbon\, Portugal\;
  Associate Laboratory Terra\, Lisbon\, Portugal\n- Eduardo Gomes\, Centre 
 of Geographical Studies\, Institute of Geography and Spatial Planning\, Un
 iversity of Lisbon\, Portugal\; Associate Laboratory Terra\, Lisbon\, Port
 ugal\n- Silvana Camboin\, Geodetic Science Graduate Program\, Department of G
 eomatics\, Federal University of Paraná\, Brazil\; Open Geospatial Lab\, 
 Federal University of Paraná\, Brazil\n- Fábio Breunig\, Earth Science D
 epartment\, Department of Geography\, Federal University of Paraná\, Braz
 il\; Spatial Pattern Analysis and Thematic Cartography Lab\, Federal Unive
 rsity of Paraná\, Brazil\n- Edenilson Nascimento\, Earth Science Departme
 nt\, Department of Geography\, Federal University of Paraná\, Brazil\; Sp
 atial Pattern Analysis and Thematic Cartography Lab\, Federal University o
 f Paraná\, Brazil\n- Elaine de Cacia de Lima Frick\, Earth Science Depart
 ment\, Department of Geography\, Federal University of Paraná\, Brazil\; 
 Spatial Pattern Analysis and Thematic Cartography Lab\, Federal University
  of Paraná\, Brazil\; Geography Teaching Laboratory\, Federal University 
 of Paraná\, Brazil\n- Jorge Rocha\, Centre of Geographical Studies\, Inst
 itute of Geography and Spatial Planning\, University of Lisbon\, Portugal\
 ; Associate Laboratory Terra\, Lisbon\, Portugal\n\n## Performance Benchma
 rking for Resource Allocation Optimization in GeoNode Ecosystems on Kubern
 etes Clouds\n\n- Marcel Wallschläger\, Leibniz Centre for Agricultural La
 ndscape Research (ZALF)\, Müncheberg\, Germany\n- Igo Silva de Almeida\, 
 Leibniz Centre for Agricultural Landscape Research (ZALF)\, Müncheberg\, 
 Germany\n- Xenia Specka\, Leibniz Centre for Agricultural Landscape Resear
 ch (ZALF)\, Müncheberg\, Germany\n\n## Applying Spatio-Temporal Analysis 
 for Data Mining on Shooting Data\n\n- Felipe Sodré Mendes Barros\, Facult
 ad de Ciencias Forestales\, Universidad Nacional de Misiones\, Argentina\n
 - Terine Husek Coelho\, Instituto Fogo Cruzado\, Rio de Janeiro\, Brazil\n
 - Iris Rosa\, Instituto Fogo Cruzado\, Rio de Janeiro\, Brazil\n- Davi San
 tos\, Instituto Fogo Cruzado\, Rio de Janeiro\, Brazil
DTSTAMP:20260422T231510Z
LOCATION:Main Hall
SUMMARY:Poster Session I
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/RVNL7M/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-VHU3TT@talks.osgeo.org
DTSTART;TZID=-03:20241204T140000
DTEND;TZID=-03:20241204T143000
DESCRIPTION:Standards in the Geographic Information Systems (GIS) domain ar
 e crucial for ensuring interoperability\, data consistency\, and efficienc
 y across diverse applications and platforms. As in other domains\, they ar
 e necessary to ensure that different GIS software can work together. Moreo
 ver\, the continuous improvement and development of these standards are es
 sential to keep pace with evolving technologies and user requirements\, en
 hancing the overall functionality and usability of GIS. By adhering to and
  advancing these standards\, the GIS community can foster innovation\, sup
 port informed decision-making\, and address complex geospatial challenges 
 more effectively.\nTherefore\, it’s important to be conservative\, using w
 idely supported standards\, but also open to emerging technologies\, prepa
 ring for the future global leap\, and welcoming it proactively. The design
  of the eMOTIONAL Cities Spatial data infrastructure (SDI) is built around
  this approach (Simoes\, J.\, and Cerciello\, A. (2022). Serving Geospat
 ial Data Using Modern and Legacy Standards: a Case Study from the Urban He
 alth Domain. The International Archives of Photogrammetry\, Remote Sensing
  and Spatial Information Sciences\, 48\, 419-425)\n\nThe environment we li
 ve in affects our mental health and well-being. The eMOTIONAL Cities
  project has set out to understand how the natural and built environment c
 an shape the feelings and emotions of those who experience it. It does so 
 with a cross-disciplinary and data driven approach\, which resulted in num
 erous datasets from more “traditional” GIS-based fields like Urban Pla
 nning\, as well as other fields like Neuroscience. The common denominator 
 between all these datasets is the geospatial dimension. One of the main go
 als of the project is to assemble these disparate datasets in a common SDI
 \, in order to enable scientists and eventually the general public\, to di
 scover and access the data for the purposes of analysis and decision makin
 g.\nThe OGC API is a family of modern Standards from the Open Geospatial C
 onsortium (OGC)\, which leverage modern web technologies like OpenAPI\, RE
 ST and JSON (Percivall\, G. (2017) OGC® Open Geospatial APIs - White Pape
 r). Although very appealing to web developers\, they are relatively new co
 mpared to the OGC Web Services (OWS) like WFS\, WMS or WMTS\, which have b
 een in the GIS domain for more than twenty years. When we started the proj
 ect\, we were unsure if it would be possible to set up an SDI\, purely bas
 ed on OGC API\, both because of the maturity of the Standards and the avai
 lability and Technology Readiness Level (TRL) of implementations. This has
  led us to initially create an SDI that contains both a modern and legacy 
 stack (Simoes\, J.\, and Cerciello\, A. (2022). Serving Geospatial Data Us
 ing Modern and Legacy Standards: a Case Study from the Urban Health Domain
 . The International Archives of Photogrammetry\, Remote Sensing and Spatia
 l Information Sciences\, 48\, 419-425). However\, in the past two years we
  have seen huge developments in OGC API Standards\, with many Standards ha
 ving parts approved and implementations catching up on those developments.
  One implementation in particular\, pygeoapi (https://pygeoapi.io/)\, is e
 xemplary in terms of the Standards development process\, by participating a
 ctively in the OGC Code Sprints\, by being an Early Implementer (EI) and e
 ven a Reference Implementation (RI) of several OGC API Standards. It embod
 ies the new OGC paradigm\, where the development of the Standard goes hand
  in hand with the development of implementations\, resulting in published 
 Standards which are market ready.\nThe eMOTIONAL Cities SDI demonstrates t
 hat it is now possible to share geospatial data using OGC API with Free an
 d Open Source Software (FOSS). We have selected Standards to enable the pu
 blication of feature data (OGC API - Features)\, tiles of geospatial infor
 mation (OGC API - Tiles)\, sensor data (SensorThings API) and metadata (OG
 C API - Records). Although that was not the case when we started this work
 \, they are now all approved Standards. The SDI uses a FOSS stack\, with
  pygeoapi at its core and several supporting services. In order to ease
  the deployment and reproducibility of the system\, the services we
 re virtualized into docker containers and orchestrated using docker-compos
 e. This resulted in a system that is infrastructure agnostic and can be de
 ployed in any Cloud Service Provider (CSP) in a matter of minutes. The cod
 e is available on GitHub with an MIT license (https://github.com/emotional
 -cities/openapi-sdi) and released in Zenodo with DOI 10.5281/zenodo.659117
 9. We have also set up pipelines to enable both humans and machines to ing
 est data and metadata into the SDI and extensive documentation about how t
 o access the SDI\, using clients such as QGIS\, MapStore or jupyter notebo
 oks.\nThe SDI is live at: http://emotional.byteroad.net/ and it includes 9
 7 datasets from five different cities (Lisbon\, London\, Copenhagen\
 , Tartu and Lansing). It has collections that characterize the physical en
 vironment (e.g.: Normalized difference vegetation index (NDVI)\, Annual me
 an NO2 concentration)\, the built environment (e.g.: Buildings with repair
 needs ratio\, Average age of buildings)\, socio-economic aspects (e.g.: Are
 a Deprivation Index\, Number of People Travel by Bicycle to Work) and heal
 th data (e.g.: Crude percent of adults with depression\, Mortality rate)\,
  as well as results of experiments (e.g.: London outdoor walk test data: A
 ir Quality Temperature\, London outdoor walk test data: Sound Pressure lev
 els). The data can be discovered and queried in the OGC API - Records sear
 chable catalog: https://emotional.byteroad.net/catalogue\nIn this article 
 we would like to share our journey during the process of implementing the 
 SDI\, and how we navigated the technological and human challenges of adopt
 ing emerging technologies in constant development. We hope the results of 
 this project can encourage scientists\, urban planners and other experts w
 ho deal with geospatial data in some way\, to embark on a similar journey 
 and contribute towards making geospatial information FAIR\; i.e.: Findable
 \, Accessible\, Interoperable and Reusable. At the same time\, we hope to 
 promote a family of GIS standards (i.e.: OGC API) that seeks to mitigate t
 he steep learning curve that has always characterized them.
DTSTAMP:20260422T231510Z
LOCATION:Room V
SUMMARY:A Spatial Data Infrastructure using Modern Standards: Lessons Learn
 ed from the eMOTIONAL Cities Project - Antonio Cerciello\, Joana Simoes
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/VHU3TT/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-9GXCUV@talks.osgeo.org
DTSTART;TZID=-03:20241204T140000
DTEND;TZID=-03:20241204T143000
DESCRIPTION:With the increasing presence of spatialized data and
  information in people's daily lives\, the constant need to make
  data-driven decisions\, and the expansion of artificial intelligence
  technologies in society\, this work seeks a technological solution
  focused on simplifying geospatial analyses. The goal is to democratize
  access to and understanding of these resources for common users without
  the need for advanced knowledge of the specific geographic information
  tools currently most used.\nTo this end\, a system was developed that
  transforms natural language questions directly into SQL queries\,
  specifically using PostgreSQL/PostGIS (Li and Jagadish\, 2014\; Ramsey\,
  2007). This system is based on a chat model built on Gemini\, which
  interprets user queries and generates the corresponding SQL queries. The
  back-end API executes these queries and returns the results\, which are
  visualized in an intuitive and interactive graphical interface. This
  allows for dynamic exploration of geospatial data\, facilitating the
  analysis and visualization of complex information without the need for
  advanced technical knowledge in SQL. The integration of natural language
  processing (NLP) and geospatial database queries represents a significant
  innovation. This system reduces the learning curve associated with
  traditional GIS tools\, making the technology accessible to a broader
  audience (Craglia et al.\, 2012). By using the Gemini model\, the system
  can understand and process a wide range of natural language inputs\,
  translating them into precise SQL queries that interact with the
  geospatial database.\nTo demonstrate the system's effectiveness\, a case
  study was conducted using a database composed of 20 tables containing
  data released by the National Water Agency (ANA)\, the Brazilian
  Institute of Geography and Statistics (IBGE)\, and the Energy Research
  Company (EPE)\, which were adjusted for reading by the system. This
  database includes a data dictionary that provides detailed information
  on what each value represents and its corresponding context.\nThe
  results were evaluated based on the accuracy of answers given to 192
  questions posed within the context of the case study. Out of these 192
  questions\, 167 answers were correct\, yielding an accuracy rate of 87%
  and allowing detailed visualization of the geometries and information
  required by the user's query in the developed interface. Enhanced
  accessibility for non-technical users is one of the most significant
  benefits identified in this work\, as there is no need for in-depth
  technical knowledge of spatial data filters\, in addition to the reduced
  query time and the ability to generate valuable insights from these data
  sets.\nThis work also highlights the importance of an intuitive and
  interactive graphical interface. The interface allows the visualization
  of layers and tables resulting from the query\, enabling users to
  dynamically explore\, filter\, and manipulate the data according to
  their needs\, and obtain insights that would be difficult to achieve
  without advanced knowledge of GIS tools (Li and Wang\, 2013).\nIt is
  important to note that this work represents a first step in an
  initiative for this technological solution model implementing Artificial
  Intelligence\, from which it was possible to identify several points of
  improvement not only in the model but also in the databases and their
  construction and acquisition process. The case study demonstrated that
  the system can accurately interpret and execute user queries\, providing
  reliable and relevant results. With this approach\, new possibilities
  are opened for the exploration and analysis of geospatial data\,
  enhancing decision-making based on information obtained from various
  areas such as environmental monitoring\, urban planning\, and
  territorial planning. Future work should focus on improving the
  system's capabilities\, expanding its application domains\, and
  exploring new ways to integrate emerging technologies\, continuing to
  drive innovation in this critical area.
DTSTAMP:20260422T231510Z
LOCATION:Room II
SUMMARY:Democratizing AI\, making geotechnology accessible to all - Thomaz 
 Franklin de Souza Jorge\, Cauã Guilherme Miranda\, João\, Igor Augusto d
 a Costa Nunes\, Lucas Alvarenga Lopes\, Gabriel Viterbo
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/9GXCUV/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-3YCWGQ@talks.osgeo.org
DTSTART;TZID=-03:20241204T140000
DTEND;TZID=-03:20241204T143000
DESCRIPTION:Tropical forests host half of Earth’s biodiversity (Dirzo & R
 aven\, 2003)\, 62% of global terrestrial vertebrate species (Pillay et al.
 \, 2022)\, and play a crucial role as a carbon sink (Mitchard\, 2018). Des
 pite their importance\, every year\, 3 to 4 million hectares of primary tr
 opical forests are lost\, mainly in Brazil\, Indonesia\, and the Democrati
 c Republic of Congo (DRC) (Hansen et al.\, 2013\; Seymour\, 2022)\, contri
 buting to 22% of total greenhouse gas (GHG) emissions worldwide along with
  agriculture\, forestry and other land use (AFOLU) (IPCC\, 2023).\n\nPreve
 nting deforestation requires understanding its root causes\, particularly 
 the capital availability to the farm sector. In many tropical countries\, 
 rural credit is available as loans at subsidized interest rates to improve
  agricultural production or support agricultural costs (Servo\, 2019). How
 ever\, these loans may be leading to more deforestation. Some studies have
  analyzed this issue on a municipal scale\, but few peer-reviewed studies 
 have linked rural credit to individual property-scale deforestation. Recen
 tly\, the NGO Greenpeace (Greenpeace\, 2024) and the Climate Policy Initia
 tive (Mourão et al.\, 2024) published two studies showing the relationshi
 p between rural credit and deforestation. Understanding this relationship 
 can improve public policies to prevent deforestation from happening even b
 efore it starts.\n\nMethods\nIn this study\, I used open data and FOSS4G t
 o quantify the amount of Rural Credit released to rural properties that co
 mmitted Deforestation. The datasets came from different open data sources.
  The Central Bank of Brazil provides data on rural credit on the SICOR Sys
 tem. The National Institute of Space Research (INPE) provides data on defo
 restation in the Terrabrasilis system. The Brazilian Forest Service provid
 ed data for each property's Rural Environmental Registry (CAR)\, providing
  their boundaries. The Brazilian Institute of Geography and Statistics (IB
 GE) provides data for administrative boundaries (state and municipality).\
 n\nUsing the Terra library in CRAN-R\, I processed the data sets from thr
 ee states that contributed the most to deforestation: Rondônia\, Mato Gro
 sso\, and Pará. I used a Spatialite database and QGIS Geographic Informat
 ion System to check the results. The novelty here is that by using R scrip
 ts\, it was possible to rebuild the relational database from SICOR in a ge
 ospatial environment\, providing a reproducible environment. All steps are
  described below.\n\nFirst\, using R\, all the data needed for the analysi
 s was downloaded from their source and loaded into the R environment. The 
 second step\, still using R\, was to recreate the SICOR\, CAR\, and PRODES
  Deforestation tables and populate them into a Spatialite (SQLite) databas
 e. This step provides a valuable tool for monitoring by both environmental
  agencies and the banks that provide loans for rural credit.\n\nThe next s
 tep was to intersect the deforestation data with the CAR property boundari
 es\, calculating the amount of deforestation on each property using PRODES
  data between 2008 and 2023. Next\, the total number of loans between 2013
  and 2023 was identified for each property. All these steps were processed
  using the Terra library in R.\n\nResults\nIn 1992\, the Brazilian Parliam
 ent enacted Law 4\,829\, creating subsidies for rural credit\, known as the
  Safra Plan. The interest rates of the Safra Plan have always been significan
 tly lower than those practiced in the market. In March 2019\, for example\
 , while the average interest rate on loans for non-rural purposes stood at
  31.6% per year\, rural credit was observed at 10.8% p.y. on market rates\
 , and even lower with controlled rates observing an average rate of 6.1% p
 .y. (Servo\, 2019).\n\nThe results show that from 2013 to 2023\, more than
  BRL 17 billion was loaned to properties with some deforestation in these 
 three states (RO\, PA\, MT). Counting deforestation from August 2008 to Ju
 ly 2023\, properties that received rural credit in those same three states
  cleared a total of 8\,197 km²\, representing 8.5% of all deforestation f
 or the period.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:The relationship between rural credit and deforestation - George Po
 rto Ferreira
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/3YCWGQ/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-FPQDTF@talks.osgeo.org
DTSTART;TZID=-03:20241204T140000
DTEND;TZID=-03:20241204T143000
DESCRIPTION:Application-oriented research projects often involve diverse co
 nsortium members\, from universities to research institutions\, to authori
 ties\, to domain end users. This requires the integration of very heteroge
 neous data sources\, the facilitation of their combined processing\, and t
 he presentation of the results in adequate ways. The challenge here is not
  only the technical realisation itself\, but maybe even more to design sol
 utions catering to this wide user group\, balancing feature-richness with 
 easy usability. Data is abundant\, and processing plentiful\, but it all n
 eeds to go the last mile to the final user.\n\nOne such research project i
 s “AgriSens DEMMIN 4.0”\, which is advancing remote sensing for the di
 gitalisation in agricultural crop production. Thus\, the user group includ
 es programmers\, domain scientists\, as well as farmers. To date\, agricul
 ture has not yet widely taken advantage of EO products. Therefore\, the
  project not only addresses the creation of novel remote-sensing-based appl
 ication techniques and their implementation\, but puts an equally distinct
  emphasis on the development of an accompanying data integration and visua
 lisation system. In this work\, we describe how our architecture – consi
 sting of several pieces of free and open source geo software – closes th
 e gap between data providers and information consumers as it facilitates n
 ecessary analysis steps and combines these with adequate presentation to d
 ecision makers.\n\nIn our IT architecture\, we utilise one central datacub
 e\, which acts as a cloud-based geospatial data holding and computation pl
 atform\, to conquer this problem. It gathers a multitude of data\, ranging f
 rom optical and radar raster imagery through weather data to in-situ field
  measurements\, and pre-processes it into an interoperable\, analysis-read
 y state. These resources can then be accessed through APIs for external us
 age\, or computations can be carried out directly on the datacube and the 
 results immediately visualised with tools hosted on the same server.\n\nTh
 e whole system is located at the Leibniz Supercomputing Centre of the Bava
 rian Academy of Sciences and Humanities (LRZ). Apart from utilising the co
 mputing resources available there\, this also opens up synergies with alre
 ady-existing projects: We can make use of the enormous amount of EO data t
 hat is already available within the LRZ’s “Data Science Storage” and
  the DLR’s “terrabyte” platform. These storages are directly mounted
  into our server so that the datacube can access petabytes of imagery with
 out having to duplicate it again\, saving costs and emissions.\n\nAt the c
 ore of our infrastructure is an instance of the “Open Data Cube” (ODC)
  software package. Metadata is ingested into its PostgreSQL database and c
 an be retrieved via the built-in web-based data discovery application “O
 DC Explorer” or via an API endpoint of the emergent STAC standard (which
  in turn can be accessed via the “STAC Browser” web application or any
  other compatible software). All raster data is provided in the Cloud-Opti
 mised GeoTIFF (COG) format\, allowing efficient access even from remote ma
 chines.\n\nOur main interface for scientific computation is Jupyter Hub\, 
 enabling collaborative work across institutions. For each user\, a dedicat
 ed Jupyter Lab instance is spawned in its own Docker container that can ac
 cess all the data of the previously mentioned storages and has a certain a
 mount of computing resources allocated to it. Users can write their code i
 n Python or R\, arguably the most popular programming languages in the EO 
 community\, which offer straightforward packages for connecting to an ODC\
 , namely “datacube” and “odcR”. This way\, scientists receive the 
 typical professional online analysis environment in which they can work cl
 osely to the data and use all the powers of these programming languages an
 d their EO-friendly ecosystem.\n\nAnother option to work with the data is 
 via openEO\, the new standardised way to interact with big EO data cloud p
 rocessing backends. We incorporate this standard by utilising the “openE
 O Spring Driver”\, an adapter to translate user-submitted openEO process
  graphs into analysis code that can be run using ODC. For compatibility wi
 th legacy software\, it is also possible to request rendered images of pre
 -configured analysis algorithms via WMS\, which are being served by an ins
 tance of the powerful “datacube-ows” package.\n\nThe final goal\, howe
 ver\, is to connect farmers and other end users to these results\, who can
 not or do not want to deal with complex interfaces. Therefore\, these high
 ly technical tools are not sufficient\, but need to be accompanied by easy
 -to-use graphical interfaces. Drawing on the possibilities of modern web t
 echnologies\, we realise these through purpose-built web apps. Arranged ar
 ound an OpenLayers-powered map component\, data products are streamed in C
 OG format from the datacube and displayed along with the needed additional
  tooling for interpretation. For example\, in this fashion we realised a d
 emonstrator showcasing the results of a water balance model for irrigated 
 potato fields.\n\nAnother outlet of the project is the “FieldMApp”\, a
  mobile application designed to be used by farmers both in the field as we
 ll as in the office to digitise and monitor areas of lower yield within cr
 op fields. For evaluating plant vitality\, a vegetation-index-based raster
  product is calculated on the datacube using the latest Sentinel-2 imagery
  and long-term crop-specific averages. Due to the tablet application being
  programmed with the platform-agnostic Flutter framework\, but its standar
 d mapping component not yet supporting COGs\, it was necessary to resort t
 o WMS for serving the raster data. As mentioned above\, this is comfortabl
 y possible by configuring “datacube-ows” on top of ODC.\n\nOverall\, t
 he challenge of interoperating various data supplies\, processing chains a
 nd custom-tailored interfaces – as typically encountered in interdiscipl
 inary research projects – requires complex solutions\, but can be achiev
 ed quite well by utilising a datacube approach with free and open source g
 eo software building blocks. Our integrated system successfully demonstrat
 es such a use case for the domain of remote sensing in agriculture.
DTSTAMP:20260422T231510Z
LOCATION:Room III
SUMMARY:Integrating\, Processing and Presenting Big Geodata with Earth Obse
 rvation Datacubes in an Interdisciplinary Research Context - Christoph Fri
 edrich
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/FPQDTF/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-JWML3M@talks.osgeo.org
DTSTART;TZID=-03:20241204T143000
DTEND;TZID=-03:20241204T150000
DESCRIPTION:3D modeling involves the three-dimensional representation of ch
 aracters or scenes\, providing a greater visualization of details for the 
 object being represented\, creating the concept of depth. This concept als
 o opens up a vast array of applications that a simple 2D drawing would be 
 unable to present. This type of representation is widely used in various f
 ields\, such as the entertainment industry (e.g.\, films and games)\, auto
 motive engineering\, architecture/engineering\, etc.\, having diverse purp
 oses and applications. 3D modeling can be achieved through different metho
 dologies\, with the primary distinction between them being the intended us
 e of the modeled object. Notable methodologies include Box Modeling\, Digi
 tal Sculpting\, and Poly-by-Poly modeling.\nTraditionally\, Photogrammetry
  was defined as the 'science and art of obtaining reliable measurements th
 rough photographs' (American Society of Photogrammetry). Although it is a 
 powerful technique for environmental detail formation\, there is inherent 
 complexity in its equipment\, including both hardware and software\, with 
 a significant cost associated with its acquisition and the necessity for s
 pecialized knowledge for effective use by its operators. However\, with te
 chnological advancements\, the availability of higher processing capacity 
 equipment at lower costs and more user-friendly software has facilitated w
 ider dissemination and use among a larger number of users. Among these tec
 hnologies\, the introduction of drones into the technical and professional
  market\, with new forms of application and use of photography\, has spurr
 ed new growth in the use of photogrammetry across various professional sec
 tors and the development of techniques for processing small-format images.
 \nThus\, 3D modeling through photogrammetry allows for the acquisition of 
 large-scale data\, enabling detailed studies\, sometimes at centimeter or 
 even millimeter scales\, with a level of detail that would previously have
  been impossible or impractical. This contributes positively to feasibilit
 y studies\, risk analysis\, project presentation\, among other application
 s in various fields such as Civil Engineering\, Architecture\, and Surveyi
 ng. The combination of these technologies not only enhances project visual
 ization but also amplifies the collection of geospatial data\, promoting a
  more comprehensive and precise approach.\nVirtual Reality (VR) is a techn
 ology that creates a simulated environment through electronic devices\, th
 ereby providing users with a new way of visualization\, whether applied to
  video games or integrated with other fields. The combination of this tech
 nology with models obtained from photogrammetry provides realistic environ
 ments with impressive detail richness.\nThis study will address 3D modelin
 g through the close-range technique\, using terrestrial photographs. The r
 esearch is divided into four stages. The first three stages involve proces
 sing these images using various types of hardware and software. For the ha
 rdware\, both processing power and image capture quality are considered\, 
 aiming to demonstrate the best possible results at a low cost. Regarding s
 oftware\, the use of open-source programs from various developers was expl
 ored\, with the intention of making comparisons and achieving the best res
 ults among them.\nIn the fourth and final stage of the research\, a market
  evaluation will be conducted to understand the needs of professionals in 
 the field concerning this technology. To carry out the work\, photographs 
 were initially taken with a Canon EOS 200D of the building housing the Gra
 duate Program in Modeling (PPGM) at the State University of Feira de Santa
 na (UEFS). Subsequently\, additional images were obtained using other capt
 ure equipment\, such as mobile phones\, following the same research line. 
 A total of 100 photos were collected and processed using three open-source
  software programs: Meshroom\, Colmap\, and Regard3D\, which are noted for
  their prominence and positive recommendations among free software options
 .\nThe goal was to calibrate parameters to achieve the best possible model
 \, considering software and hardware limitations. With the obtained result
 s\, a comparison was made to determine which software offered the best out
 come\, combining modeling quality\, ease of post-processing\, and compatib
 ility with the graphics engine (Engine) that will be used for creating the
  realistic environment. This engine is called Unreal Engine\, developed by
  Epic Games\, widely used in video game development but with significant p
 otential for application in fields such as Civil Engineering\, Architectur
 e\, and Surveying.\nThus\, the research could delve into the combination o
 f modeling obtained from photogrammetry with virtual reality. One of the s
 oftware programs used\, which demonstrated good performance\, was Regard3D
 \, designed for creating 3D models from two-dimensional images. A machine 
 with low processing hardware was used specifically to compare these result
 s with those from more modern computers. The configuration used is as foll
 ows:\n•	Processor: Intel Pentium G620\n•	Motherboard: DXH61Z M2 Duex\n
 •	Graphics Card: RX580 8GB MingZhou\n•	RAM: 16GB\nParameter selection 
 was carried out iteratively in successive stages to improve processing qua
 lity. It was observed that processing times were high\, particularly in sp
 ecific stages such as mesh computation. During this process\, the software
  analyzes the provided images to find correspondences\, known as interest 
 points\, which are distinct and uniquely characterized points in the image
 s. For the various software programs\, the most commonly used algorithms f
 or point detection are SIFT (Scale-Invariant Feature Transform) (Lowe\, 19
 99) and ORB (Oriented FAST and Rotated BRIEF) (Rublee et al.\, 2011)\, wh
 ich describe these points as feature vectors for matching bet
 ween images. After detection\, filtering is performed to discard points th
 at are misaligned relative to others.\nAdditionally\, significant time was
  observed in the densification process\, which involves increasing the num
 ber of points in a 3D model to add more detail for better model quality.
  Various techniques are used in this process\, with interpolation b
 eing particularly noteworthy\, as it uses the characteristics of nearby po
 ints to estimate geometry and generate additional points.\nUsing minimal p
 arameters\, the model generation time was extended on this computer config
 uration\, with high RAM usage. Significant storage was required\, as the m
 odels increased in size at each processing stage\, reaching approximately 
 20GB in the final stage. However\, the results were satisfactory compared 
 to the available paid software on the market. Therefore\, the use of photo
 grammetry-generated models\, combined with virtual reality to create reali
 stic virtual environments\, can be considered positive\, achieved at low 
 cost using free open-source software.
DTSTAMP:20260422T231510Z
LOCATION:Room III
SUMMARY:Photogrammetry and 3D Modelling Applied to the Creation of Virtual 
 Reality in Realistic Environments: Analysis of Free Software for Image Pro
 cessing - Felipe Oliveira Silva\, Rosangela Leal
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/JWML3M/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-8BGPQA@talks.osgeo.org
DTSTART;TZID=-03:20241204T143000
DTEND;TZID=-03:20241204T150000
DESCRIPTION:The field of speleology is dependent upon the accurate mapping 
 and analysis of data to gain an understanding of subterranean environments
 . Free and open-source software (FOSS) has facilitated advancements in spa
 tial data management\, offering robust tools for data collection\, analysi
 s\, and fieldwork. Software solutions such as PostgreSQL with PostGIS\, QG
 IS\, GRASS\, and QField facilitate efficient geospatial data management an
 d data collection. The accurate location determination of caves is of para
 mount importance\, given their significant ecological\, historical\, and c
 ultural value. In Brazil\, the implementation of rigorous environmental le
 gislation has resulted in the establishment of a 250-meter buffer zone sur
 rounding caves. This regulatory measure is designed to ensure the protecti
 on of these vulnerable ecosystems and to regulate activities within their 
 vicinity. The radius may be modified based on the findings of environmenta
 l studies\, thereby ensuring the preservation of caves while facilitating 
 socio-economic development. This study presents EspeleoVale\, a software a
 s a service (SaaS) solution hosted on AWS. It employs open source scriptin
 g languages and frameworks\, including AngularJS\, PostgreSQL with PostGIS
 \, and MapStore2 integrated with Geoserver\, to effectively manage and vis
 ualize speleological data for a mining company in Brazil. By employing SQL
  queries and spatial functions\, users can visualize cave locations and th
 eir restricted areas\, thereby facilitating the assessment of project impa
 cts on these geomorphological features. In this study\, examples of visual
 ization and spatial analysis are presented for five hypothetical caves and
  a hypothetical project\, returning intersections\, differences and merged
  areas\, which are vital for the environmental protection of caves and f
 or understanding project restrictions. Thus\, the integration of an RDBMS
  for s
 patial analysis and FOSS tools for data visualization fosters new developm
 ents and promotes more efficient speleology data management.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:From cave buffer zones to protected areas: speleology data manageme
 nt with free and open-source software (FOSS) - Alexandre Assuncao
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/8BGPQA/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-VQK7SK@talks.osgeo.org
DTSTART;TZID=-03:20241204T143000
DTEND;TZID=-03:20241204T150000
DESCRIPTION:OGC Standards and OSGeo Projects have been widely applied to di
 fferent kinds of geospatial data and extended for the implementation of ge
 ospatial data science environments. However\, there’s no review comprehe
 nsively summarising and discussing the progress of these open source techn
 ologies for publishing geospatial databases on the Web. The proposed Syste
 matic Technology Review is a stylized version of the Systematic Literature
  Review\, covering the documentation of OGC Standards and OSGeo Projects. 
 The search strategy consisted of screening OGC and OSGeo websites for the 
 latest version of OGC Standards' implementation (or community) specificati
 on and OSGeo Projects' developers manual. This review considered the techn
 ologies published until June 2024. A total of 80 OGC Standards and 52 OSGe
 o Projects were identified. To recognize the main topics of each technolog
 y in detail\, the documentation was analysed by Latent Dirichlet Allocatio
 n (LDA) using the scikit-learn package in Python. Grid search was used to 
 find the optimal hyperparameters for the number of components and the deca
 y of the learning rate. With the maximum number of iterations set to 100\,
  the best model was obtained with 8 components and 0.1 learning decay. The
 n\, the most probable topic was predicted for each documentation. The netw
 ork of similarities arising from LDA was exported to Gephi for visualisati
 on\, where ForceAtlas2 layout algorithm was used to create a weighted undi
 rected graph\, keeping only edges with weight greater than 0.33. The lates
 t developments in terms of the OGC Standards for data encoding took place 
 in the GeoPackage standard. For accessing\, processing or visualising data
 \, the trend was the development of OGC API related standards. However\, G
 ML is the most implemented OGC Standard for data encoding in OSGeo Project
 s\, along with Web Services like WMS\, WFS\, WCS and WPS for accessing\, p
 rocessing and visualising the data. Community Standards represented less t
 han 10% of the OGC Standards\, while Community Projects represented more t
 han 50% of the OSGeo Projects. The adoption of these technologies was eva
 luated based on the number of Github forks and stars\, as well as Docker p
 ulls. With more than 100 million pulls\, PostGIS is the most downloaded OS
 Geo Project\, followed by GeoNetwork and Open Data Cube\, with more than 5
  million pulls each. But many of the analysed technologies lacked an offic
 ial Docker image. In terms of Github forks and stars\, the most shared and
  favoured OSGeo project is OpenLayers\, followed by QGIS and GDAL. The Lat
 ent Dirichlet Allocation analyses found eight topics underlying the OGC St
 andards and OSGeo Projects. The keywords of the top four topics were confo
 rmance\, layer\, tile and response. Based on the analysis of the Implement
 ation Standard and Community Standard documentations\, the most similar OG
 C Standards were OGC API - Tiles and Two Dimensional Tile Matrix Set. On t
 he other hand\, based on the analysis of developer manuals\, the most simi
 lar OSGeo Projects were GDAL and MDAL. The strongest relationship of an OG
 C Standard and an OSGeo Project occurred between WPS and ZOO-project\, fol
 lowed by WPS and PyWPS. Overall\, the OSGeo Project most closely related t
 o the entire set of OGC Standards was rasdaman\, followed by MapServer and
  deegree. Notably\, a large group of standards and projects showed scarce 
 connections\, mainly those that are domain specific\, like PubSub\, LAS an
 d PipelineML among the OGC Standards and like Giswater and MobilityDB amon
 g the OSGeo Community Projects\, or those that are the basis of the other 
 technologies\, like Simple Features\, WKT and Coordinate Transformation st
 andards and like PROJ and PostGIS projects. The presented Systematic Techn
 ology Review can promote the evolution of the current OGC Standards and OS
 Geo Projects\, as well as the development of new technologies. It can also
  support developers of new solutions in the geospatial community. Specific
 ally\, this review is the basis for the proposal of a new library for the 
 integrated access of INPE’s environmental databases. An important limita
 tion of this systematic review is that it was not possible to find any PDF
  documentation for almost 20% of the existing technologies\, which were ex
 cluded from the analysis.
DTSTAMP:20260422T231510Z
LOCATION:Room V
SUMMARY:Systematic Technology Review of OGC Standards and OSGeo Projects - 
 Luiz Fernando Satolo
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/VQK7SK/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-BFMPHD@talks.osgeo.org
DTSTART;TZID=-03:20241204T143000
DTEND;TZID=-03:20241204T150000
DESCRIPTION:Semantic interoperability is essential for integrating open geo
 spatial collaborative and official data. While geosemantics has long been 
 a topic of discussion\, recent research has explored automated semantic in
 tegration without fully leveraging the capabilities of large language mode
 ls (LLMs) in artificial intelligence. This study investigates using chatGP
 T-4 to semantically associate OpenStreetMap (OSM) tags with the Brazilian 
 topographic mapping model\, the Technical Specification for Structuring Ve
 ctor Geospatial Data (ET-EDGV). Focusing on five classes within the buildi
 ngs category\, the study tested three data structuring methods: spreadshee
 ts\, OWL ontology\, and XML. Results indicated that ontology and XML forma
 ts produced more accurate semantic associations than spreadsheets\, with O
 WL yielding the most coherent results. These findings underscore the impor
 tance of properly structured data to capture hierarchical relationships be
 tween concepts better. The study also noted the need for precise and detai
 led queries\, highlighting some limitations in chatGPT's ability to unders
 tand complex geospatial model inputs. Further research is recommended to e
 nhance LLMs' potential in facilitating semantic interoperability and to ex
 plore the role of prompt engineering in optimizing these interactions.
DTSTAMP:20260422T231510Z
LOCATION:Room II
SUMMARY:Advancing Geospatial Data Integration: The Role of Prompt Engineeri
 ng in Semantic Association with chatGPT - Fabíola Andrade
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/BFMPHD/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-TWFHZB@talks.osgeo.org
DTSTART;TZID=-03:20241204T150000
DTEND;TZID=-03:20241204T153000
DESCRIPTION:1. Introduction & Related Work\nPedestrian mobility is crucial 
 in urban environments\, and its promotion can contribute to the achievemen
 t of many UN SDGs (Adriazola-Steil et al.\, 2021). Mapping\, which enables
  public scrutiny and long-term optimized planning\, is indispensable in th
 is context. \nWith the widespread availability of a large set of Open Stre
 et-Level Imagery\, such as Mapillary\, there is now a significant opportu
 nity\, albeit one that presents considerable challenges for data extracti
 on (Ma et al.\, 
 2019). The richness of detail in these urban landscape representations can
  help us better understand the peculiarities of urban environments.  The s
 cope of this project is to make use of them for the study of pathways\, fo
 cusing in particular on the verification of their existence\, their catego
 rization (road\, sidewalk\, or general footpath)\, and the identification 
 of their surface material. Since pedestrian crossings are part of the car 
 and pedestrian network\, and road characteristics (such as material and wi
 dth) significantly impact pedestrian safety\, it is worth noting that the 
 study of roads is also fundamental to pedestrian infrastructure (Mesfin & 
 Denbi\, 2022). However\, the central aspect remains sidewalks\, often the 
 most ubiquitous type of pedestrian thoroughfare (Kim\, 2019)\, a valuable 
 space for sociability (Osman\, 2016)\, whose "health" is symptomatic of ho
 w pedestrian-friendly the city is (Mesfin & Denbi\, 2022).  \nDespite the 
 importance of knowledge about them for understanding the urban environment
 \, pavements are often poorly mapped (Vestena et al.\, 2023). Even fewer w
 orks delve into the problem of pathway surface identification: Zhou et al.
  (2023) used conventional Convolutional Neural Networks (CNN) to identify 
 pavement classes limited to asphalt\, gravel\, and cement\; Zhang et al. (
 2022) used a similar approach to identify asphalt-only damage such as "pot
 holes" and "patches"\; only Mesquita et al. (2022) and Hosseini et al. (20
 22) made pixel-level identification\, albeit the first one was limited to 
 only "paved" and "unpaved" categorization\, while the second one notwithst
 anding having a more wholesome approach has its categorization focused on 
 a New-York centered classes and only classifies sidewalks. There is still 
 a gap in approaches considering standardized surface types and generalized
  path detection.\n2. The Framework\n	We propose the Deep Pavements Framewo
 rk to address these issues. It is a modular project\, with each part contr
 ibuting to the solution of the different challenges. The first module is t
 he Surface-patches Dataset\, labeled following the OpenStreetMap surface=*
  tags standard\, supporting the categories of "asphalt"\, "cobblestone"\, 
 "compacted"\, "concrete plates"\, "concrete"\, "grass"\, "gravel"\, "groun
 d"\, "paving stones"\, and "sett" currently\, The second module is the Run
 ner\, to process the data for a given region. The third is the Sample-pick
 er\, which generates random samples for dataset generation. There is also 
 the Sample-labeler to label samples interactively and a central module to 
 guide the potential user into the project's usage. Each module relies upon
  a different set of dependencies\, thus reducing runtime issues. It is imp
 ortant to highlight the primary usage of containerizing engines\, i.e.\, D
 ocker.\nBeyond modularity\, Deep Pavements has four core design principle
 s: 1) complete openness: all its dependencies must have a broadly permiss
 ive license that permits even commercial usage\; 2) ease of reproducibili
 ty: a straightforward setup with a well-documented command line interface
  (CLI)\; 3) evolvability: since State-of-The-Art (SOTA) algorithms change
  constantly\, each new release of the runner/sample-picker images can ado
 pt a new set of tools while keeping the same CLI\, and the user can still
  fall back to a previous release\; 4) standard anchoring: classes agreed 
 upon by the broad crowd-sourced knowledge base constituted by OSM communit
 y (Rahmig & Kludge\, 2013\; Mooney & Minghini\, 2017). \nThe implementatio
 n of the main modules (Runner/Sample Picker) uses open-vocabulary AI algor
 ithms to perform the data extraction\, following this workflow: 1) Groundi
 ng Dino (Liu et al.\, 2023)\, based on free-text input\, detects the boun
 ding box of each detection\; 2) Segment Anything (Kirillov et al.\, 2023)
  transforms it into a mask\; 3) three different versions of the CLIP algo
 rithm (Radford et al.\, 2021) test that the detection is not a hallucinat
 ion\; 4) If c
 onfirmed\, a specialized version of CLIP is used to finally check the sur
 face material\, using the largest rectangle cheaply clipped from the dete
 ction. This ensures the use of the patch whose texture is least distorted
  by perspective (Lederman & Klatzky\, 1995) and that is free of no-data p
 ixels\, another potential source of classification error (Kang
  et al.\, 2019). \nThe workflow resulted from experiments on the presente
 d design\, which mainly pointed out the need for hallucination testing. T
 his procedur
 e acts as a shield for one of the main drawbacks of open vocabulary algori
 thms (Ben-Kish et al.\, 2024). The use of this particular type of algorith
 m was essential due to its flexibility (Wang et al.\, 2024) and the potent
 ial for better semantic understanding of the scene due to its embedded lan
 guage model (Eichstaedt et al.\, 2021). There is also the possibility of a
 llowing the user to opt out of some or all of OSM standardized classes\, w
 hich can be helpful in some scenarios with regional uniqueness.  \n3. Fina
 l Remarks\nDeep Pavements is an innovative and comprehensive toolset under
  continuous development with all modules maintained at Github\, with the c
 entral module available at  <https://github.com/kauevestena/deep_pavements
 _project>. The framework enables creating pavement data that is seamlessly
  pluggable into OSM.\n	As future challenges\, we plan to filter low-quali
 ty images that can occur in the primary data source (Ma et al.\, 2019)\;
  to detect other visually identifiable pavement traits such as decay\, s
 tandardizing with OSM tags such as smoothness=*\; to integrate photogramme
 tric tools for obtaining additional modeling of pavements\, with the main
  interest of measuring pavement width\, one of the most relevant attribut
 es for 
 accessibility assessment (Kim et al.\, 2011).
DTSTAMP:20260422T231510Z
LOCATION:Room III
SUMMARY:Deep Pavements Framework: Combining Ai Tools And Collaborative Terr
 estrial Imagery For Pathway Mapping - Kauê de Moraes Vestena
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/TWFHZB/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-CUVPXC@talks.osgeo.org
DTSTART;TZID=-03:20241204T150000
DTEND;TZID=-03:20241204T153000
DESCRIPTION:Most historical sources\, available in multiple formats (e.g.\,
  tabular and analog data)\, contain valuable geographic information. This 
 data can be transformed to generate both quantitative and qualitative insi
 ghts\, enabling the creation of digital maps and unlocking significant pot
 ential for scientific analysis. However\, the use of historical data prese
 nts several challenges: 1. Sources need to be digitized\; 2. Collections a
 re often spread across multiple archives\; 3. Metadata is often unavailabl
 e\; 4. Standardizing diverse sources and quantitatively reconstructing dat
 a from various periods is difficult\; 5. The reliability of historical dat
 a can be uncertain\; 6. There is limited spatial resolution\; and 7. Inacc
 uracies and text legibility issues are common. These challenges underscore
  the need for novel methodologies aimed at enhancing the quality and quant
 ity of such sources. This paper presents the findings of the exploratory p
 roject AgroecoDecipher (2022.09372.PTDC) dedicated to extracting a compreh
 ensive database from historical textual records and analogue map files to 
 trace agroecological patterns. Employing an exploratory methodology ground
 ed in artificial intelligence (AI) and Geographic Information Systems (GIS
 )\, the projected solutions include the establishment of routines based o
 n AI tools that combine GIS\, machine learning (ML)\, and Large Language 
 Models (LLMs). Approximately 271 survey books from the 1950s were digitiz
 ed at the municipal level\, with a total sheet count exceeding 42\,000. A
 dditionally\, more than 100 analogue maps were digitized\, processed\, an
 d vectorized\, resulting in a detailed geodatabase map archive. The resul
 ts are promising and demonstrate that the integration of AI and geospatia
 l tools has proven essential in transforming raw historical data.
DTSTAMP:20260422T231510Z
LOCATION:Room II
SUMMARY:The Use of GeoAI Techniques for Gathering\, Storing\, and Analyzing
  Historical Agroecological Data - Cláudia M. Viana
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/CUVPXC/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-BBBU3X@talks.osgeo.org
DTSTART;TZID=-03:20241204T150000
DTEND;TZID=-03:20241204T153000
DESCRIPTION:The National Institute of Colonization and Agrarian Reform (INC
 RA) is a body of the Brazilian federal government\, linked to the Ministry
  of Agriculture\, Livestock and Supply (MAPA). Its main mission is to exec
 ute national agrarian policy\, promoting agrarian reform and land planning
  in Brazil. INCRA works on several fronts to guarantee territorial regular
 ization\, sustainable development and social justice in the countryside.\n
 \nThe Amazon is a strategic region for INCRA due to its unique characteris
 tics and specific challenges. The body works on land regularization to gua
 rantee the legal security of squatters and combat land grabbing\, thus con
 tributing to environmental preservation\, and thus\, favors the control an
 d monitoring of rural properties\, helping to combat illegal deforestation
  and environmental degradation.\n\nFor regularization to become a reality\
 , INCRA has signed Decentralized Execution Terms in several Brazilian stat
 es. In Rondônia\, the partnership was established with the Federal Instit
 ute of Education\, Science and Technology of Rondônia (IFRO)\, through TE
 D 20/2021/INCRA-SEDE/IFRO\, called GeoRondônia Project. The project aims 
 to serve and document more than 25\,000 families and rural properties. To 
 achieve this\, the necessary steps are the georeferencing of properties\, 
 rural environmental registration and occupational supervision.\n\nThe geor
 eferencing of rural properties is a complex process that requires knowledg
 e in surveying\, legal aspects\, precision and efficiency. With the growin
 g demand for land regularization in Brazil\, especially in large areas suc
 h as the Amazon\, it is essential to seek solutions that reduce costs\, au
 tomate tasks and ensure data quality. This article presents the productivi
 ty gains achieved by the GeoRondônia project\, which uses QGIS and the Ge
 oINCRA plugin to automate tasks. The methodology developed involves data v
 alidation and quality control\, with automation implemented in Python\, en
 suring accuracy in property certification and the large-scale generation o
 f .ODS spreadsheets (certification document).\n\nThis is an innovative met
 hodology\, unavailable in any other Geographic Information System\, open o
 r private\, whose steps were developed to meet the highest technical quali
 ty standards for georeferencing projects with large volumes of data\, such
  as in GeoRondônia\, which has worked in settlements that have more than 
 2\,000 properties.\n\nInitially\, all georeferenced data processing dynami
 cs were carried out manually. Therefore\, the Space Research Group (GREE
 S/IFRO)\, together with other collaborators from the Paraíba Institute (I
 FPB)\, has supported the project's actions for the development of innovati
 ve technology\, in order to optimize actions involving certification of ru
 ral properties\, using free tools.\n\nThe biggest challenge of the project
  is time and qualified labor. Therefore\, aiming to increase productivity
 \, the Free and Open Source Software Solution was developed\, called FOSSS
  GeoRondônia\, which integrates features from QGIS and\, mainly\, the Ge
 oINCRA Plugin. This set of functions brought gains in time\, professional
  qualification of employees\, and finances\, as it is completely free and
  can be replicated to any other project with large data volumes involving
  georef
 erencing.\n\nTo optimize demands\, the main steps of FOSSS GeoRondônia\, 
 after adjusting the observations tracked in the field to meet INCRA's posi
 tional accuracies\, are: a) elimination of topological errors in the geome
 tries of the settlement database\; b) elimination of errors when filling i
 n vertices and limits in the settlement database\; c) elimination of error
 s in the name of the property and .ODS spreadsheet\; d) generation of .ODS
  spreadsheets in an automated way and launch in INCRA's Land Management Sy
 stem (SIGEF).\n\nThe results obtained with FOSSS GeoRondônia are notable:
  Process Automation with the assembly of the database with greater securit
 y\, precision and practicality\; Automation in product generation\, where 
 ODS spreadsheets that were previously done manually are now generated auto
 matically\; Reduction of spreadsheet preparation time by around 70%\; Elim
 ination of topological and writing errors\, ensuring accurate and consiste
 nt data\; Cost savings using QGIS and the GeoINCRA plugin\, eliminating th
 e need for expensive licenses\, allowing for a more efficient allocation o
 f public resources.\n\nCurrently\, the GeoRondônia Project has already or
 ganized the databases for 8 Settlement Projects\, which make up around 1\,
 500 properties. A very productive result was the field collection\, proces
 sing and launch of 3 new Settlements (574 properties) in the state of Rond
 ônia in record time\, for launch by the Federal Government\, all carried 
 out between April and May 2024.\n\nThe processing of georeferenced data on
  a very high scale\, using FOSSS GeoRondônia\, allowed the generation of 
 products with precision and the consequent launch in SIGEF with reliabilit
 y\, which can be replicated for any project in Brazil. This has promoted g
 reater efficiency in the services performed\, and helped INCRA in meeting 
 the large volume of Agrarian Reform demands\, to transform rural settlemen
 ts into true agents of sustainability and productivity.\n\nBased on this\,
  we are interested in presenting this new functionality to the national an
 d international public present at FOSS4G\, and who are looking for free so
 lutions for different demands\, in order to promote the dissemination and 
 replication of the use of FOSSS GeoRondônia.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:Free and Open-Source Software Solutions in the GeoRondônia Project
 : Efficiency in Georeferencing of Rural Settlements with QGIS and GeoINCRA
  plugin - Leandro França\, Dra. Ranieli dos Anjos de Souza\, Valdir Moura
 \, Marcelo Vinicius Assis de Brito\, Bárbara Laura Tavares
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/BBBU3X/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-AJ7NJ8@talks.osgeo.org
DTSTART;TZID=-03:20241204T153000
DTEND;TZID=-03:20241204T160000
DESCRIPTION:## Building an AI Dataset to Estimate Vegetation Carbon Using a
  QGIS-Based Annotation Tool\n\n- Yoon Jeongho\, Korea Environment Institut
 e\, Korea\n- Lee Sanghyuk\, Korea Environment Institute\, Korea\n- Son Seu
 ng-woo\, Korea Environment Institute\, Korea\n\n## Violence: Types\, Distr
 ibution\, and Frequency. A Study on the Re-concentration of Homicides in t
 he City of Rosario\, Argentina (2007-2023)\n\n- Silvina Meritano\, Nationa
 l Scientific and Technical Research Council (CONICET)\, Argentina\; Centre
  for Research and Studies on Culture and Society (CIECS-UNC)\, Córdoba\, 
 Argentina\n\n## Use of Vegetation Indices for Monitoring Forest Cover in a
  Conservation Unit in the Far West of the State of Acre\n\n- Ananda Kellen
  Silva Rocha\, Federal University of Acre\, Cruzeiro do Sul\, AC\, Brazil\
 n- Anelena Lima de Carvalho\, Professor\, Federal University of Acre\, Cru
 zeiro do Sul\, AC\, Brazil
DTSTAMP:20260422T231510Z
LOCATION:Main Hall
SUMMARY:Poster Session II
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/AJ7NJ8/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-WLKZ3Y@talks.osgeo.org
DTSTART;TZID=-03:20241204T154500
DTEND;TZID=-03:20241204T161500
DESCRIPTION:With the advancement of technologies applied to Global Navigati
 on Satellite Systems and the popularization of Remotely Piloted Aircraft\,
  aerial photogrammetry has experienced significant advances\, especially i
 n geographic and environmental research. Drones equipped with high-resolut
 ion sensors have revolutionized data collection\, essential for topographi
 c mapping and vegetation analysis. However\, many technologies for digital
  processing and spatial analysis of original images are proprietary and ex
 pensive\, making access difficult for institutions and researchers\, espec
 ially in Brazil. This scenario makes free and open-source software\, such 
 as WebODM\, an important alternative to democratize access to high-quality
  tools. In view of this\, this article analyzes the applicability of WebOD
 M in academic works\, through bibliometric analysis in scientific reposito
 ries. The search included variations in the software nomenclature and the 
 data were analyzed quantitatively. The Elsevier and Scopus databases led w
 ith 59 and 39 publications\, respectively\, while the national Scielo Bras
 il database had only one article hosted. There has been a gradual increase
  in the number of publications involving WebODM since 2016\, with a peak o
 f 38 papers in 2023. In comparison\, the term "Agisoft Metashape"\, a prop
 rietary solution for digital aerial photogrammetric processing\, returned 
 1\,595 publications in the same search portals. As an initial contribution
  to the understanding of the state of the art\, it was observed that the t
 hematic axes involving remote sensing\, photogrammetry\, precision agricul
 ture and agricultural management were those that concentrated the largest 
 number of scientific productions on WebODM in the investigated period\, ex
 ceeding 30 papers. It is concluded that WebODM has stood out as a relevant
  tool in scientific research\, especially for the processing of images der
 ived from drones. Future studies should qualitatively evaluate the results
  obtained with the use of WebODM in comparison with proprietary software.
DTSTAMP:20260422T231510Z
LOCATION:Room III
SUMMARY:WebODM free software as a tool for digital aerial photogrammetric p
 rocessing: employability in scientific productions - Fabrício Lisboa Viei
 ra Machado
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/WLKZ3Y/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-KPYFTX@talks.osgeo.org
DTSTART;TZID=-03:20241204T154500
DTEND;TZID=-03:20241204T161500
DESCRIPTION:Innovations\, such as voice recognition and natural language pr
 ocessing (NLP)\, have significantly impacted various fields by enabling mo
 re natural interactions between humans and machines (Mahmoudi et al.\, 202
 3). In geoinformatics\, these advances are crucial for visualising geospat
 ial data\, allowing the creation of interactive and dynamic maps (Craglia 
 et al.\, 2012). Online mapping applications\, like OpenStreetMap (OSM)\, h
 ave democratised spatial information by enabling public participation in i
 ts creation and maintenance (Haklay\, 2010). Geolocation is essential in c
 ontemporary applications\, such as navigation\, emergency services\, and l
 ocation-based services. Google Colaboratory (or Colab) Notebook Environmen
 t stands out in promoting open science due to its accessibility\, ease of 
  use\, and collaborative capabilities\, enabling the embodiment of the 
 FAIR principles (Camara et al.\, 2021). This study aims to develop a voice
  interaction application in Google Colab Notebook Environment to answer th
 e question: "Is it possible to develop a voice command application for geo
 location and visualisation of geospatial data within the Google Colab envi
 ronment?" The methodology includes FOSS libraries and tools such as geopy\
 , speech_recognition\, ffmpeg\, librosa\, and flask\, subdivided into six 
 stages: Audio Data Acquisition\, Audio Processing\, Speech Recognition\, G
 eocoding\, Visualization\, and Interface Development. The complete code\, 
 under an open license\, and how to reproduce this work are available on Gi
 tHub. Audio capture is performed using the Web Speech API in JavaScript (J
 S)\, which allows real-time voice recognition and integration with the Med
 iaDevices API to access the user's microphone. This method provides an int
 erface for high-quality audio recording\, essential for speech recognition
  and geocoding accuracy. Audio processing involves converting the ".webm" 
 format to ".wav" using ffmpeg while maintaining the original audio
  quality. The Librosa library loads the audio\, adjusts the sampling rate\
 , and extracts relevant features from the audio signal\, such as spectrogr
 ams (Bisong\, 2019). Speech recognition is performed with the SpeechRecogn
 ition library in Python\, which provides an interface for various speech r
 ecognition services\, including the Google Web Speech API. This choice is 
 due to its high accuracy and support for multiple languages\, ensuring the
  system's flexibility and accessibility to a diverse audience (Nassif et a
 l.\, 2019). Geocoding transforms textual descriptions of locations into ge
 ographic coordinates\, allowing the visual representation of these locatio
 ns on an interactive map. The geopy library and the Nominatim service from
  OSM are used to convert addresses into latitude and longitude coordinates
  (Mooney & Corcoran\, 2012). For the visualisation of geocoded data\, a we
 b server was implemented using Flask\, a microframework for Python that al
 lows the creation of lightweight and efficient web applications. The user 
 interface was developed with HTML\, CSS\, and JS\, providing an intuitive 
 and interactive experience. The results show that interaction between the
  user and machine was satisfactory. The first message displayed to the user 
 instructs them to slowly state the name of the city\, state\, or country t
 hey wish to geolocate. The use of JS and the Web Speech API allowed the sy
 stem to detect specific voice commands to start and stop recording\, as in
 dicated by the interface colors and states. This step is crucial for subse
 quent steps to ensure that the captured audio is clear and understandable.
  When the start command is recognised\, the interface changes to indicate 
 that the recording is in progress. The message "Command recognised: starti
 ng recording" confirms that the command was detected correctly. If the voi
 ce command is not recognised\, the interface displays a message asking the
  user to repeat the command. After recording\, the audio is saved in ".web
 m" format. If a previous audio file exists\, it is automatically overwritt
 en. This approach simplifies file management and avoids the accumulation o
 f unnecessary data. Next\, the audio is converted to ".wav" format using t
 he ffmpeg library. Then\, the audio is transcribed using the Web Speech AP
 I and the SpeechRecognition interface for the recognised language\, along 
 with the confirmation of the geocoded location and its respective latitude
  and longitude. The visual feedback proved essential for the user to confi
 rm that the entered information was recognised\, improving the system's us
 ability. The displayed information includes city\, region\, country\, lati
 tude\, and longitude. The interactive map allows the user to visualise and
  interact with the located area\, altering the zoom level and receiving a 
 voice message informing the map's current zoom level. This work presented 
 the integration of tools that assist in advances in human-computer interac
 tion in geoinformatics\, offering an intuitive and accessible interface fo
 r users of different technical proficiency levels. The results confirm the
  feasibility of voice command geolocation in Google Colab\, a platform tha
 t can be used for education\, research\, collaboration\, and sharing in sc
 ience\, enabling this work's reproducibility. Future research can improve 
 voice interaction features\, explore geolocation methods such as bounding 
 boxes\, and reduce dependence on JS and Flask. Improving the requirements 
 for peripheral devices could further increase the system's accuracy\, acce
 ssibility and user experience. The importance of geospatial accessibility 
 lies in enhancing service provision\, urban planning\, and social inclusio
 n\, facilitating mobility for people with disabilities\, and improving urb
 an infrastructure (Han et al.\, 2020).
DTSTAMP:20260422T231510Z
LOCATION:Room II
SUMMARY:Natural Language Processing and Voice Recognition for Geolocation a
 nd Geospatial Visualization in Notebook Environment - Nathan Damas
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/KPYFTX/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-YWFDM8@talks.osgeo.org
DTSTART;TZID=-03:20241204T154500
DTEND;TZID=-03:20241204T161500
DESCRIPTION:The Amazon faces significant logistical challenges due to its s
 ize and complex geographical features\, such as the dense rainforest and v
 ast hydrographic network. In Brazil\, Amazonas stands out for its ecologic
 al diversity and difficulties of access\, requiring innovative solutions f
 or educational development. The Amazonas Education Media Centre (CEMEAM) u
 ses In-Person Teaching with Technological Mediation (IPTTM) to overcome th
 ese obstacles\, promoting digital inclusion and democratizing access to ed
 ucation. Implemented in 2007 by the Amazonas State Secretary of Education 
 and School Sports (SEDUC-AM)\, it combines face-to-face classes with satel
 lite videoconferences and other media\, reaching more than 25\,000 student
 s in an area of 1\,571\,000 km². To help with spatial management\, CEMEAM
  was presented with a proposal for a Geographic Information System in a we
 b environment (WebGIS) using free software and plugins (QGIS and QGIS Clou
 d) as well as freely accessible products. The WebGIS enabled a variety of 
 analyses and proved it could contribute to more efficient spatial manageme
 nt\, adapted to regional specificities\, such as the isolation of localiti
 es and the dependence of students on river transportation. Although it is 
 not yet a formalized institutional tool\, the WebGIS demonstrates the potenti
 al of geotechnologies in educational management in the Amazon\, serving as
  an important experiment to support CEMEAM in its needs.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:The Web GIS as an Auxiliary Management Tool for In-Person Teaching 
 with Technological Mediation in the Amazon Rainforest: The Case of CEMEAM\
 , Amazonas\, Brazil. - Alexandre Donato da Silva
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/YWFDM8/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-HMEXFV@talks.osgeo.org
DTSTART;TZID=-03:20241204T161500
DTEND;TZID=-03:20241204T164500
DESCRIPTION:Public participation is of utmost importance for community mobi
 lization and engagement: through their networks and relationships\, both
  within and outside the community\, residents create space through socia
 l action. According to Goodchild (2007)\, there is a demand to generate in
 formation that helps vulnerable communities to strengthen relationships wi
 th the government responsible for promoting important interventions to bri
 ng about change. Each resident's sensitivity and ability to voice their ne
 eds\, from their own perception\, can be used to understand the
  needs of their community. Therefore\, it is important to use open and fre
 e mapping tools that represent the community's demands\, producing informa
 tion that allows collective autonomy to carry out strategies that involve 
 the public authorities through co-optation and coalitions aimed at communi
 ty-based interventions (Silva\, 2014).\n\nIn this sense\, OpenStreetMap (O
 SM) stands out as a tool for collaborative mapping and community intervent
 ions in highly vulnerable urban areas\, such as favelas. Given OSM's parti
 cipatory and open nature\, it allows the creation of maps with records of 
 features of various kinds\, but it is also of great value as a platform fo
 r community training and the development of geospatial skills (Bortolini a
 nd Camboin 2019).\n\nThe objective is to analyze from the literature how O
 penStreetMap has been used for training and empowering local actors in pro
 jects aimed at community-based interventions in favelas.\n\nA systematic l
 iterature review was carried out with the support of artificial intelligen
 ce tools (ChatGPT-4\, Elicit\, Semantic Scholar\, ChatPDF)\, bibliograph
 y management software (Zotero)\, and software for visualizing bibliometric
  networks (VOSviewer) combined with other research methodologies (P.I.C.O.
 \, Bardin) to assist in the overall evaluation of the literature. Although
  AI tools have great power to aid the review\, they do not replace the nee
 d for critical judgment and human expertise\, which require sound knowledg
 e of the content and of scientific methodology.\n\nThe following key
 words were used in the platforms of WoS and Scopus collection lists in the
  first search: [Collaborative mapping\, Community intervention\, OpenStree
 tMap\, Community empowerment\, Community mobilization\, Citizen participat
 ion]. In this first search\, some filters were established\, such as the p
 ublication date within the last 10 years and articles that were open acces
 s. Based on these filters\, 43 articles were found that fit these specific
 ations in the Web of Science\, with the vast majority in English. From thi
 s first literature search\, there also arose the need to increase the numb
 er of articles that most closely aligned with the theme. The keywords were
  adjusted based on these articles. In this second search\, other databases
  were also included for the research\, such as Scopus and Google Scholar. 
 The use of the artificial intelligence tools Elicit and Semantic Scholar 
 was essential
  to find articles using the keywords that were most repeated in the main a
 rticles. Still in this second search\, these adjusted keywords were entere
 d into ChatGPT-4 to generate search strings under the acronym P.I.C.O (Po
 pulation\; Intervention\; Comparison\; Outcome) for use in the Web of Scie
 nce.\n\nKeywords for the second search: [Collaborative mapping\, informal 
 settlements\, urban slums\, OpenStreetMap\, public participation\, communi
 ty engagement\, community-based intervention\, community intervention].\nF
 inal search string: [("Collaborative mapping" OR "participatory mapping") 
 AND ("community intervention" OR "community-based intervention") AND ("Ope
 nStreetMap" OR OSM) AND ("community empowerment" OR "empowerment") AND ("c
 ommunity mobilization" OR "community engagement") AND ("citizen participat
 ion" OR "public participation") AND (favelas OR "informal settlements" OR 
 "urban slums")]. Finally\, twenty articles were identified in the Web of S
 cience\, Google Scholar\, and Scopus databases.\n\nBased on these 43 artic
 les\, a synthesis framework is being built that aims to systematize inform
 ation about the works found\, informing: 1) Source/Base/Collection\, 2) Re
 ference according to ABNT\, 3) Name of the Journal\, 4) Contact of the mai
 n author - email\, 5) Country of affiliation of the authors\, 6) Country o
 f the mapped community\, 7) Problem/Objective/Hypothesis\, 8) Methodology\
 , 9) Materials used\, 10) Techniques used\, 11) Main results\, 12) Does it
  work with Favela? 13) What is the nature of the community-based intervent
 ion? 14) Did favela residents operate the OpenStreetMap? 15) Where is the 
 favela and/or intervention? 16) If mapping in a favela\, what features and
  attributes were mapped? 17) Were integrated digital and analog cartograph
 ic technologies used? Which ones? 18) Were methods used for community appr
 opriation of cartographic tools and data? 19) Was educational material pro
 vided? Indicate link. 20) Was a method for evaluating the tools and proce
 sses used implemented? 21) Were community impact indicators used? 22) Are 
 effective impacts felt by the community reported? Which ones?\n\nBy constr
 ucting this framework\, we are evaluating aspects such as the geographic d
 iversity in the use of OSM\, indicating the platform's flexibility and ada
 ptability\, or the still limited participation of local actors in mapping 
 their communities. By compiling data on the nature of community-based inte
 rventions\, techniques and methodologies used\, and community impact indic
 ators\, we aim to identify common patterns in the types of interventions t
 hat have been most effective. Furthermore\, the analysis of reported impac
 ts can indicate tangible benefits of these projects.\n\nThe analysis of ho
 w projects addressed training and education\, including the provision of e
 ducational materials and methods for community appropriation of cartograph
 ic tools and data\, can indicate strategies used to empower local communit
 ies. We are also analyzing the features and attributes mapped specifically
  in favelas\, to identify the main challenges and specific needs of these 
 areas\, supporting the indication of demands for improvements in methodolo
 gies and mapping tools in these urban contexts.\n\nThus\, we are building 
 a comprehensive framework on the current state of the use of OpenStreetMap
  for training and intervention in favelas\, identifying gaps\, challenges\
 , and opportunities for future research and projects.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:OpenStreetMap in the Training of Local Actors in Projects Aimed at 
 Community-Based Interventions in Favelas: A Systematic Review Supported by
  AI - Patricia Lustosa Brito\, Pedro Melhado
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/HMEXFV/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-NKNMCD@talks.osgeo.org
DTSTART;TZID=-03:20241204T161500
DTEND;TZID=-03:20241204T164500
DESCRIPTION:Advances in spatial and spectral resolution in private sector s
 atellite imagery\, together with geography-aware algorithms\, have created
  new avenues for the use of Artificial Intelligence (AI) in geospatial appl
 ications\, sometimes referred to as AI4Geo. However\, these advancements a
 re accompanied by significant costs in the procurement of data\, computing
  resources\, communication infrastructure and human expertise. We describe
  a case study in central Bali in which we developed multiple AI4Geo approa
 ches to assist the WISNU foundation\, a Non-Governmental Organization in B
 ali\, Indonesia\, in their ongoing efforts to manage community resources a
 nd to perform land mapping across small villages in Bali. \n\nConcepts \nT
 he concept we explore here is multi-pathway AI4Geo\, which seeks to find t
 he “b
 est” approach to AI4Geo for resource constrained environments. The assum
 ption that larger models are always better does not hold where AI4Geo\, tr
 ained on data from dominant western institutions\, is applied in the major
 ity world. Some of the most ambitious AI4Geo models are trained for land c
 over categories that are mostly of interest to the Northern Hemisphere. Gi
 ven this imbalance\, we ask how participants from low-resourced environmen
 ts can best make use of AI4Geo.\n\nMethodology\nBased on field data from a
  study site in Bali\, Indonesia\, we have developed multiple open source A
 I4Geo land cover approaches to find the best way to represent agroforestry
 \, a key indicator of sustainable and robust food production. We compare t
 he image segmentation results from small models such as Random Forests (RF
 ) and Support Vector Machines (SVM) with large models such as U-Net and 
 ResNet152 not only along established model performance metrics such as f-s
 core\, but also in terms of their suitability for use in low-resource cond
 itions. This generally includes limited ability to collect large data sets
 \, limited computational infrastructure\, limited AI expertise and limited
  internet connectivity. We then describe a mixed-method multi-pathway appr
 oach to produce good AI4Geo results while building capacity for the NGO to
  continue the integration of  AI4Geo into its operations while planning fo
 r an even more challenging AI4Geo future dominated by large homogenizing A
 I models. \n\nHere are links to code experiments and instructions on gener
 ating the required input data for the U-Net model from geospatial shapefil
 es.\n\nSmall models (RF\, SVM based on the Orfeo library)\nhttps://github.
 com/realtechsupport/cocktail/tree/main/code \n\nLarge models (Custom desig
 ned U-Net and SATLAS based ResNet models)\nhttps://github.com/realtechsupp
 ort/cocktail/tree/main/satlas_test\nhttps://github.com/realtechsupport/coc
 ktail/blob/main/sandbox/working_model/working_model_inference.ipynb \n\nRe
 sults\nWhile RF\, SVM and U-Net approaches were all able to detect agrofor
 estry in 8-band\, 3-meter spatial resolution datasets provided by Planet L
 abs\, we found that the SVM algorithm was most responsive to the limitatio
 ns of our dataset while producing useful results that we could verify in t
 he field. SVM was furthermore painless to update with additional field dat
 a. Figure 1 summarizes the results from the image segmentation after model
  training.\n\nWhile U-Net’s f1 accuracy for agroforestry exceeds that of
  RF and SVM\, it is likely an overestimate of the actual extent of agrofor
 estry. We believe this to be the case because the U-Net architecture inges
 ted patches of 16 x 16 pixels\, and these dimensions exceed the size of t
 he smaller agroforestry plots detected in the field. The choice of the inp
 ut patches was in turn a function of the dimensions of the U-Net architect
 ure selected for its ability to minimize loss during training across all l
 and cover categories. \n\nAs opposed to the three other models listed abov
 e\, the large ResNet152 model was not trained on Planet Labs satellite
  imagery but on Sentinel-2 imagery. Because Sentinel-2 only has a maximum 
 spatial resolution of 10m/pixel\, it cannot distinguish small-scale landsc
 ape features such as agroforestry\, which typically utilizes small plots in r
 andom arrangements. While the ResNet algorithm was trained on the largest 
 dataset\, with over 300 distinct labels across 137 classes represented acr
 oss 64 million images\, the class labels are not tuned to the spectral sig
 natures of agroforestry and deliver only crude results in our selected stu
 dy area\, as Figure 2 shows. Moreover\, the ResNet152 model that supports 
  multispectral Sentinel-2 input has over 80 million trainable parameters\,
  exceeding our bespoke U-Net model by more than an order of magnitude\, th
 us making its use more costly.\n\nWhile we have not fine-tuned the ResNet
 152 model with our own highest resolution Planet Lab data due to spatial r
 esolution mismatches\, it seems clear that the effort would exceed the cap
 acities of our partner organization WISNU. Our dilemma is that the most pr
 omising large models are unwieldy and not adapted to our land cover condit
 ions while the smaller models we have end-to-end control over can be tuned
  with smaller datasets but run the risk of becoming obsolete in the AI arms
  race over time\, where larger and more powerful models become standard-be
 arers. While the agroforestry specific results we observe are characterist
 ic of our study area and the constraints our project operates under\, the 
 homogenizing forces of large models pose a condition all AI4Geo operations
  are faced with. For that reason\, the relevance of this project extends b
 eyond the immediate results we produce.\n\nOur solution to this dil
 emma is two-fold. We deploy multi-pathway AI4Geo across various technical 
 complexity levels while retaining agency for local stakeholders. The WISNU
  foundation does preliminary studies of Sentinel-2 satellite imagery throu
 gh the QGIS environment to survey sites and build simple datasets. They th
 en use QGIS integrated small model approaches such as Random Forests to bu
 ild baseline segmentation maps of a given area. The research team will the
 n collect Planet Labs based higher resolution data and use the cocktail su
 ite of models\, including U-Net\, to deepen the study results. Parallel to
  this approach\, we together use the SATLAS ResNet models to find synergie
 s in those results. Across the approaches\, we build land cover analyses t
 hat optimize limited resources while producing solid analytical results.
DTSTAMP:20260422T231510Z
LOCATION:Room III
SUMMARY:GeoAI in resource-constrained environments. - marc böhlen
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/NKNMCD/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-38D7T8@talks.osgeo.org
DTSTART;TZID=-03:20241204T164500
DTEND;TZID=-03:20241204T171500
DESCRIPTION:Abstract: \nGlobal Gap: Updated population estimates and total 
 households (HHs) per area\, typically obtained through a census\, are used
  to construct unbiased sampling frames necessary for accurate estimates in
  health surveys. These population and HH estimates are used to select a sm
 aller representative sample of enumeration areas (EAs) to visit for a publ
 ic health study. Several methods exist to select which HHs to visit at the
  EA level including systematic sampling (every nth HH)\, geographically sa
 mpling structures using satellite imagery\, segmenting an area\, and mappi
 ng and listing all HHs in the selected EA.  Accurate estimates are essenti
 al for public health programming and response\; however\, forty-two countr
 ies have not conducted a census for over a decade (United Nations Statisti
 cs Division 2024). Instead\, programs often generate an accurate sampling 
 frame by enumerating HHs within selected EAs\, obtaining answers to eligibil
 ity criteria\, drawing a sample from this enumerated list\, and navigating
  back to selected HHs. This requires considerable resources. To date\, no 
 free mobile-based application exists to streamline these processes.\nSolut
 ion: The GPSSample application is a user-friendly sampling solution to sel
 ect HHs within EAs. The study administrator makes a configuration in GPSSa
 mple creating the eligibility screening questions and specifying the numbe
 r or percent of HHs to return to in each EA. An example screening question
  for an immunization coverage survey is: “Are there any childre
 n living in this HH between six and fifty-nine months old?”.  In GPSSamp
 le\, teams can rapidly enumerate HHs in an EA and collect answers to these
  screening questions. Teams send encrypted data to a supervisor via new lo
 cal-only mobile hotspot QR codes. Next\, the supervisor presses a button\,
  easily generating a simple random sample from the sampling frame of enume
 rated HHs. The selected HH list is sent to teams. Using GPSSample\, teams 
 navigate back to selected HHs to conduct surveys. GPSSample integrates sea
 mlessly with survey applications\, including ODK Collect and Kobo Toolbox\
 , opening the second app’s designated form. Users send unique HH IDs an
 d cluster data from GPSSample to the specified HH survey form. Upon saving
  the HH survey\, teams are returned to the GPSSample app to mark the statu
 s of the HH.  Teams use a map and a list view of selected HHs for monitori
 ng field work. Supervisors can view EA and study level summary statistics 
 in GPSSample to monitor field work.\nFurthermore\, the GPSSample app can b
 e used in surveys lacking any advance information on population or areas 
 of concern. It is not necessary for a country to have conducted a census. 
 Supervisors can draw an area within GPSSample onsite\, segment the area\, 
 and assign it to field teams to rapidly enumerate locations before sampling t
 hem for the assessment or survey. This novel capacity in GPSSample highlig
 hts the flexibility and potential for use in outbreak investigations and e
 mergency responses where HHs may be damaged or destroyed. Additionally\, m
 arket stalls or stores visited in food outbreak investigations may not be 
 collated in a central list. While designed for the public health context\
 , GPSSample is useful for other disciplines. \n\nGPSSample is a free Andro
 id 8+ application available in six languages: English\, French\, Spanish\,
  Portuguese\, Russian\, and Bahasa. It is designed for field practitioners
 with limited mobile networks or Wi-Fi. The app was developed in the open-s
 ource Kotlin language and uses the open-source SQLite database. User guide
 s\, the GPSSample Decoder application for decrypting data\, demonstration v
 ideos\, and Quarto analyses are available through the GPSSample GitHub sit
 e.\nUse Cases and Road Map: Development lessons learned will be presented 
 from two public health GPSSample application pilots in India and Kenya. We
  aim to engage the FOSS4G development community to enhance GPSSample’s g
 eospatial functionalities and learn best practices on maintaining and upda
 ting open-source code.  GPSSample currently uses Mapbox. Ideally in a futu
 re version\, users will also be able to select OpenStreetMap for the base 
 map and the app will include navigation using offline turn-by-turn instruc
 tions. \n\nReferences\nUnited Nations Statistics Division\, 2024: 2020 Wor
 ld Population and Housing Census Programme. Available at: https://unstats.
 un.org/unsd/demographic-social/census/censusdates/. Accessed 6/24/2024\, 2
 024.
DTSTAMP:20260422T231510Z
LOCATION:Room IV
SUMMARY:Bridging the Gap: GPSSample – An Innovative Tool for Enumeration 
 and Sampling in Health Surveys - Amber Dismer\, Joel Adegoke
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/38D7T8/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-JEVCDC@talks.osgeo.org
DTSTART;TZID=-03:20241204T164500
DTEND;TZID=-03:20241204T171500
DESCRIPTION:Forest registration is essential for effectively managing natur
 al resources\, enabling improved tree management (Kattenborn\, Eichel and 
 Fassnacht 2019). This process simplifies urban planning\, allowing for a m
 ore conscientious approach and significantly contributing to the preservat
 ion of green areas. The proposal to reduce environmental impact\, survey t
 ime\, and required effort (Barbosa et al. 2018\; Li et al. 2015\; Beloiu e
 t al. 2023) has motivated the growing use of computer vision for these tas
 ks. Today\, this represents a true cartographic revolution. These innovati
 ons enhance the quality of life in cities by providing accurate and up-to-
 date data to support critical decisions (Barbosa et al. 2018).\nThis work 
 aims to detect\, classify\, and georeference trees in urban environments u
 sing image segmentation algorithms applied to aerial and street-level imag
 es. Several studies use aerial images (Beloiu et al. 2023\; Wäldchen and 
 Mäder 2018\; Mlenek\, Dalla Corte and Santos 2020)\, but our approach seeks
  to improve the detection and identification of tree species by combining 
 street-level images with aerial images. Our model will be developed with t
 he algorithms that present the best metrics for species segmentation and c
 lassification based on related studies. The project also prioritizes using
  free and open-source software in its development. This not only democrati
 zes access to robust monitoring and analysis tools but also encourages col
 laboration and innovation in the geospatial community\, aligning with the 
 values of FOSS4G.\nWe will apply pre-processing techniques to the images t
 o enhance the model’s accuracy\, including geometric and atmospheric cor
 rection with QGIS software. Gaussian filters will also be applied to reduc
 e noise and contrast adjustments to make edges and textures more distinct.
  After this step\, we will proceed to the feature extraction stage for aut
 omatic species identification using a machine-learning model. Given the in
 creasing need for environmental preservation and sustainable management\, 
 identifying and classifying tree species have become solid allies for ecol
 ogical conservation\, positively impacting urban quality of life.\nTo map 
 the urban area of Rio Paranaíba\, an unmanned aerial vehicle (UAV) 
 equipped with a high-resolution camera was used\, capturing images with a 
 3.5 cm resolution. The UAV was operated autonomously\, flying in parallel 
 strips over the city. A 70% overlap between the images was used\, resultin
 g in the creation of an accurate orthomosaic of the region\, favoring more
  accurate georeferencing of the trees. OpenDroneMap software was used to 
 create the orthomosaic. GPS performed georeferencing during the flight. St
 reet-level images were obtained with a camera that provides 360º coverage
 . For species classification\, a training dataset was created from samples
  collected in the field\, both aerial and ground-level. Various machine le
 arning algorithms\, such as Random Forest\, Support Vector Machine (SVM)\,
  and Convolutional Neural Networks (CNN)\, were researched and evaluated f
 or their accuracy in species classification.\nTree identification through 
 images of trunks and leaves presents significant challenges due to high in
 traclass variability and high interclass similarity. High intraclass varia
 bility refers to the substantial differences between images of trunks or l
 eaves of the same tree species caused by lighting variations\, capture ang
 le\, and tree condition. On the other hand\, high interclass similarity re
 fers to the very similar visual characteristics between different species\
 , making it difficult to distinguish one from another based solely on appe
 arance. Additionally\, improper color balance adjustments by cameras can i
 ntroduce unwanted shades\, such as a greenish tint\, further complicating 
 accurate classification. These combined factors make using deep learning f
 or tree classification a complex and challenging problem (Cotrim et al. 20
 19). This technique\, which combines remote sensing with aerial and ground
 -level images and advanced machine learning techniques\, is expected to pr
 esent a significant advance in tree species classification. This approach 
 allows for detailed analysis of trunk and leaf textures\, potentially sign
 ificantly improving species identification accuracy. Studies such as those
  by Kattenborn\, Eichel and Fassnacht (2019) have demonstrated that CNN-ba
 sed segmentation (U-Net) can achieve 84% accuracy in vegetation classifica
 tion using high spatial resolution RGB images. The U-Net is widely reco
 gnized for its effectiveness in image segmentation tasks\, especially in h
 igh-precision and detail scenarios. Its architecture captures complex feat
 ures\, making it ideal for detecting and classifying specific elements in 
 high-resolution images. Additionally\, the U-net has shown consistent resu
 lts in various remote sensing applications\, making it a reliable choice f
 or geospatial data analysis projects. Therefore\, adopting the U-net in th
 e project can ensure superior tree species identification and mapping perf
 ormance. This work aligns closely with the themes addressed at the FOSS4G 
 event\, as it demonstrates the practical application of free and open-sour
 ce software tools in an environmental monitoring context. QGIS\, OpenDrone
 Map\, and OpenStreetMap exemplify how open technologies can be integrated 
 to solve complex georeferencing and species classification problems. Furth
 ermore\, the focus on urban areas and the combination of drone and street 
 view data provides valuable insights for the geospatial community\, showin
 g the feasibility and benefits of free software for urban and environmenta
 l applications.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:Georeferencing of Urban Trees Using Drones and Ground-Level Imaging
 \, and Classification of Their Species by Machine Learning - Paulo Roberto
  Ferreira Maciel\, Rodrigo Smarzaro
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/JEVCDC/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-TNEYZW@talks.osgeo.org
DTSTART;TZID=-03:20241204T171500
DTEND;TZID=-03:20241204T174500
DESCRIPTION:This article presents the case of the mapping of the informal s
 ettlement Erizo Juan Santamaría. The neighborhood went from being an empt
 y space on digital maps to being part of the official cartography of Costa Ri
 ca. The mapping was carried out using technologies based on free/open soft
 ware and participatory cartography methodologies\; the work was done joint
 ly between the people who live in the community and Laboratorio Experiment
 al (LabExp)\, a research and extension project of the public university Inst
 ituto Tecnológico de Costa Rica. The active participation of the communit
 y in the process was key for the Municipal Council of Alajuela\, where the
  neighborhood is located\, to make official the traces and names of the st
 reets and alleys of Erizo Juan Santamaría for municipal purposes. Further
 more\, at the request of the municipal council\, the National Nomenclature
  Commission approved the names at the national level.\n\nThe informality o
 f the Erizo Juan Santamaría neighborhood lies in the fact that the people
  who live in the space do not own the land. The territory where the neighb
 orhood is located belongs to two public institutions\, one part to the Mun
 icipality of Alajuela and the other to the National Institute of Housing a
 nd Urbanism (INVU). In the 1970s\, the first families began to occupy the 
 territory where the settlement is currently located. Since then\, the inha
 bitants of Erizo Juan Santamaría have solved their basic common infrastru
 cture needs\, as well as managed access to public services. The two public
  institutions\, owners of the land\, as well as the neighboring neighborho
 ods have assessments and interests in the informal settlement\, which are 
 manifested in a tense relationship that includes marginalization\, manipul
 ation\, stigmatization\, and invisibility.\n\nIn 2017\, LabExp and represe
 ntatives of the neighborhood agreed to work together on a 4-year universi
 ty extension project aiming to make the informal settlement visible to dec
 ision-making institutions and neighboring neighborhoods through maps. Unti
 l then\, the neighborhood was not represented on commercial digital maps o
 r on the free OpenStreetMap map. LabExp proposed a work plan based on part
 icipatory processes\, the use of free software and open geospatial data.\n
 \nIt was determined to prioritize two elements to be mapped\, considering 
 the relevance for the community in its relationship with the different dec
 ision-making actors. The first was the houses: since INVU was interested 
 in developing a project to improve the neighborhood's housing infrastructu
 re\, the institution would carry out a census. Through a number in each ho
 use\, the map could be linked to the census data. The second was the stre
 ets and alleys\, with the intention that neighbors improve the way in whic
 h they gave their home addresses when requesting services. At all times\, 
 OpenStreetMap was considered the repository where the collected data wo
 uld be stored. The mapping process was carried out with free and open tool
 s from the OSM ecosystem: OSMTracker to capture GPS data in the field\, Fi
 eldpapers to collect data in workshops and conversations with neighbors\, 
 JOSM to edit the OSM map and QGIS both to create maps to capture data and 
 to create maps to disseminate the mapping process. The mapping activities 
 and dynamics included: free cartography workshops with students at the loc
 al school\, field trips and unstructured playful dynamics with children in
  the neighborhood.\n\nIn addition to the mapping\, two activities were key
  to foster a feeling of ownership of the process by the residents of the n
 eighborhood and to disseminate the partial and final results. The first wa
 s the production of short videos in order for the community's inhabitants 
 to narrate their reality about infrastructure\, show the neighborhood\, an
 d describe the relationship with the decision-making institutions\, in suc
 h a way that they linked these experiences with the process of mapping. Th
 e second activity was a voting process to choose names for streets and all
 eys. Each person in the neighborhood had the opportunity to make name prop
 osals for the mapped transit spaces. Subsequently\, the residents of the n
 eighborhood were called to elections. One Sunday morning\, each person had
  the opportunity to express their will\, voting for the names of their str
 eets and alleys together.\n\nThe mapping process was completed by 2021\, a
 nd Erizo Juan Santamaría appeared on the digital maps. In OSM\, the houses we
 re included with their respective numbering according to the needs of the 
 INVU\, the streets and alleys with the names selected by the inhabitants\,
  elements of public infrastructure\, trees and the proper name of the neig
 hborhood. The community was also represented on other commercial maps. Tha
 nks to the dissemination of the short videos and press releases in the Uni
 versity's and national media\, the mapping process of Erizo Juan Santamar
 ía became known to the members of the Municipal Council of Alajuela. The
  Council dedicated an entire session to hearing about the project and agreed to mak
 e official the names of the streets and alleys decided in the voting proce
 ss by the neighbors. In addition\, the Council had the names made official
  before the National Nomenclature Commission of the National Geogra
 phic Institute.\nThe case of Erizo Juan Santamaría is a unique example in
  the country where\, through participatory cartography\, the production of
  free geospatial data contributes to official cartography. The visibili
 ty of the neighborhood on digital maps makes it easier for the inhabitants
  to access services that were previously denied or restricted due to the i
 nsecurity that people offering the service felt about visiting the neighbo
 rhood\, partly due to stigmatization and partly because the location corre
 sponded to an empty space on the digital map. Given the increasing use of digital m
 aps to access services and make decisions\, it is important to discuss the
  right of communities to appear on digital maps.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:Study Case of Erizo Juan Santamaría: from free map to official car
 tography - Jaime Gutiérrez Alfaro
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/TNEYZW/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-WRGQFQ@talks.osgeo.org
DTSTART;TZID=-03:20241204T174500
DTEND;TZID=-03:20241204T181500
DESCRIPTION:Marine mammals occur in low densities and usually in areas that
  are difficult to access. One of the main sources of information for marin
 e mammals are stranded animals. However\, strandings are rare events and t
 o be biologically meaningful they need to be accumulated over large distan
 ces\, long times\, or both. This work describes SIMMAM (Sistema de Apoio a
 o Monitoramento de Mamíferos Marinhos)\, a project aimed at organizing a 
 database of marine mammal sightings and strandings along the Brazilian coa
 st and available at https://libgeo.univali.br/simmam. It began as an inter
 nal research project by UNIVALI but is now used by IBAMA and ICMBio. Its i
 nitial implementation has already been described [Moraes\, 2005\; Barreto 
 et al.\, 2006]. However\, it was almost completely rewritten since its ini
 tial implementation and SIMMAM 3.0 now conforms to the DarwinCore (DwC) st
 andard\, which is an international scientific initiative of the Taxonomic 
 Database Working Group - TDWG. The data architecture adopted is compatible
  with GBIF\, which allows SIMMAM to become a data publisher of marine mamm
 al occurrences.\nFor the development of SIMMAM 3.0\, free and open source tools 
 were adopted. On the server side\, PHP 7.4 was used with Symfony 5.x frame
 work. For the web client side\, the site is rendered on the server side an
 d delivered to the browser as an HTML + JavaScript page with Bootstrap 5. 
 The data exchange API was also implemented in PHP\, following the XML stan
 dard of DwC. Data is stored in PostgreSQL 11.x with PostGIS 3.x which allo
 ws manipulation of geospatial data. Tables were structured according t
 o the DwC standard\, to reduce the complexity of the communication API.\nA
 s all occurrences in SIMMAM need to have a geographic position\, the main 
 interface for users to view the data is through an interactive WebGIS. The
  implemented WebGIS has filters by taxon and by type of occurrence (sighti
 ng\, stranding\, incidental capture) to allow users to focus on specific d
 ata. To better display areas with high density of records without generati
 ng visual clutter\, the occurrence layer was clustered\, grouping and ungr
 ouping records according to the zoom level. Leaflet Map [Agafonkin\, 2020]
  was used with the OpenStreetMap base map\, as it is a modern map engine\,
  has functionalities optimized for mobile devices\, and does not have any 
 external dependency. Leaflet supports multiple layers and is compatible wi
 th the Open Geospatial Consortium (OGC) standards\, with support for map
  mosaics\, georeferenced images\, WMS [Leaflet\, 2020] and GeoJSON [IETF\,
  2021].\nOne key aspect of biological information is the taxonomic identif
 ication. To avoid taxonomic instability in SIMMAM\, it uses the taxonomic 
 list provided by the Integrated Taxonomic Information System – ITIS (www
 .itis.gov). As the taxonomic classification of mammals is very stable\, it
  was decided to keep a copy of the ITIS database locally to reduce latency
 \, updated on demand.\nThe types of occurrence records currently su
 pported by SIMMAM are stranding\, incidental capture and sightings. All th
 ese occurrence records have fields for defining the best taxonomic level\,
  geo-referencing the occurrence\, information on biological material colle
 cted and the person responsible for the data. Stranding and incidental cap
 ture records contain information regarding the state of the animal (alive 
 or dead)\, the condition of the carcass (decomposition stage)\, sex and le
 ngth. For sightings it is possible to inform environmental parameters such
  as weather condition\, sea state\, wind speed\, as well as if it was a si
 ngle animal\, part of a group and group size.\nThe first version of SIMMAM
  was made available in 2007 to the Centro Mamíferos Aquáticos - CMA\, th
 at started to use it as the main tool to integrate data for the Brazilian 
 Stranding Network of Aquatic Mammals (Rede de Encalhes de Mamíferos Aquá
 ticos do Brasil\, REMAB). In the same year\, SIMMAM was presented to the t
 hen General Coordination of Oil and Gas (Coordenação Geral de Petróleo 
 e Gás – CGPEG)\, current General Coordination of Marine and Coastal Ent
 erprises (Coordenação-Geral de Licenciamento Ambiental de Empreendimento
 s Marinhos e Costeiros – CGMAC)\, that used it to aggregate and organize
  marine mammal sighting data generated by marine mammal observers [Barreto
  et al.\, 2019\; Britto\; 2009]. Presently\, sighting data are regularly u
 ploaded to SIMMAM directly by the licensed companies. \nAs of June 2024\, 
 SIMMAM has 423 active users and holds 75\,340 aquatic mammal records. Of t
 hese\, 61% of records are private\, but this proportion varies greatly dep
 ending on the type of record. For strandings\, which are mostly submitted
  by research institutions\, 91% are private as they are the results of ind
 ividual efforts. But for sightings\, 61% are public\, as they come mostl
 y from the oil industry as part of the environmental licensing of their op
 erations\, and they mirror the public reports that have been delivered to 
 IBAMA. As mentioned before\, all the data held in the SIMMAM database\, re
 gardless of its public availability\, can be seen by Brazilian environment
 al agencies (IBAMA and ICMBio). \nThe option to allow government agencies 
 to use the whole dataset is extremely important for management purposes\, 
 as it enables environmental agencies to use even unpublished data generate
 d by research institutions. But as the data is not available for the gener
 al public\, this does not compromise its future use in academic publicatio
 ns. Also\, a limited visualization of private data in the WebGIS\, where d
 etails of the record such as species and date are not shown\, serves as an
  indication for other researchers that a specific institution has data on 
 marine mammals in a specific area\, fostering collaborations among institu
 tions.\nWe believe that presenting this work at FOSS4G 2024 will allow us 
 to discuss SIMMAM with the geospatial community and receive input to furth
 er improve the system. It shows a successful implementation of open geospa
 tial technologies that is being used both by government and the academic c
 ommunity.
DTSTAMP:20260422T231510Z
LOCATION:Room V
SUMMARY:SIMMAM 3.0 – Updating the Toolbox for the Conservation of Marine 
 Mammals - Alencar Cabral
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/WRGQFQ/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-foss4g-2024-academic-track-XMAGHX@talks.osgeo.org
DTSTART;TZID=-03:20241204T174500
DTEND;TZID=-03:20241204T181500
DESCRIPTION:## 1. Introduction\nUrban transportation is transforming with a
  focus on sustainability and smart city initiatives. Cycling\, a key eleme
 nt of sustainable urban mobility\, needs robust infrastructure and reliabl
 e data for growth and integration into city planning. Despite advancements
  in sensor technology and (geo)data analytics\, there is a gap in comprehe
 nsive collection and use of cycling-specific environmental\, safety\, and 
 pathway data.\n\nOne major deterrent for citizens using bicycles are the p
 erceived dangers in traffic. Identifying insecure sections is crucial to i
 mproving cycling infrastructure. Safe countermeasures can change negative 
 perceptions and promote cycling as a safe and sustainable mode of transpor
 t. Traditionally\, only actual crashes are included in official data\, inf
 orming city planning decisions. However\, analysing high-risk occurrences 
 like near-miss incidents\, which greatly impact the perceived danger\, can
  provide a more accurate understanding of cycling safety.\n\nThere already
  exist a number of projects using different technologies to gather and pro
 vide data on bicycle safety and urban mobility\, but combining environment
 al and road safety aspects is unique. Projects examining cyclist safety\, 
 particularly dangerously close overtaking manoeuvres\, often involve remot
 e data processing with machine learning or human analysis. Using live vide
 o or images from bike-mounted smartphones is effective but creates data ov
 erhead and privacy concerns. Additionally\, microcontroller-based sensing 
 systems can be complex to assemble\, requiring technical skills and specia
 l equipment.\n\nThe objective of this work is to address the aforementione
 d gap by developing an innovative bicycle sensor system that leverages emb
 edded artificial intelligence (AI) to process sensor data on the device. T
 his approach has the potential to reduce data overhead and address privacy
  concerns while simultaneously providing actionable insights. Our work has
  the potential to make significant contributions to traffic and transport 
 planning by providing valuable insights into traffic patterns and road saf
 ety concerns using extensive spatial datasets gathered by citizens.\n\n## 
 2. System Design\nAt the core of our system is a microcontroller unit (MCU) o
 f the senseBox family. The senseBox is a versatile\, open hardware electro
 nics kit specifically designed for citizen science projects and educationa
 l initiatives\, with an emphasis on environmental monitoring and data coll
 ection.\n\nThe following environmental sensors are used:\n- Temperature & 
 rel. Humidity (HDC1080)\n- Particulate Matter (SPS30)\n- Acceleration (MPU
 6050)\n- Time-of-Flight (ToF) ranging (VL53L8CX)\n\nMoreover\, battery man
 agement\, Bluetooth Low Energy (BLE)\, and OLED-Display modules are includ
 ed for connectivity and user feedback. All parts fit into a custom designe
 d\, 3D printed enclosure which is attached to the seat post of a bicycle.\
 n\nThe device communicates over BLE with an open source smartphone app\, w
 hich receives the sensor data and combines it with geolocation data. Data
 sets are recorded and saved on the smartphone\, but can also be uploaded t
 o openSenseMap as open data during the ride. Users can control levels of p
 rivacy (e.g. by setting privacy zones) to foster digital sovereignty. \n\n
 ## 2.1. Machine Learning on the Bike\nWe are introducing two approaches to
  utilise machine learning capabilities using Tensorflow Lite on the sensor
  device: overtaking detection and road surface / quality classification. B
 y processing the data directly on the device instead of sending it to larg
 er servers\, bandwidth and energy consumption are kept minimal.\nIn the low
  resolution depth images recorded by the 8x8 multizone ranging ToF sensor\
 , overtaking vehicles can be detected using shallow neural networks. This 
 has already been described and implemented as a standalone solution in (Sc
 harf et al.\, 2024)\, but for integrating it into the mobile sensor system
  some considerations for available processing capacities\, suitable infere
 nce times and necessary accuracies will be addressed as part of this work.
  \nTo classify the road surface and its quality\, the acceleration sensor 
 will be used. While raw acceleration values can identify the roughness of 
 a road\, surface classifications and quality estimations can reveal deviat
 ions between intended and actual surfaces. Using acceleration values 
 and geolocation data\, we will explore training a machine learning model u
 sing OpenStreetMap Surface information as ground truth data.\n\n## 3. Work
 shops\nEngaging citizens in data collection\, problem identification\, and
  the construction of sensor stations empowers them and fosters the generat
 ion of new ideas. Our solution is a solder-free\, easy-to-assemble mobile 
 sensor device. We conduct a workshop in São Paulo\, Brazil\, where 20 par
 ticipants build and mount their own mobile sensor device on bicycles. Afte
 rwards\, they collect environmental and bicycle-specific sensor data. Foll
 ow-up workshops in Münster\, Germany will allow the comparison of the con
 trasting bicycle infrastructures in these cities\, as well as the general 
 urban environment differences\, and will provide valuable data and insight
 s into participants' perceptions.\nThis collaborative effort enhances part
 icipants' understanding of scientific methods and urban mobility challenge
 s while ensuring that the collected data reflects cyclists' authentic expe
 riences. By involving citizens as active contributors\, we aim to bridge t
 he gap between scientific research and community needs\, fostering a more 
 inclusive and participatory approach to urban mobility solutions.\nAfter t
 he workshops we conduct user studies with the workshop participants on the
  following topics:\nUsability: Through surveys at the end of each session 
 and interviews\, participants provide feedback on assembling\, mounting an
 d connecting the bicycle sensor device. \nTrust in Data: Participants revi
 ew the data of their recorded dangerous overtaking manoeuvres and road sur
 face types and compare its accuracy with their own perceptions.\n\n## 4. Conclusion an
 d Future Work\nThis comprehensive evaluation aims to provide a thorough un
 derstanding of both the user experience and the technical performance of t
 he system\, ultimately guiding the data-driven foundation for improvements
  in urban mobility solutions. Insights gained from this work will inform f
 uture iterations of the project\, ensuring the system collects high-qualit
 y data and meets the needs of cyclists\, thereby effectively enhancing urb
 an mobility and road safety not only for cyclists but for all users of the
  urban mobility system. Future works will include the development of an op
 en source bike-related data analysis platform as a recommender system for 
 bike infrastructure measures in cities.
DTSTAMP:20260422T231510Z
LOCATION:Room I
SUMMARY:Urban Cycling: Intelligent Bicycle Sensors for Road Safety and Sust
 ainability - Felix Erdmann\, Luis Fernando Villaça Meyer\, Beatriz Gonça
 lves
URL:https://talks.osgeo.org/foss4g-2024-academic-track/talk/XMAGHX/
END:VEVENT
END:VCALENDAR
