Geodaysit 2023

09:00
09:00
120min
Registration / Registrazione partecipanti
Sala Videoconferenza @ PoliBa
09:00
120min
Registration / Registrazione partecipanti
Sala Biblioteca @ PoliBa
11:00
11:00
120min
Opening Session AIT Congress
Enrico Borgogno-Mondino

Enrico Borgogno Mondino, President of AIT
Monica Sebillo, President of ASITA
Francesco Cupertino, Rector of the Politecnico di Bari
Leonardo Damiani, Director of the DICATECh
Eugenio Di Sciascio, Deputy Mayor of the Municipality of Bari
Umberto Fratino, President of the Order of Engineers of Bari
Antonio Acquaviva, Representative of the National Council of Surveyors and Graduate Surveyors
Giovanni Bruno, Vice-President of the Order of Geologists of Puglia

Remote Sensing in Public Administration, Tiziana Bisantino (Head of the Decentralized Functional Centre of the Civil Protection Department, Puglia Region)
New Space Economy: Scenario and Perspectives for Earth Observation, Antonio Messeni Petruzzelli (Research Delegate of the Politecnico di Bari)

AIT2023 is the 11th Congress of the Italian Remote Sensing Association (AIT). Since its foundation in 1985, AIT has been the key reference body supporting the communication and coordination of scientific activities in the field of Earth Observation in Italy.

AIT aims to support the development and dissemination of the culture of Remote Sensing (RS) in Italy, promoting its environmental applications and bringing together the main scientific, industrial and institutional actors. AIT supports national RS initiatives in Italy and fosters their internationalization. AIT organizes events and courses and publishes the European Journal of Remote Sensing in collaboration with Taylor & Francis.

AIT2023 is the place where academia, industry, professionals and institutions involved in any way in Remote Sensing and Earth Observation (EO) can meet and discuss. For researchers, AIT2023 is an important opportunity to present their recent advances to a broad, transdisciplinary audience. For industry, it is the occasion to showcase recent products and services of interest to the RS community. Last but not least, for professional partners and for decision-makers in land/water/urban management, conservation, natural resource management and spatial planning, AIT2023 is the key event to present their experiences and update their knowledge in the field of RS and Earth Observation. All topics concerning remote and proximal sensing, spatial analysis and environmental modelling will be considered for the AIT congress.

AIT Contribution
Sala Videoconferenza @ PoliBa
13:00
13:00
90min
Lunch / Pranzo
Sala Videoconferenza @ PoliBa
13:00
90min
Lunch / Pranzo
Sala Biblioteca @ PoliBa
14:30
14:30
15min
Bright Target Detection on SAR Raw Data Based on Deep Convolutional Neural Networks
Giorgio Cascelli, Alberto Morea, Khalid Tijani, Nicolò Ricciardi, Cataldo Guaragnella, Raffaele Nutricato

Since the deployment of the first satellite equipped with a Synthetic Aperture Radar (SAR) into orbit in 1978, the use of SAR imagery has been a vital part of several scientific domains, including environmental monitoring, early warning systems, and public safety.
SAR could be described as "non-literal imaging" since the raw data does not resemble an optical image and is incomprehensible to humans.
For this reason, raw data is typically processed to create a Single Look Complex (SLC) image, which is a high-resolution image of the scene being observed. The processing of raw data to create a SLC image involves several steps, including range compression, Doppler centroid estimation and azimuth compression.
Processing raw data requires significant computing power; as a result, it is rarely practical to do it on board. As a direct consequence, the data is transmitted back to Earth to be processed.
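As a rough illustration of the range-compression step mentioned above (not the authors' code; chirp length, bandwidth and target delay are all invented for the sketch), a matched filter correlates each raw range line with a replica of the transmitted chirp:

```python
import numpy as np

def chirp(n, bandwidth=0.5):
    """Baseband linear-FM chirp replica, n samples (normalized units)."""
    t = np.arange(n) - n / 2
    return np.exp(1j * np.pi * bandwidth * t**2 / n)

def range_compress(raw_line, replica):
    """Matched filtering via FFT: circular correlation with the replica."""
    n = len(raw_line)
    return np.fft.ifft(np.fft.fft(raw_line) * np.conj(np.fft.fft(replica, n)))

# One simulated raw range line: a single point-target echo delayed by 300 samples.
n, delay = 1024, 300
replica = chirp(128)
raw = np.zeros(n, dtype=complex)
raw[delay:delay + 128] = replica
compressed = range_compress(raw, replica)
print(int(np.argmax(np.abs(compressed))))   # prints 300, the target delay
```

After compression, the spread-out chirp energy collapses into a sharp peak at the target's range bin, which is why targets are recognizable in an SLC image but not, at first glance, in the raw data.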
The objective of next-generation studies [1] is to optimize Earth Observation (EO) data processing and image creation in order to deliver EO products to the end user with very low latency using a combination of advancements in the on-board parts of the data chain.
In this work, we focus on a sea scenario and propose to eliminate any pre-processing by training a Deep Convolutional Neural Network (DCNN) to directly recognize bright targets on raw data.
This indeed might substantially shorten the delivery time thus improving the efficiency of satellite-based maritime monitoring services.
In this regard, the availability of training data represents one of the critical issues for the development of machine learning algorithms. In fact, the efficacy of the final machine learning-powered solution for a specific application is ultimately determined by the quality and amount of the training data.
However, to date, there are no training SAR raw data available in scientific literature with regard to the specific topic of sea scenario monitoring. Furthermore, their generation from real data is a time-consuming task.
In this work we propose and investigate physically and statistically based approaches to simulate a marine scenario and generate realistic synthetic training SAR raw datasets.
We then trained and evaluated a state-of-the-art DCNN on the generated synthetic dataset and subsequently on real raw data extracted from the ERS imagery archive. This is one of the first experiments of its kind proposed in the SAR literature, and results are quite encouraging, as they reveal that a well-trained DCNN can correctly recognize strong scattering objects in SAR raw data.

[1] M. Kerr, et al. “EO-ALERT: a novel architecture for the next generation of earth observation satellites supporting rapid civil alerts”, in 71st International Astronautical Congress (IAC), 2020.

Acknowledgments
This work was carried out in the framework of the APP4AD project (“Advanced Payload data Processing for Autonomy & Decision”, Bando ASI “Tecnologie Abilitanti Trasversali”, Codice Unico di Progetto F95F21000020005), funded by the Italian Space Agency (ASI). ERS data are provided by the European Space Agency (ESA).

AIT Contribution
Sala Biblioteca @ PoliBa
14:30
15min
Integrated use of terrestrial geomatic techniques, aerial LiDAR and satellite SAR for the survey of a dam and the surrounding area
Serena Artese, Michele Perrelli

The paper describes the integrated survey carried out on the Redisole dam (San Giovanni in Fiore, southern Italy), on the surrounding area and on the internal and external parts of the structure. The dam reservoir has not yet been filled, so some elements that are usually not visible (spillway, bottom outlet) could be surveyed and mapped.
This survey has involved the integration of different geomatic techniques, which can be summarized as follows:
- Georeferencing of the dam, by means of the survey of fixed landmarks with triple frequency differential GPS, in the RDN-ETRF2000 network;
- Quadcopter drone flight acquiring stereoscopic images for photogrammetric use over a surveyed area of about 25 hectares; the same drone was used to obtain close-up images for the inspection of the bituminous surface of the dam. To this aim, GCPs were positioned and their coordinates acquired with differential GPS and Total Station;
- Survey by long-range Terrestrial Laser Scanner of the structure, of its components and of the surrounding area (up to 500 m from the dam); the fixed landmarks and some Ground Control Points, also surveyed by drone photogrammetry, have been used.
- Scanning of the tunnels, of the surface and bottom outlets as well as of the shaft, with a series of static scans conducted with a high precision, medium range Terrestrial Laser Scanner.
- Registration of the obtained point clouds with the DTM and DSM (provided by the Italian Ministry for the Environment) with 1 m spacing;
- Registration of the Persistent Scatterers (PS) provided by the Italian Ministry for the Environment.
The products obtained are:
- External planimetry of the dam and plano-altimetric rendering (spot heights and contour lines) in vector format, manageable in a CAD environment;
- Map, profiles and dimensioned cross-sections of the tunnels, of the bottom and surface outlets, as well as of the shaft;
- 3D textured model of the entire site;
- Orthophoto of the whole site with 5 cm/pixel resolution;
- High resolution orthophoto projected on the dam wall, for the visual search of the critical points of the surface.
The future activities include:
- The accurate identification of the PS present in the area and the measurement of their movements for the identification of any displacements of the building and/or of the entire area;
- PS monitoring through periodic measurement via GNSS and TLS and comparison of relative movements with those obtained via Differential Interferometric Synthetic Aperture Radar.
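The registration of the various point clouds onto common landmarks is classically posed as a rigid-body least squares problem. A minimal sketch (synthetic data, not the survey's actual coordinates) using the Kabsch/SVD solution:

```python
import numpy as np

def rigid_register(source, target):
    """Least squares rigid-body fit (Kabsch/SVD): rotation R and translation t
    such that source @ R.T + t best matches target. Inputs are (N, 3) arrays
    of corresponding points, e.g. landmarks seen in two point clouds."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# Synthetic check: rotate and translate a small cloud, then recover the motion.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 0.5])
R, t = rigid_register(cloud, cloud @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # prints True True
```

In practice the correspondences come from the fixed landmarks and GCPs described above, and the residuals of the fit give a quality check on the registration.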

AIT Contribution
Sala Videoconferenza @ PoliBa
14:45
14:45
15min
A dual-polarimetric SAR processing chain for soil moisture retrieval
Anna Verlanti

According to sustainable agriculture best practices, the efficient use of scarce water resources is both a marketing objective and an environmental obligation. This implies that in agricultural production, which is intensive and should at the same time be environmentally friendly, soil moisture is a key parameter to be constantly monitored. In addition, soil moisture plays a crucial role in plant development and human development, as well as in the global cycles of various substances. It serves as an essential input variable for various scientific analyses, ranging from hydrological modeling, forecasting of floods and groundwater movement to the modeling of global water fluxes.

Information about soil moisture can be obtained from in-field measurements taken, for instance, using point sensors [1] that provide detailed point-like information. An alternative to field measurements is to use observations remotely sensed from satellite-borne instruments. Both optical and microwave radiation exhibit sensitivity to soil moisture, with optical remote sensing limited to clear-sky conditions and affected by solar illumination [2]. Microwave radiation, on the other hand, is largely unaffected by weather conditions and guarantees all-day observations. Among microwave remote sensing instruments, the Synthetic Aperture Radar (SAR), i.e., a microwave imaging radar, is very promising for soil moisture retrieval on a spatial scale fine enough to be used for sustainable agriculture purposes.

To retrieve soil moisture from microwave remotely sensed data, the key issue is de-coupling surface roughness from the dielectric constant. Within this context, two different approaches are widely used: a) physical modelling and b) empirical methods. A promising approach, both physically sound and computationally efficient, was proposed in [3]; it consists of using dense time series of SAR measurements to decouple surface geometric effects (plant growth stage, etc.) from dielectric properties. The underpinning idea is that plant appearance will not change drastically from one image to another if the time series is dense enough, hence variations in the dielectric properties can be isolated. Once the permittivity is estimated, the soil moisture can be retrieved using an empirical approach, e.g., [4].

A mandatory step in designing an operational processing chain to retrieve soil moisture using [3] is masking out built-up areas, vegetation, high-slope terrain, etc. In this study, a polarimetric processing chain is proposed that, starting from dual-polarized SAR measurements, is able:
1. To mask out built-up areas using reflection symmetry, i.e., a property that is satisfied by natural scenes but not by man-made targets. This property manifests itself in the inter-channel correlation, i.e., the correlation between co- and cross-polarized channels, which is low for natural targets and large over built-up areas [5].
2. To mask out vegetated areas using eigenvalue decomposition parameters, i.e., the polarimetric entropy and the mean alpha angle, partitioning the polarimetric space to identify vegetated regions according to their peculiar polarimetric response.
3. To use the digital elevation model (DEM) to identify areas with steep slopes.
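A minimal sketch of masking steps 1 and 3 (thresholds, window size and pixel spacing are illustrative placeholders, not the chain's actual settings):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def boxcar(a, win=5):
    """Local mean over win x win neighborhoods (valid region only)."""
    return sliding_window_view(a, (win, win)).mean(axis=(-2, -1))

def reflection_symmetry_mask(s_co, s_cross, win=5, thr=0.3):
    """True where the co-/cross-pol correlation magnitude is high, i.e.
    where reflection symmetry is violated, flagging likely built-up pixels."""
    num = np.abs(boxcar(s_co * np.conj(s_cross), win))
    den = np.sqrt(boxcar(np.abs(s_co) ** 2, win) * boxcar(np.abs(s_cross) ** 2, win))
    return num / den > thr

def slope_mask(dem, pixel_size=10.0, max_slope_deg=15.0):
    """True where the DEM slope exceeds max_slope_deg (placeholder limit)."""
    gy, gx = np.gradient(dem, pixel_size)
    return np.degrees(np.arctan(np.hypot(gx, gy))) > max_slope_deg

# Correlated channels (a crude stand-in for a man-made target) get flagged,
# while independent channels (natural clutter) mostly do not.
rng = np.random.default_rng(0)
s_co = rng.normal(size=(40, 40)) + 1j * rng.normal(size=(40, 40))
built = reflection_symmetry_mask(s_co, 0.5 * s_co)
natural = reflection_symmetry_mask(
    s_co, rng.normal(size=(40, 40)) + 1j * rng.normal(size=(40, 40)))
```

The H/alpha partition of step 2 works analogously, thresholding entropy and mean alpha computed from the local coherency-matrix eigendecomposition.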

The proposed processing chain will be showcased on actual SAR measurements acquired by Sentinel-1 over two areas of interest, namely the Campania and Sardinia regions. In the Campania region, the test case includes ground information about soil moisture collected by a ground station provided by Netcom Group S.p.A. First experimental results show the soundness of the proposed processing chain, which yields sufficiently accurate estimates with remarkable computational efficiency.

References

[1] Lekshmi SU S, Singh DN & Shojaei Baghini M 2014. A critical review of soil moisture measurement. Measurement 54, 92-105. doi:10.1016/j.measurement.2014.04.007

[2] Gao, B.-C. 1996. NDWI - A normalized difference water index for remote sensing of vegetation liquid
water from space. Remote Sensing of Environment 58: 257-266.

[3] Balenzano, A., Mattia, F., Satalino, G., Davidson, M.W.J., 2011. Dense temporal series of C- and L-band SAR data for soil moisture retrieval over agricultural crops. IEEE J.Sel. Top. Appl. Earth Obs. Remote Sens. 439–450

[4] Hallikainen, M.T., Ulaby, F.T., Dobson, M.C., El-Rayes, M.A., Wu, L., 1985. Microwave dielectric behavior of wet soil - Part II: Dielectric mixing models. IEEE Trans. Geosci. Remote Sens. GE-23, 35–45.

[5] F. Nunziata, M. Migliaccio and C.E. Brown, “Reflection symmetry for polarimetric observation of man-made metallic targets at sea,” IEEE Journal of Oceanic Engineering, vol.37, no.3, pp.384-394, 2012.

AIT Contribution
Sala Biblioteca @ PoliBa
14:45
15min
Development of a Photovoltaic System Extraction Index for the detection of large PV plants using Sentinel-2 images
Alessandra Capolupo, Eufemia Tarantino, Claudio Ladisa, Fernando J. Aguilar

Authors: Claudio Ladisa, Manuel A. Aguilar, Alessandra Capolupo, Eufemia Tarantino, Fernando J. Aguilar.
The use of renewable energy sources in power generation is increasing due to environmental awareness and technological advancements. Solar energy, with its extensive availability and minimal greenhouse gas emissions, is a promising source. However, large photovoltaic (PV) plants require constant monitoring to ensure efficiency and reliability. Remote sensing technology can be beneficial in providing accurate information on a plant's size, shape, and location, reducing costs and increasing monitoring efficiency. The detection of large PV plants can be carried out using various technologies, including satellite imagery, drone imagery or observation from aircraft. The use of satellite imagery is advantageous for the detection of large PV plants because it makes it possible to acquire data over a large area without moving around the site and to monitor the plant over time without interfering with its activity. Open-source imagery from satellites like Sentinel-2 (S2) and Landsat 9 has led to a significant increase in remote sensing research related to extracting PV systems. This is because the free and public availability of high-quality images with extensive spatial coverage has eliminated the need to buy costly private satellite images. Additionally, the frequency of image acquisition, which can occur every few days, has allowed for quick and accurate monitoring of areas of interest. Several studies have recently merged remote sensing with machine learning (ML) methods to develop automatic classification algorithms for PV systems. Most of these algorithms employ different spectral indices, such as the Normalized Difference Water Index (NDWI), the Normalized Difference Vegetation Index (NDVI), and the Normalized Difference Bare Index (NDBI), as input.
These spectral indices provide useful information on the presence of water, vegetation and bare soil, respectively, which can be used to identify PV systems more accurately, thus improving classification accuracy. However, no specific spectral index has been tested exclusively for the extraction of PV systems. This is partially because PV arrays may be constructed on many kinds of surfaces, in various environmental and climatic circumstances, and with different solar panel sizes and types. In this regard, the goal of this work was to propose a Photovoltaic Systems Extraction Index (PVSEI) for the detection of PV installations from S2 images in two distinct study areas characterized by the persistent presence of large PV installations: the province of Viterbo (Italy) and the province of Seville (Spain). The development of the PVSEI was based on the combination of different bands provided by S2, in order to maximise the spectral difference between the solar panels and their surroundings. For each study area, two S2 images, one taken in February and the other in August, were used to analyse the seasonal variation of the solar panels' spectral signature and test the PVSEI's accuracy in each of the four scenarios. The image analysis was carried out using an Object-Based Image Analysis (OBIA) method, since it allowed for a more accurate identification of PV systems than the pixel-based method, which analyzes individual elements without taking their spatial arrangement and semantic significance into account. Multi-resolution segmentation was used to create segments of different dimensions based on scale, shape and compactness parameters. The Decision Tree (DT) classifier was used to evaluate the effectiveness of the PVSEI and its importance in comparison to the other indices used in the literature, in both locations and for both periods, after the objects had been labelled as "PV" and "No-PV".
The effectiveness of the new index was demonstrated by the results obtained from the DT analysis. In three out of four scenarios, the PVSEI was selected as the first cut in the DT. In the remaining scenario, where it was not ranked first, it still maintained a high level of significance, being the second index in importance. The accuracy was assessed using an error matrix calculated both on the entire segmentation dataset (i.e. using all the objects) and with a TTA mask with 2 m pixel size. Four metrics were used to evaluate the accuracy of the PVSEI: Overall Accuracy (OA), Kappa Index of Agreement (KIA), Producer Accuracy (PA), and User Accuracy (UA) for both classes. OA exceeded 98% in all scenarios, both for the segmentation dataset and the TTA mask. KIA values for the TTA mask ranged from 0.81 to 0.86, while values for the segmentation objects ranged from 0.74 to 0.82. In conclusion, the new index demonstrated favourable outcomes in both study areas, with only a limited number of misclassifications involving bare soil objects whose spectral signature resembles that of certain photovoltaic systems.
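The abstract does not report the PVSEI band combination, so the sketch below only illustrates the general pattern: a normalized-difference index built from two bands (the band choice and all numbers are invented) and the OA and KIA metrics used in the accuracy assessment:

```python
import numpy as np

def norm_diff(b1, b2):
    """Generic normalized-difference index of two bands (e.g. NDVI, NDWI)."""
    return (b1 - b2) / (b1 + b2 + 1e-12)

def overall_accuracy_and_kappa(truth, pred):
    """OA and Cohen's kappa (KIA) from the error matrix of a
    binary PV / No-PV classification."""
    classes = np.unique(np.concatenate([truth, pred]))
    cm = np.zeros((len(classes), len(classes)))
    for i, t in enumerate(classes):
        for j, p in enumerate(classes):
            cm[i, j] = np.sum((truth == t) & (pred == p))
    n = cm.sum()
    oa = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return oa, (oa - pe) / (1 - pe)

# Tiny toy example: 8 objects labelled PV (1) / No-PV (0).
truth = np.array([1, 1, 1, 0, 0, 0, 0, 0])
pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])
oa, kia = overall_accuracy_and_kappa(truth, pred)
print(round(oa, 3), round(kia, 3))   # prints 0.75 0.467
```

In the OBIA workflow these metrics would be computed per object (or per TTA-mask pixel), exactly as the error matrix in the abstract describes.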

AIT Contribution
Sala Videoconferenza @ PoliBa
15:00
15:00
15min
Aerial LiDAR and Infrared Thermography for urban-scale energy assessment and planning
Sebastiano Anselmo, Maria Ferrara, Yogender Yadav

In cities, the building sector is the main contributor to energy consumption and carbon dioxide emissions. At the same time, there is a high potential for energy saving by renovating the buildings themselves, so decision-making for smart city development should focus primarily on them. Several policies have been drafted to set a path towards mitigating such impacts, decarbonising the energy supply and reducing the total energy demand. Nevertheless, from a smart city-oriented perspective, it is crucial to develop tools that support policymakers' choices and raise citizens' awareness. Remotely sensed data can be used to assess the current state and thus to simulate possible improvements. Existing literature shows extensive use of infrared thermography for assessing individual buildings, while little has been done at the district or urban scale.

In this contribution, we present the potential of Aerial Infrared Thermography (AIT) and LiDAR point clouds for estimating energy uses and potential photovoltaic production. First, AIT is used for energy classification, a key parameter for estimating the current energy demand. Then, two alternative retrofitting scenarios, proposing an improvement by two classes and an upgrade of the whole building stock to meet the highest standards, are compared in terms of primary energy savings and prevented emissions. Scenarios also account for the energy supply side, with the possibility of installing photovoltaic panels to power heat pumps as an alternative to traditional heating methods, i.e. district heating and natural gas boilers.

In addition to Infrared thermography, aerial LiDAR point clouds are also key data for planning and managing the energy resources in cities. The efficiency of solar panels primarily depends on the incidence angle of the radiation on the panels and, therefore, proper planning is crucial for the installation and setup of solar plants. One of the possible applications of LiDAR point clouds for the energy sector is to support this phase to maximise efficiency. Thanks to the 3D classified LiDAR point clouds, it is possible to extract the buildings with precise restitution of the pitches, their dimension and orientation, then categorising them into planar/flat, slant or dome types in order to estimate the angle of incidence of sunlight radiations and to better assess the maximum solar potential. In this way, an accurate data sheet for each building can be drafted, reporting precise data on theoretical production and usable surface.
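The incidence-angle computation described above can be sketched as follows (assumed conventions, not the authors' implementation: roof aspect and solar azimuth both measured clockwise from north):

```python
import numpy as np

def incidence_cosine(slope_deg, aspect_deg, sun_azimuth_deg, sun_elevation_deg):
    """Cosine of the angle between the sun direction and the roof-pitch normal;
    it scales the direct irradiance on the pitch. Returns 0 when the sun
    is behind the plane."""
    s, a = np.deg2rad(slope_deg), np.deg2rad(aspect_deg)
    az, el = np.deg2rad(sun_azimuth_deg), np.deg2rad(sun_elevation_deg)
    # Unit normal of the pitch and unit vector toward the sun (x=east, y=north, z=up).
    normal = np.array([np.sin(s) * np.sin(a), np.sin(s) * np.cos(a), np.cos(s)])
    sun = np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])
    return float(np.clip(normal @ sun, 0.0, None))

# A flat roof under a zenith sun receives the full direct component:
print(incidence_cosine(0, 0, 180, 90))   # prints 1.0
```

The slope and aspect inputs are exactly the pitch attributes extracted from the classified LiDAR point cloud, so the same function can be evaluated per roof plane over the sun positions of a reference year.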

Future work includes the development of three-dimensional energy models, updated regularly, able to describe the current situation precisely and to simulate alternative scenarios. State-of-the-art smart city digital twins can also be employed for urban energy management and to capture and understand urban energy complexities over time. The concept of the energy community can also be introduced at the local level, where neighbourhoods generate and share energy from renewable sources.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:00
15min
Improve Sentinel-2 time series consistency with S2SDB DataBase for operational image co-registration
Federico Filipponi

The Copernicus Sentinel-2 satellite constellation senses the Earth's surface at high spatial and spectral resolution, and its high revisit frequency fosters new advances in land monitoring capacity. Sentinel-2 MSI data exhibit variable geolocation accuracy, resulting in a weak spatial coherence that significantly affects time series consistency at the pixel level. Although evolving Sentinel-2 MSI processing baselines aim, among other objectives, to improve image co-registration with respect to a Global Reference Image (GRI), geolocation accuracy is not yet adequate for detailed time series analysis. Many methodologies to quantify image shifts developed in past years require a significant computational effort to effectively co-register satellite acquisition time series.
To enable operational image co-registration, the Sentinel-2 Shift DataBase (S2SDB) has been established. The S2SDB contains information about horizontal local shifts that can easily be applied to any Sentinel-2 MSI spectral band or derived spatially explicit product using various image processing software solutions. By releasing simple but relevant information under an open access data policy, the database can reduce the time and computational effort required to significantly improve the spatial coherence and time series consistency of Sentinel-2 MSI imagery. Improved co-registration may also strengthen satellite sensor interoperability, producing denser time series that improve Earth observation land monitoring for a wide range of applications.
S2SDB is freely accessible from the open access data repository available at link https://github.com/ffilipponi/S2SDB.
Improvements in time series consistency at the pixel level using the S2SDB are demonstrated for selected case studies, related to the monitoring of forest disturbances for logging identification and to the use of time series analysis for the estimation of phenological metrics at the Italian national scale.
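How a shift record from such a database could be applied is sketched below; the record layout (a per-acquisition (dx, dy) in pixels) and the granule name are assumptions, and real shifts are generally sub-pixel, requiring proper resampling rather than the whole-pixel roll used here:

```python
import numpy as np

def apply_integer_shift(band, dx, dy):
    """Shift a 2D band by whole pixels (dx along columns, dy along rows)."""
    return np.roll(np.roll(band, dy, axis=0), dx, axis=1)

# Invented shift record for one acquisition (the actual S2SDB layout may differ).
shifts = {"S2A_20230612T100031": (2, -1)}
band = np.arange(16.0).reshape(4, 4)
dx, dy = shifts["S2A_20230612T100031"]
aligned = apply_integer_shift(band, dx, dy)
```

For sub-pixel shifts one would instead resample (e.g. bilinearly) at the shifted grid coordinates; the point of the database is that only the two shift values per acquisition need to be distributed.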

AIT Contribution
Sala Biblioteca @ PoliBa
15:15
15:15
15min
MTInSAR and ground-based geomatic observations for the analysis of displacements affecting an urbanized area
Alberico Sonnessa

Hydrogeological instability phenomena involving urban areas represent a potential threat to structures and people. In-depth knowledge of the spatial and temporal evolution of the ground surface and the related displacement field thus becomes essential for mitigating and managing the risk associated with these phenomena. In this context, Multi-Temporal Interferometric Synthetic Aperture Radar (MTInSAR) techniques are gaining momentum in the monitoring of built-up regions affected by landslides. The presented research provides an example of the benefit of using Copernicus C-band Sentinel-1 SAR products to support management/mitigation strategies in case of building settlements in urban areas. To this aim, Sentinel-1 SAR acquisitions and ground measurements (i.e. high-precision geometric levelling) have been jointly used to investigate an ongoing instability affecting the town of Chieuti, located in the Apulia region (Southern Italy). Furthermore, a geostatistical analysis has been developed in a Geographic Information System (GIS) to compare the Sentinel-1 SAR dataset with the results obtained from the ground-based geomatic observations. The study demonstrates the effectiveness of Sentinel-1A SAR data as a long-term routine monitoring tool for millimetre-scale motions in areas affected by ground instabilities triggered by landslides. The outcomes of this analysis supported the design of the mitigation measures implemented to secure the study area, demonstrating once again the importance of satellite remote sensed SAR data in driving land management strategies and civil protection actions where potentially dangerous instability phenomena are underway.
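When comparing MTInSAR rates with geometric levelling, the SAR line-of-sight (LOS) measurement is commonly projected to vertical under the assumption of purely vertical deformation; a one-line sketch with invented values:

```python
import numpy as np

def los_to_vertical(v_los, incidence_deg):
    """Project a LOS displacement rate to vertical (same units in and out),
    assuming the deformation has no horizontal component."""
    return v_los / np.cos(np.deg2rad(incidence_deg))

# A -3 mm/yr LOS rate observed at a 36 deg incidence angle:
print(round(los_to_vertical(-3.0, 36.0), 2))   # prints -3.71
```

Only after this projection are the satellite rates and the levelling benchmarks on a comparable footing for the geostatistical analysis in the GIS.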

AIT Contribution
Sala Videoconferenza @ PoliBa
15:15
15min
Pixel Mixture Issue in Mapping Vineyard Phenology. A Possible Solution Based on Sentinel-2 Imagery and Local Least Squares
Enrico Borgogno-Mondino, Francesco Parizia, Federica Ghilardi, Alessandro Farbo, Filippo Sarvia, Samuele De Petris

Precision viticulture aims to enhance the quality standards of wine production by improving vineyard management. In this framework, satellite optical remote sensing has already proved effective for mapping vegetation behavior in space and time. These maps, properly processed, are useful to optimize agronomic practices, improving wine production/quality and mitigating environmental impacts. Nevertheless, vineyards represent a challenge in this context because grapevine canopies are discontinuous and the observed reflectance signal is affected by the background. In fact, satellite imagery ordinarily provides spectral measurements at medium-low geometric resolution (≥ 100 m2). Therefore, spectral mixture between grapevine canopies, grass and soil is expected within a satellite-derived reflectance pixel, and not considering this problem can deeply affect deductions based on these data. In this work, Sentinel-2 (S2) NDVI maps (10 m resolution) were computed and compared to those obtained from a DJI P4 multispectral UAV over a 1.5 ha vineyard located in the Piemonte region (NW Italy). The proportions of row and inter-row (α(x,y) and 1-α(x,y)) within each S2 pixel were computed and mapped by classifying the DJI photogrammetric point cloud. Using α(x,y) and the S2 NDVI values, an inverse spectral unmixing system was defined, solving for the two average endmember NDVI values (row and inter-row) with a moving-window (21x21 pixels) least squares approach. Results were compared at the S2 pixel level to the average values computed from the DJI data, showing a MAE of 0.15 and 0.10 for row and inter-row NDVI, respectively.
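The moving-window unmixing described above can be sketched as follows (synthetic α map and endmember values, not the study's data): within a window, each S2 pixel contributes one equation ndvi = α·ndvi_row + (1-α)·ndvi_inter, and the two endmembers are recovered by least squares:

```python
import numpy as np

def unmix_window(ndvi, alpha):
    """Least squares endmember NDVI values (row, inter-row) for one window,
    given the per-pixel row fraction alpha."""
    A = np.column_stack([alpha.ravel(), 1.0 - alpha.ravel()])
    x, *_ = np.linalg.lstsq(A, ndvi.ravel(), rcond=None)
    return x

# Synthetic 21x21 window: true endmembers 0.85 (row) and 0.35 (inter-row).
rng = np.random.default_rng(1)
alpha = rng.uniform(0.1, 0.9, size=(21, 21))
ndvi = 0.85 * alpha + 0.35 * (1 - alpha) + rng.normal(0.0, 0.01, alpha.shape)
row_ndvi, inter_ndvi = unmix_window(ndvi, alpha)
print(round(row_ndvi, 2), round(inter_ndvi, 2))   # close to 0.85 and 0.35
```

Sliding the window across the scene yields per-pixel row and inter-row NDVI maps, which is what the abstract compares against the UAV-derived averages.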

AIT Contribution
Sala Biblioteca @ PoliBa
15:30
15:30
15min
An Automatic and Effective Pipeline for Individual Tree Detection and Segmentation Using Low-Density Airborne Laser Scanning Data in Large Areas of Mediterranean Forest
Abderrahim Nemmaoui, Fernando J. Aguilar, Manuel A. Aguilar

Forests act as important carbon sinks, therefore being key components of the global carbon cycle. Accounting for carbon dioxide emissions is essential for climate regulation policies and for evaluating their effects, as well as for understanding the services forests provide to societies.
Traditionally, forest inventories are compiled by ground-based expert crews. These field surveys are uneconomical, time-consuming and not adequate for studies requiring periodic data collection. Consequently, one of the key topics in forest applications is finding an effective method to produce accurate inventories.
In recent years, Remote Sensing (RS) has proven capable of providing independent, timely and reliable forest information. RS data are used to estimate several forest variables of silvicultural interest such as crown diameter (CD), tree height (H), diameter at breast height (DBH) and aboveground biomass (AGB). In this sense, and due to its ability to estimate attributes at the tree level, LiDAR-derived point cloud data has become a valuable data source for the efficient and accurate detection and segmentation of individual trees (IT).
State-of-the-art approaches use different algorithms for individual tree segmentation (ITS). For each algorithm, a specific methodology to create the input Canopy Height Model (CHM) and/or many parameters have to be tuned to adapt the segmentation algorithm to each particular forest stand. This makes the results highly dependent on the locally fitted parameters, which implies difficulties for large-scale mapping. In addition, the parameter setting process is quite time-consuming and requires learning and understanding the meaning and role of each parameter.
This work aims at developing a pipeline that requires minimal user interaction when working on large areas of Mediterranean forest. The expected results should facilitate the production of broad-extent IT maps and the extraction of the corresponding dendrometric parameters from low-density airborne laser scanning (ALS) data without spending time tuning algorithm parameters.
The study area was located in the Sierra de María-Los Vélez Natural Park (Almeria, Spain). Up to 38 reference square plots of 25 m side were surveyed, containing reforested stands of Aleppo pine (Pinus halepensis Mill.) with variable density and tree height, and presence of shrubs and low vegetation mainly represented by small holm oak trees (Quercus ilex L.). This forest composition and structure make up a typology that is very representative of Mediterranean forests.
Three open source raster-based (i.e., CHM-based) algorithms were tested to extract tree locations and some dendrometric parameters such as tree H and CD. The first is the method proposed by Dalponte & Coomes (2016), adapted and implemented in the lidR package (Roussel et al. 2020). The second is the algorithm developed by Silva et al. (2016), which focuses on better approximating the intersecting canopies of multiple trees after locating treetops by local maxima. The last raster-based algorithm tested is included in the Digital Forestry Toolbox (DFT) library. In addition, the point cloud-based algorithm proposed by Li et al. (2012) was also tested.
For every algorithm tested, we tried different parameters to find the best pipeline, finally obtaining up to 4024 combinations across all tested algorithms for each experimental plot. For each setting, tree detection accuracy was assessed by computing the detection rate and the commission and omission errors. Statistics such as the median, RMSE and relative RMSE were also used to quantitatively assess the accuracy of the tree H and CD estimates over each reference plot.
The IT detection accuracy rates, in terms of precision, recall, and F1-score, showed the successful performance of the proposed pipeline. The algorithm by Li et al. (2012) showed an average detection F1-score of 82.65% (using the same parameter combination for all 38 experimental plots). However, it failed in delimiting the crown diameter (relative RMSE of 57.06% and Pearson r of 0.55). The method by Silva et al. (2016), when applied to a CHM generated with the point-to-raster algorithm and using LM based on a variable Tree Window Size (TWS), presented a similar F1-score for ITS (82.53%), while being more successful in delimiting the crown (relative RMSE of 22.21% and Pearson r of 0.68). Finally, the Dalponte & Coomes (2016) and DFT methods showed slightly worse results, with average F1-scores of 80.41% and 75.66%, respectively.
The results obtained confirm the usefulness of low-density ALS data both to detect ITs and to estimate H and CD, also underlining some key aspects regarding the choice of the correct method and parameters to perform single tree detection for Aleppo pine over large areas of Mediterranean forests.
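As a minimal sketch (not the authors' code), the detection accuracy figures reported above — precision, recall and F1-score — can be derived from the counts of matched reference trees (true positives), unmatched detections (false positives) and missed trees (false negatives); the example counts are hypothetical:

```python
# Sketch: detection accuracy metrics for individual tree segmentation,
# given counts of matched reference trees (tp), unmatched detections (fp)
# and missed trees (fn). The counts below are illustrative only.

def detection_scores(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall and F1-score, expressed as percentages."""
    precision = tp / (tp + fp)   # 1 - commission error
    recall = tp / (tp + fn)      # 1 - omission error (detection rate)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": 100 * precision,
            "recall": 100 * recall,
            "f1": 100 * f1}

# Example plot: 85 matched trees, 15 false detections, 20 missed trees
scores = detection_scores(tp=85, fp=15, fn=20)
```

The F1-score balances commission and omission errors, which is why it is a convenient single figure for comparing segmentation algorithms across plots.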

AIT Contribution
Sala Biblioteca @ PoliBa
15:30
15min
Earth Observations applied to Critical Raw Materials supply chain
Susanna Grita, Piero Boccardo, Vittoria Olgiati, Alberta Pavone

As humanity enters the 4th Industrial Revolution, marked by the digital transition, the global demand for strategic minerals is rising quickly. Critical Raw Materials (CRM) are among the commodities facing an increasing supply risk due to availability and political reasons. In order to increase the EU's self-sufficiency in CRM, there is growing interest in identifying mineral resources in Europe and in stipulating acceptable trade agreements with diverse external suppliers. With the Raw Materials Act, the European Union commits to a sustainable management of raw materials. This includes promoting sustainable mining, which commits to minimizing the social, economic and environmental impacts caused by resource extraction. It also means reducing mining rates, in order to guarantee reserves for future generations. Despite these stringent rules applied to the extractive industry, the conversion to more sustainable practices on a global scale is still slow, and not all countries have translated the principles of sustainable mining into laws or are able to enforce them successfully. In this context, thanks to the increasing availability of aerial and satellite data, mineral and mine facility mapping with optical images is quickly gaining ground. This technique is a cost-effective, non-invasive solution for supporting early-stage exploration and the monitoring of extractive facilities. Here we show some examples of how Earth Observations can support the mining industry at different phases of the supply chain. These applications use freely available multispectral satellite data, such as Landsat and Sentinel-2 images, as well as commercial high-resolution data, such as Planet. The high temporal resolution, as in the case of Planet and Sentinel-2 products, and the long lifespan of Landsat data allow the evolution of mine sites and their surroundings to be analyzed effectively.
The outcomes represent preliminary results focused on mineral characterization through band indexes and spectral signature analyses, and on impact assessments of the extraction sites on the nearby land. The study aims to contribute to understanding the current standing of the mining sector in achieving the sustainable mining targets. It shows, on the one hand, that remote sensing is an innovative tool for identifying and characterizing new, inaccessible resource deposits; on the other, that it is a sufficiently mature technology for measuring the social and environmental footprint of the CRM market on a global scale. As illustrated in the Raw Materials Act, Earth Observations are key to supporting different phases of the minerals' value chain. These results and the related literature may be considered a benchmark for future research in this domain.
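To illustrate the band-index idea mentioned above, the following sketch computes a simple band-ratio index of the kind used for mineral characterization — here the common iron-oxide ratio (red/blue, e.g. Landsat 8 B4/B2). The reflectance arrays are synthetic stand-ins, not data from the study:

```python
import numpy as np

# Sketch under stated assumptions: a band-ratio index for mineral
# characterization. High red/blue ratios highlight iron-bearing
# surfaces. The 2x2 "rasters" below are toy reflectance values.

def band_ratio(num: np.ndarray, den: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel band ratio, guarding against division by zero."""
    return num / np.maximum(den, eps)

red = np.array([[0.30, 0.10], [0.25, 0.05]])
blue = np.array([[0.10, 0.10], [0.05, 0.05]])
iron_oxide = band_ratio(red, blue)   # high values -> candidate iron-rich pixels
```

The same pattern generalizes to other indexes (clay, carbonate) by swapping the band pair; operational work would of course use calibrated surface reflectance.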
This research is funded by the National Plan for Recovery and Resilience (PNRR) project GeosciencesIR.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:45
15:45
15min
Estimating the influence of building density bias on the accuracy of Global DEM of Differences in urban change analysis
Alessandra Capolupo, Eufemia Tarantino

Much research involving Earth's physical processes and environmental systems is computationally time-consuming and, as a result, has a substantial impact on the time necessary to collect and manage data. Over the years, numerous acceptable methods for describing surface morphology and enabling quick computer solutions were developed. Nevertheless, since 1991 the Digital Elevation Model (DEM) has been recognized as the best alternative for attaining this goal because, in addition to its capacity to provide baseline morphological information quickly, it has the exclusive property of being a 2.5-D surface. The quality and trustworthiness of the results it provides are determined by its resolution, elevation accuracy, and shape/topological correctness. Elevation accuracy is normally established by statistically analysing differences between DEMs and reference datasets such as Ground Control Points (GCPs), whereas shape/topological correctness is typically defined by demonstrating DEM conformity with some universal principles. The root mean square error is commonly used for the first aim, whilst DEM derivatives are examined for the second. However, neither approach is without limitations, since their performance is influenced by the quality of the reference data and by the difficulty of measuring DEM realism.
This is much more difficult when the DEM under consideration covers the entire globe. Even though they are described as homogeneous products, the accuracy of Global DEMs in terms of elevation and realism varies with geographical location, morphology, land cover, and climate. Furthermore, as satellite stereoscopic technologies, as well as photogrammetric and SAR interferometric methods, have evolved, the number of available Global DEMs has substantially increased. Most of them were also collected in different historical periods and, consequently, they may be useful free open-source data for conducting consistent global change detection analyses.
In such a framework, this study aims to investigate the appropriateness of medium-resolution open-access Global DEMs for evaluating changes in urban contexts between 2000 and 2011. To accomplish this, the primary freely accessible Global DEMs were statistically examined and, after selecting the best pair, a change detection analysis was carried out. To assess its accuracy, the findings were compared to the Copernicus Land Monitoring Service's land use layers from the same historical periods (https://land.copernicus.eu/). Lastly, this study seeks to estimate and predict the bias caused by building density in accordance with the urban fabric type.
The procedure was implemented by writing appropriate JavaScript code on the Google Earth Engine (GEE) web-based platform. The GEE catalogue was first consulted to determine the available Global DEMs corresponding to the historical period under investigation; once identified, they were imported into the application programming interface and validated using the "internal" technique. As a result, AW3D30 (3.2), which was released in early January 2021, and SRTM DEM V3 were deemed the optimal combination for research purposes over an 11-year timeframe. They were thus used as input data for calculating the corresponding DEM of Differences (DoD) and quantifying the alteration in urban environments. Owing to the error propagation law, the resultant DoD had substantial internal incoherencies, which were subsequently removed statistically using Tukey's filter. This is widely acknowledged as an effective method for identifying and cleaning out internal noise without prior knowledge of it. A significant number of Tukey's outliers were identified and eliminated in the DoD, mostly in wooded and hilly zones, owing to the differing quality of the input data. Following that, to reduce misclassification and distinguish noise from real changes, the resulting DoD was further filtered using the Uniformly Distributed Error (UDE) strategy, developed by Brasington et al. in 2003. However, the UDE technique, while assuming a Gaussian distribution of the internal error, does not adapt the filtering threshold to local conditions, resulting in an over- or underestimation of the amount of information to remove. Urban variation was then assessed by combining the filtered DoD with Corine Land Cover (CLC) data. This integration also enabled statistical investigation and modelling of the DoD error associated with the urban fabric type.
When comparing the CLC information to both Tukey's outliers and UDE noise in urban areas, the error was found to increase linearly with building density. This implies that urban change quantification could be further improved by correcting the building density bias. In future works, the introduced approach will be enhanced by taking building height into consideration.
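The Tukey filtering step applied to the DoD can be sketched as follows — an assumed minimal implementation, not the authors' GEE code: pixels outside the fences [Q1 − 1.5·IQR, Q3 + 1.5·IQR] are treated as internal noise and masked.

```python
import numpy as np

# Sketch of Tukey's fences applied to a DEM of Differences (DoD):
# values outside [Q1 - k*IQR, Q3 + k*IQR] are masked as noise.
# The 1-D array stands in for a flattened DoD raster.

def tukey_filter(dod: np.ndarray, k: float = 1.5) -> np.ndarray:
    q1, q3 = np.nanpercentile(dod, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    out = dod.astype(float).copy()
    out[(out < lo) | (out > hi)] = np.nan   # mask outliers
    return out

dod = np.array([0.2, -0.1, 0.3, 0.1, 25.0, -0.2])  # 25.0 m is spurious
filtered = tukey_filter(dod)
```

Note that, as the abstract observes, such a global threshold is blind to local conditions — the motivation for the subsequent UDE filtering step.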

AIT Contribution
Sala Biblioteca @ PoliBa
16:00
16:00
30min
Coffee break
Sala Videoconferenza @ PoliBa
16:00
30min
Coffee break
Sala Biblioteca @ PoliBa
16:30
16:30
15min
Assessing Glacier Extent Changes through Machine Learning Algorithms and Remote Sensing Data
Vanina Fissore, Lorenza Ranaldi, Davide Lisi, Piero Boccardo, Alessandro La Rocca, Mirko Frigerio, Daniele Sanmartino

Glaciers are critical elements in the Earth's climate system and sensitive indicators of climate change. They store significant amounts of freshwater, which is essential for animal and human consumption and for activities like industry and agriculture. Furthermore, glaciers have a significant impact on the hydrological cycle, and their melting also contributes to rising sea levels. Understanding and monitoring glacier extent changes is critical to informing climate policies, assessing natural hazards and safeguarding global water resources. Nowadays, remote sensing technology is a proven and widely adopted source of information for this purpose.
In this context, the proposed study aims to develop a regression model able to predict future changes in glacier extent, using supervised machine learning algorithms applied to open-access medium and high spatial resolution satellite data of the EU Copernicus programme. To achieve this objective, two machine learning models are developed. The first is a segmentation model that employs a U-Net architecture, along with a final Conditional Random Field (CRF) module, to delineate glacier features from satellite images. The purpose of the segmentation model is to vastly expand the dataset required by the regression model in terms of glacier surface values. In fact, this work presents an additional contribution in the form of a novel dataset consisting of time series of glacier and snow extent. This dataset is generated by applying the best-performing segmentation model previously trained to multiple glaciers, spanning a 30-year period and a consistent seasonal interval. To train the segmentation model and create the required ground truth images, the GLIMS initiative database is used, while optical satellite images are obtained partly from Sentinel-2 data and partly from other publicly available datasets such as the "Hindu Kush Himalayas (HKH) glacier mapping dataset". The latter couples annotated glacier locations, produced by experts, with multispectral imagery from Landsat 7.
The second model is a multivariate regression model that seeks to identify the relationships between Land Surface Temperature (LST) and glacier/snow extent.
In order to train the models, two datasets are required. For the regression model, and specifically for LST, data from the Sentinel-3 SLSTR instrument are utilized, as well as data from the ESA Climate Change Initiative, which consolidates data from various satellites over the past 25 years. Historical data on glacier extent and elevation are obtained from the "Glaciers elevation and mass change data from 1850 to present from the Fluctuations of Glaciers" database by the Copernicus Climate Change Service and from datasets provided by the Global Land Ice Measurements from Space (GLIMS) initiative. Finally, both models are validated on testing data to assess their generalization capabilities and their performance on real-world cases. A subset of the segmentation dataset is kept aside to extract metrics such as the Intersection-over-Union (IoU), which allows the accuracy of the results to be assessed and compared with other architectures. For the regression model, error metrics such as the Root-Mean-Square Error (RMSE) are considered to assess model performance. The results of the study are expected to provide insights that will enhance the monitoring of glacial features and provide useful information about the impact of climate change on glaciers worldwide.
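The IoU metric used to validate the segmentation model can be sketched as below — a hedged toy example on binary masks, not real GLIMS-derived data:

```python
import numpy as np

# Sketch: Intersection-over-Union between a predicted and a reference
# binary glacier mask. The 2x3 masks are toy examples.

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = iou(pred, truth)   # 2 overlapping pixels / 4 pixels in the union
```

IoU penalizes both over- and under-segmentation symmetrically, which makes it a common headline metric for comparing segmentation architectures.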

AIT Contribution
Sala Biblioteca @ PoliBa
16:30
15min
Sentinel-2 open data processing and morphodynamic modelling: an integrated approach to model sediment supply effects on rivers, estuarine and coastal areas
Bianca Federici, Lorenza Apicella, Monica De Martino

Morphodynamics aims to predict the evolution of the topography of rivers, estuaries, and coastal regions under different environmental forcings. Understanding the stability of such systems is a fundamental issue which may help the management of these areas in terms of flood control, erosion prevention, and habitat restoration.
Although the study of morphodynamics has made great progress over the decades, even from a theoretical point of view, models need data to be tested and possibly used in machine learning algorithms. From this point of view, remote sensing is a powerful tool that provides data and a way to monitor changes in these systems over time.
The processing of open images from the Sentinel-2 satellite (https://sentinel.esa.int/web/sentinel/missions/sentinel-2) can support the study of the morphodynamic evolution of rivers, estuaries, and coastal environments. By collecting multispectral images and using appropriate algorithms, the water depth of riverbeds and seafloors can be derived, and the emerged and submerged areas can be classified automatically into bedrock or vegetation types. In addition, satellite images can be used to derive parameters such as channel width, whose evolution over time indicates erosion or deposition processes, and water turbidity, which can be an indicator of suspended sediment transport. Hence, data collected through image analysis provide a useful input for morphodynamic modelling.
We propose combining remote sensing and morphodynamic modelling for a comprehensive river system assessment. This integrated approach can provide an accurate understanding of river morphology, hydrodynamics, and sediment dynamics, supporting informed decision-making for sustainable river management. In this paper, a preliminary application of this novel approach to the case of the Roia river in Liguria is presented. The Sentinel-2 multispectral optical images are processed and integrated with in-situ measurements to create a dataset for the morphodynamic model. In particular, the Satellite Derived Bathymetry is computed to estimate the depth variations along the river course, and image classification is performed to map different types of riverbed features such as vegetation, water turbidity, and sedimentation (Apicella et al. 2023, Apicella et al. 2022). Such a dataset is first used to test the capacity of some existing theoretical morphodynamic models (Seminara et al. 2012, Ragno et al. 2021) to predict the equilibrium topography of the inlet reach of the Roia river. As a second step, the stability and evolution of the system under different scenarios of river discharge and sea forcing will be investigated.
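For illustration, Satellite Derived Bathymetry is often computed with a log-ratio model of the Stumpf type: depth ≈ m1·ln(n·R_blue)/ln(n·R_green) + m0, with m0 and m1 calibrated against in-situ depths. The coefficients and reflectances below are hypothetical — the cited Apicella et al. papers describe the workflow actually used:

```python
import numpy as np

# Hedged sketch of a common log-ratio SDB formulation (Stumpf-type).
# m0, m1 would be calibrated on in-situ soundings; n is a scaling
# constant keeping the logarithms positive. All values are illustrative.

def sdb_log_ratio(blue, green, m1, m0, n=1000.0):
    blue, green = np.asarray(blue, float), np.asarray(green, float)
    ratio = np.log(n * blue) / np.log(n * green)
    return m1 * ratio + m0   # estimated depth (metres)

blue  = [0.045, 0.060, 0.080]   # surface reflectance, blue band
green = [0.050, 0.070, 0.100]   # surface reflectance, green band
depths = sdb_log_ratio(blue, green, m1=-60.0, m0=62.0)
```

The log-ratio form is popular because the blue/green attenuation difference varies monotonically with depth and is relatively insensitive to bottom albedo.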
The work is carried out within the Robotics and AI for Socio-economic Empowerment – RAISE (https://www.raiseliguria.it/) project, funded by the "Piano Nazionale di Ripresa e Resilienza" – PNRR (https://www.mise.gov.it/it/pnrr), which aims to create a sustainable and resilient ecosystem that supports economic development, social well-being, and environmental conservation. An application activity focuses on the hydrographic, coastal and marine environments, which are key drivers of the local economy. In this context, one of the outcomes will be a system for assessing the risk and vulnerability of coastal areas (deltas, river mouths and lagoons) to climate change.
Acknowledgments
This work was carried out within the framework of the project "RAISE - Robotics and AI for Socioeconomic Empowerment” and has been supported by European Union – NextGenerationEU.
References
Apicella, L.; De Martino, M.; Ferrando, I.; Quarati, A.; Federici, B. Deriving Coastal Shallow Bathymetry from Sentinel 2-, Aircraft- and UAV-Derived Orthophotos: A Case Study in Ligurian Marinas. J. Mar. Sci. Eng. 2023, 11, 671. https://doi.org/10.3390/jmse11030671
Apicella, L.; De Martino, M.; Quarati, A. Copernicus User Uptake: From Data to Applications. ISPRS Int. J. Geo-Inf. 2022, 11, 121. https://doi.org/10.3390/ijgi11020121
Ragno, N.; Tambroni, N.; Bolla Pittaluga, M. When and where do free bars in estuaries and tidal channels form? Journal of Geophysical Research: Earth Surface 2021, 126, e2021JF006196. https://doi.org/10.1029/2021JF006196
Seminara, G.; Bolla Pittaluga, M.; Tambroni, N. Morphodynamic equilibrium of tidal channels. In W. Rodi, & M. Uhlmann (Eds.), Environmental Fluid Mechanics: Memorial Volume in Honour of Prof. Gerhard H. Jirka 2012, pp. 153–174. CRC Press. https://doi.org/10.1201/b12283

AIT Contribution
Sala Videoconferenza @ PoliBa
16:45
16:45
15min
Extreme mass loss of Brenva glacier from UAV surveys
Davide Fugazza, Fabrizio Troilo

Debris-covered glaciers are common in many parts of the world and contribute to the hydrological cycle and freshwater availability in arid regions. In the Italian Alps, some of the largest debris-covered glaciers are located in the Mont Blanc group; among them, Brenva glacier (5.95 km2 in the latest glacier inventory, Paul et al. 2020) reaches the lowest terminus elevation on the southern side of the Alps, at 1415 m a.s.l. The debris supply originated from several rockfall events throughout the Holocene, the most recent ones in the 1920s and in 1997. In 2004, the ice flow from the icefall to the glacier tongue was interrupted, and this led to enhanced ice stagnation and mass wasting. To investigate the recent evolution of the glacier tongue, we carried out two UAV surveys, in 2019 and 2020, using DJI Mavic and DJI Phantom 4 RTK drones. During the first survey, ground control points were used to increase the accuracy of the final products, while during the second survey we relied on RTK corrections to improve geolocation. The acquired images were processed using a structure-from-motion pipeline and yielded high-resolution orthomosaics and DEMs. By comparing the DEMs from the two photogrammetric surveys, we were able to describe the rapid thinning of the ice tongue, which lost more than 40 m in just one year. Downwasting of the ice was favoured by the formation of epiglacial lakes, which enhance melt. By generating DEMs and orthomosaics from aerial data, we reconstructed the recent history of the glacier, showing an initial phase of mass transfer from the rockfall and the subsequent melt-out of the ice tongue.
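The DEM-differencing step at the core of this analysis reduces to a per-pixel subtraction; the sketch below uses toy 2×2 elevation grids (metres), not the survey DEMs:

```python
import numpy as np

# Sketch: elevation change between two survey DEMs. Negative values
# indicate surface lowering (thinning). Toy grids, not survey data.

dem_2019 = np.array([[1450.0, 1452.0], [1448.0, 1449.0]])
dem_2020 = np.array([[1408.0, 1410.0], [1407.0, 1410.0]])

dh = dem_2020 - dem_2019            # per-pixel elevation change
mean_thinning = float(dh.mean())    # mean change over the tongue (m)
```

Multiplying the summed change by the pixel area would give the volume change, the quantity usually reported alongside thinning rates.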

AIT Contribution
Sala Biblioteca @ PoliBa
16:45
15min
Vegetation Cover Classification of Coastal Sand Dune Ecosystems Using Ultra-High-Resolution UAS Imagery and Machine Learning Techniques
Elena Belcore, Melissa Latella, Marco Piras, Carlo Camporeale

The increasing use of Uncrewed Aerial Systems (UAS) has opened up new opportunities for ultra-high-resolution (UHR) land cover (LC) classification using optical data with a Ground Sampling Distance (GSD) below 10 cm. Coastal sand dune ecosystems are difficult to map due to the variability of plant species, yet high-resolution vegetation mapping of these areas is crucial for analysing vegetation dynamics and spatial patterns and for predicting species diversity. The extreme similarity of the vegetation's spectral responses to multispectral sensors, the small size of coastal dune plants (mostly herbaceous), and the large amount of data generated are the main challenges in achieving UHR LC maps of vegetation.
This work focuses on developing a UHR vegetation cover classification model for three areas of the San Rossore National Park in Italy using data collected by UAS (DJI Phantom 4 Multispectral) with a multispectral optical sensor (RGB, red edge, NIR). The machine learning model is trained on two phenologically relevant epochs (September 2021 and May 2022) using a sampling scheme that combines UAS flight acquisitions and field vegetation survey data collected with high-precision positioning (dual-frequency GNSS). A total of 757 samples of herbaceous and shrub species were collected.
The UHR classification of 12 species and 2 service classes (Debris and Sand) is a multitemporal, supervised, object-oriented (OBIA) classification, characterised by spectral features, spectral indices, elevation, and texture. Three areas of about 5 hectares each were analysed, one used solely for transferability tests.
The calibrated multispectral orthomosaics and the Canopy Height Model (CHM) were generated with structure-from-motion-based processing. Textural features based on the Haralick co-occurrence matrix and spectral indices were computed, resulting in a final dataset of 31 features.
The semantic segmentation was performed using eCognition Developer (Trimble), based on the Normalised Difference Vegetation Index (NDVI), RGB and CHM of the May 2022 dataset, resulting in 383,200 elements over the three study areas. Imbalanced datasets, such as the one in this work, may lead to inaccurate classification, so the borderline Synthetic Minority Oversampling Technique (SMOTE) was used to oversample the training dataset.
The random forest algorithm was used to classify the species, and feature selection based on Gini impurity was conducted to reduce the dimensionality of the input features (reduced to 19 based on the statistical distribution of impurity).
To verify the accuracy of the model, a primary accuracy measure based on the error matrix was calculated, and the model was cross-validated using a 100-fold stratified cross-validation. The overall accuracy (OA) was 0.77, with a standard deviation of 0.14. After feature selection, the OA slightly decreased to 0.76, but the processing time improved and the standard deviation was reduced to 0.13. The model was then applied to the unseen dataset of the transferability-test area, and the OA decreased to 0.62.
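The error-matrix-based accuracy measure referred to above can be sketched as the overall accuracy, i.e. the trace of the confusion matrix over its total; the 3-class matrix below is a toy example, not the study's matrix:

```python
import numpy as np

# Sketch: overall accuracy (OA) from an error (confusion) matrix,
# where rows are reference classes and columns predicted classes.
# Correct assignments lie on the diagonal. Toy counts only.

def overall_accuracy(cm: np.ndarray) -> float:
    return float(np.trace(cm) / cm.sum())

cm = np.array([[50,  5,  5],
               [ 4, 40,  6],
               [ 6,  4, 30]])
oa = overall_accuracy(cm)
```

Per-class producer's and user's accuracies follow from the same matrix by normalizing rows and columns, which is why the error matrix is the usual starting point for LC map validation.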
In conclusion, using UAS and multispectral, multi-temporal optical data provides a valuable tool for ultra-high-resolution LC mapping of vegetation in challenging environments such as coastal sand dunes. The developed vegetation cover classification model based on machine learning algorithms accurately classifies vegetation species, and its performance is in line with the literature. Further research is needed to improve the model's accuracy when applied to different datasets and to extend the model to map other vegetation-dominated dune environments.

AIT Contribution
Sala Videoconferenza @ PoliBa
17:00
17:00
15min
Earth Observation Data and Extreme Gradient Boosting Model: innovative methods predicting West Nile Virus Circulation in Italy
Carla Ippoliti, Luca Candeloro, Susanna Tora, Federica Iapaolo, Federica Monaco, Daniela Morelli, Annamaria Conte

West Nile Disease (WND), caused by a vector-borne virus, is one of the most widespread zoonoses in Italy and Europe. In Italy, the surveillance for WN and USUTU viruses focuses on the early detection of virus circulation in a territory: it involves equids, wild and resident birds, and mosquitoes.
In the Italian ecosystem, peak transmission of WNV to humans typically occurs between July and September, coinciding with the summer season when mosquitoes are most active and temperatures are highest. To detect WNV circulation early, and therefore reduce the risk of transmission to humans, wild birds, corvids, poultry, horses, and mosquitoes are sampled according to a risk-based ranking of the Italian provinces, and WNV infections are confirmed. Together with field activities, it is important to identify climatic and environmental conditions suitable for the vectors and the virus to spread. The recent and massive availability of Earth Observation (EO) data and the continuous development of innovative machine learning methods can contribute to automatically identifying patterns in big datasets and to accurately identifying areas at risk.
In this study, the veterinary cases notified in the 2017-2020 epidemics were collected from the National Information System for Animal Disease Notification (SIMAN) and associated with climatic and environmental variables. EO data were derived from different sources, downloaded, mosaicked, converted to degrees (for temperature), pre-processed and harmonised: Land Surface Temperature (LST) Daytime and LST Night-time were derived from the NASA-MODIS MOD11A2 product (8-day temporal resolution, 250 m spatial resolution); the Normalized Difference Vegetation Index (NDVI) dataset was derived from the NASA-MODIS MOD13Q1 product (MODIS/Terra Vegetation Indices 16-Day L3 Global 250 m); the Surface Soil Moisture (SSM) was derived from the Copernicus Daily SSM 1-km V1 product. Every eight consecutive SSM images were merged to obtain a single raster covering the whole of Italy, for a total of 46 images per year. We then applied a gap-filling procedure to replace the empty pixels in the datasets, as the presence of missing values can prevent an accurate and homogeneous (in space and time) prediction. The three EO datasets were resampled to the highest available spatial resolution (250 m) using the bilinear interpolation method, and each dataset maintained its own temporal scale (NDVI: 16 days; LSTD, LSTN and SSM: 8 days).
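A per-pixel gap-filling step of the kind described above can be sketched with simple linear interpolation over the time axis — an assumed illustration, not the authors' exact procedure:

```python
import numpy as np

# Sketch: gap filling of a per-pixel time series. Missing composites
# (NaN) are replaced by linear interpolation between valid neighbours.
# The series below is a synthetic LST profile (Kelvin).

def fill_gaps(series: np.ndarray) -> np.ndarray:
    out = series.astype(float).copy()
    t = np.arange(out.size)
    bad = np.isnan(out)
    out[bad] = np.interp(t[bad], t[~bad], out[~bad])
    return out

lst = np.array([290.0, np.nan, 294.0, np.nan, np.nan, 300.0])
filled = fill_gaps(lst)
```

Operational gap filling may use spatial neighbours or climatology as well; temporal interpolation is the simplest variant consistent with the paragraph above.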
Applying a raster-based approach with a 16-day time window, we investigated WN virus circulation in relation to the EO variables collected during the 160 days before the infection took place, with the aim of evaluating the predictive capacity of lagged remotely sensed variables in identifying areas at risk for WNV circulation in Italy.
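The lagged-variable setup just described — 160 days split into ten 16-day windows before each case — can be sketched as follows; the daily series is synthetic and the aggregation (window mean) is an assumption for illustration:

```python
import numpy as np

# Sketch: build ten 16-day lagged predictors from a daily EO series
# for a given case date. Lag 1 is the most recent 16-day window,
# lag 10 the window 144-160 days before the case. Synthetic data.

def lagged_features(daily: np.ndarray, case_idx: int,
                    window: int = 16, n_lags: int = 10) -> np.ndarray:
    feats = []
    for lag in range(1, n_lags + 1):
        stop = case_idx - (lag - 1) * window
        start = stop - window
        feats.append(daily[start:stop].mean())
    return np.array(feats)

daily_lst = np.arange(200, 400, dtype=float)   # 200 synthetic daily values
x = lagged_features(daily_lst, case_idx=180)   # one 10-feature predictor row
```

Rows like `x`, stacked over pixels and dates, form the training matrix that a gradient-boosting model can then consume.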

An Extreme Gradient Boosting model was trained with data from 2017, 2018 and 2019 and tested on the 2020 epidemic, predicting the spatio-temporal WNV circulation two weeks in advance with an overall accuracy of 0.86 (sensitivity = 0.79, specificity = 0.91, AUC = 0.94).
This work lays the basis for an early warning system (16 days ahead) that alerts public authorities when climatic and environmental conditions become favourable to the onset and spread of WNV. This knowledge can be used to define intervention priorities within national surveillance plans.

AIT Contribution
Sala Videoconferenza @ PoliBa
17:00
15min
Future estimation of soil erosion in the Dudh Koshi basin (Nepal)
Francesco Niccolò Polinelli, Marco Gianinetto

Erosion is a major environmental threat that has a negative impact on agriculture and ecosystems. In the Dudh Koshi region of Nepal, soil erosion is taking place at a high rate, causing serious concern for the fertility of agricultural lands. This region of Nepal relies on a subsistence farming system; therefore, a reduction in the fertility of its lands could threaten the food security of the population inhabiting this mountain area. Some studies have been conducted to estimate the rate of soil erosion in this area of the world for the present and recent decades, but the study proposed in this work aims at estimating the future trends of soil erosion rates, up to 2100, in order to quantify the increase in soil loss and detect the areas that will face the worst cases. To achieve this goal, the two parameters of the D-RUSLE model that change over time were considered: precipitation (R-factor) and land cover (C-factor). As far as the R-factor is concerned, different climate change scenarios have been considered: eight combinations of Global Circulation Models (GCMs) under Representative Concentration Pathways (RCPs). To perform this analysis, data from the Himalayan Adaptation, Water and Resilience (HI-AWARE) initiative were used. To evaluate the future evolution of the C-factor, a neural network was trained using two land cover maps, representing the situation in 1990 and 2010, provided by the International Center for Integrated Mountain Development (ICIMOD).
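The RUSLE structure underlying the analysis multiplies the time-varying factors by the static ones: A = R · K · LS · C · P. The sketch below contrasts a present and a projected cell, varying only R and C as in the study; all numeric values are hypothetical, not calibrated for Dudh Koshi:

```python
# Sketch of the (D-)RUSLE soil-loss equation: A = R * K * LS * C * P.
# R = rainfall erosivity, K = soil erodibility, LS = slope length and
# steepness, C = cover management, P = support practices.
# All values below are illustrative placeholders.

def rusle_soil_loss(R, K, LS, C, P=1.0):
    """Annual soil loss A (t ha^-1 yr^-1)."""
    return R * K * LS * C * P

# Same cell under present and projected conditions: only R (climate
# scenario) and C (land cover evolution) change over time.
A_present = rusle_soil_loss(R=800.0, K=0.03, LS=5.0, C=0.15)
A_future  = rusle_soil_loss(R=950.0, K=0.03, LS=5.0, C=0.20)
```

Because the equation is multiplicative, a joint increase in erosivity and a sparser cover compound each other — the mechanism behind the projected worst-case areas.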

AIT Contribution
Sala Biblioteca @ PoliBa
17:15
17:15
15min
A Possible Role of NDVI Time Series from Landsat Mission to Characterize Lemurs’ Habitats Degradation in Madagascar
Enrico Borgogno-Mondino, Federica Ghilardi, Samuele De Petris, Valeria Torti, Cristina Giacoma

Deforestation is one of the main drivers of environmental degradation around the world. Slash-and-burn is a common practice performed in tropical forests to create new agricultural lands for local communities. In Madagascar, this practice affects many natural areas, including lemurs' habitats. Reforestation within natural reserves is desirable, combining native species with fast-growing ones and aiming at habitat restoration. In this context, the extensive detection of forest disturbances can effectively support restoration actions, providing an overall framework to address priorities and maximize ecological benefits. In this work, with respect to a study area located around the Maromizaha New Protected Area (Madagascar), an analysis was conducted based on a time series of NDVI maps from the Landsat missions (GSD = 30 m). The period 1991-2022 was investigated to detect the location and moment of forest disturbances, with the additional aim of quantifying the level of damage and the recovery process at every disturbed location. It is worth recalling that the Maromizaha New Protected Area presently hosts 12 species of lemurs. Detection was performed at pixel level by analyzing the local temporal profile of NDVI (yearly step). The time of each detected disturbance was found within the profile by looking for the minimum of the first derivative. The significance of the NDVI change was evaluated by testing the Cebyšëv condition, and the following parameters were mapped: (i) level of damage; (ii) year of disturbance; (iii) year of the eventual "total" recovery; (iv) rate of recovery. Finally, the temporal trends of both forest loss and recovery were analyzed to investigate potential impacts on the local lemur population and, more generally, on the entire Reserve.
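The pixel-level detection step — locating the disturbance year at the minimum of the first derivative of the yearly NDVI profile — can be sketched as below, on a synthetic profile with a simulated disturbance and recovery:

```python
import numpy as np

# Sketch: year of disturbance taken where the first derivative of the
# yearly NDVI profile is most negative (the sharpest drop). The NDVI
# profile is synthetic, with a disturbance followed by slow recovery.

def disturbance_year(years: np.ndarray, ndvi: np.ndarray) -> int:
    d = np.diff(ndvi)                     # first derivative, yearly step
    return int(years[np.argmin(d) + 1])   # year in which the drop lands

years = np.arange(1991, 2001)
ndvi = np.array([0.80, 0.81, 0.79, 0.80, 0.35,
                 0.40, 0.48, 0.55, 0.62, 0.70])
year = disturbance_year(years, ndvi)   # sharpest drop: 1994 -> 1995
```

The post-drop slope of the same profile gives the recovery rate, and the year NDVI regains its pre-disturbance level gives the "total" recovery year — the other parameters mapped in the study.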

AIT Contribution
Sala Videoconferenza @ PoliBa
17:15
15min
Air Quality Monitoring and Prediction in Ukraine During War Crisis Using Copernicus Data and Machine Learning
Marco Scaioni, Mohammad Mehrabi, Mattia Previtali

In late February 2022, the Russian invasion of Ukrainian territory started. As is known, air is one of the environmental components most affected during such exceptional circumstances. Changes in the pattern of civilian and industrial activities may cause variations in air quality in terms of different pollutants. Hence, conducting a proper air quality assessment can be of great importance in war-affected areas. The pivotal objective of this research is to present an overview of air quality monitoring and air pollution prediction carried out for the Ukrainian territory. Using the Copernicus Sentinel-5P TROPOMI observations, the emissions of ozone (O3), nitrogen dioxide (NO2), formaldehyde (HCHO), and carbon monoxide (CO) in Kiev, Kharkiv, Donetsk, Kherson, and Lviv were monitored during 2022. The relevant records are compared to the same business-as-usual (BAU) periods in 2019 and 2021 to detect significant changes. Visual interpretations supported by statistical analysis proved that the ongoing war has significant impacts on the concentration of pollutants throughout Ukraine. Following this, a hybrid machine learning model is developed to predict the concentration of a well-known air quality indicator, particulate matter 2.5 (PM2.5). The prediction results indicated a reliable accuracy of the proposed methodology, as well as its superiority over benchmark models. In short, this research shows a promising application of state-of-the-art technologies, including remote sensing and artificial intelligence, for solving air quality problems during exceptional events.

AIT Contribution
Sala Biblioteca @ PoliBa
17:30
17:30
15min
A burned area database for Italy from Sentinel-2 images and ancillary data
Luca Pulvirenti, Giuseppe Squicciarino, Dario Negro, Silvia Puca

The damage that fire events cause to vegetation structure and its evolution, together with the economic impacts on human activity, life and infrastructure, has driven scientific interest in developing tools and algorithms able to support the detection and monitoring of burned areas (BA).
The possibility of monitoring fire evolution and mapping BA has been strongly supported in recent decades by the availability of a significant quantity of satellite observations. The free and timely availability of remote sensing data has grown rapidly in recent years, together with increasing spatial resolution, making Earth observation-derived data a key component in supporting both government agencies and local decision-makers in monitoring natural disasters such as wildfires or floods.
Copernicus Sentinel-2, with 20-m spatial resolution and a 5-day revisit period, is an excellent candidate for near real-time (NRT) change detection applications based on spectral indices. An automatic NRT burned area mapping approach designed to map BA using Sentinel-2 (S2) data was proposed in [1] and recently updated in [2]. The AUTOmatic Burned Areas Mapper (AUTOBAM) tool was originally designed to respond to the need of the Italian Department of Civil Protection to monitor the spatial distribution and number of BA during the fire season (June-September) over the Italian territory. The atmospherically corrected Level-2A (L2A) surface reflectance products from S2 are used: the automatic chain downloads and processes the most up-to-date L2A products available on the Copernicus Open Access Hub over the studied area. A change detection approach is applied to three spectral indices (the Normalized Burn Ratio, the Normalized Burn Ratio 2, and the Mid-Infrared Burn Index): AUTOBAM compares the values of these indices at the current time with the values derived from the most recent cloud-free S2 data. The procedure for BA mapping is based on a sequence of image processing techniques, such as clustering, automatic thresholding and region growing, which lead to a final BA map with a 20-m grid pixel size. Finally, a quality flag is included for each AUTOBAM BA to certify temporal and spatial correspondence with ancillary data, such as active fire detections derived from MODIS and VIIRS and national fire notifications.
The daily run of AUTOBAM has allowed us to produce a burned area database for Italy. To evaluate the quality of the database, the AUTOBAM-derived BA have been compared with the burn perimeters compiled by the Carabinieri Command of Units for Forestry, Environmental and Agri-food Protection, which represent the official burned area data for Italy. A validation procedure based on both a pixel-based confusion matrix and a set of object-based accuracy metrics was set up considering the whole Italian territory and the years 2019-2021. Good results were obtained by AUTOBAM in terms of detection capability (the Correctness parameter) and overlap factor (both larger than 60%). However, rather high values of the commission error were obtained, especially in 2019. Through a per-land-cover analysis, it was found that this error mostly occurred in cultivated land. Excluding the latter target, the commission error was always less than 35%, the omission error less than 27%, and the Dice coefficient larger than 69%. Moreover, starting from 2021, the Lazio region has been providing AUTOBAM with accurate fire notifications derived from its SOUP (Italian acronym for Permanent Unified Operations Room). An experimental activity has been performed to verify whether these notifications can be used as a trigger for the burned area mapping algorithm to reduce the number of false positives.
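For readers unfamiliar with the three indices, a rough sketch follows (not the AUTOBAM implementation; the band pairings follow common Sentinel-2 conventions and the change threshold is a placeholder):

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio, e.g. from Sentinel-2 B8A and B12 reflectance."""
    return (nir - swir2) / (nir + swir2)

def nbr2(swir1, swir2):
    """Normalized Burn Ratio 2, e.g. from Sentinel-2 B11 and B12."""
    return (swir1 - swir2) / (swir1 + swir2)

def mirbi(swir1, swir2):
    """Mid-Infrared Burn Index (Trigg & Flasse): 10*SWIR2 - 9.8*SWIR1 + 2."""
    return 10.0 * swir2 - 9.8 * swir1 + 2.0

def burned_candidates(nir_pre, swir2_pre, nir_post, swir2_post, d_nbr_thr=0.2):
    """Flag pixels whose NBR dropped by more than a (placeholder) threshold
    between the most recent cloud-free acquisition and the current one."""
    return (nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)) > d_nbr_thr
```

The operational chain then refines such candidate masks with clustering, automatic thresholding and region growing, as described above.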

References:

[1] L. Pulvirenti et al., “An automatic processing chain for near real-time mapping of burned forest areas using Sentinel-2 data,” Remote Sens., vol. 12, p. 674, 2020.
[2] L. Pulvirenti, G. Squicciarino, E. Fiori, D. Negro, A. Gollini, and S. Puca, “Near real-time generation of a country-level burned area database for Italy from Sentinel-2 data and active fire detections,” Remote Sens. Appl. Soc. Environ., vol. 29, 2023.

AIT Contribution
Sala Biblioteca @ PoliBa
17:30
15min
Evolution of the surface waters of the Po river in the 2020-22 period - a quantitative analysis of the drought effects with Sentinel-2 images
Carlo Masetto, Niccolo' Tolio, Eleonora Cagliero, Benedetta Gori, Umberto Trivelloni, Alessandra Amoroso, Laura Magnabosco

For the Veneto Region, the year 2022 was marked by anomalies in average temperature and recorded rainfall compared to the climatic average of the last thirty years. These climatic conditions had inevitable negative effects on the environment; in particular, water resources were affected by this combination of climatic stresses, both in terms of river flow rates and groundwater flows, also due to the poor snow accumulation recorded in the Alps during the winter.
Critical conditions in terms of surface runoff were observed on various rivers and streams, particularly in the summer period. The main Italian river, the Po, was also affected by a decrease in water flow in 2022, making evident the increase in the surfaces covered by sand islands visible along the river course.
This work, carried out by the Territorial Planning Department of the Veneto Region, aimed at a quantitative analysis of the surface covered by sand islands and surface water in a sufficiently representative area of the Po basin.
For the purposes of this study, the area included between the municipalities of Occhiobello (RO) and Ferrara was analysed (border area between Veneto and Emilia Romagna regions, where Po river flows). The analysis was carried out for the month of July 2022, comparing the data obtained with those relating to previous years (2020 and 2021).
In order to identify the sand islands, multispectral Sentinel-2 satellite images were analysed, taking into consideration the visible wavelengths (bands B02, B03 and B04) and the near infrared (band B08). The area was then classified using supervised classification with the Random Forest algorithm, a methodology that yields high-precision classifications.
Considering the pixel size and the limits of supervised classification, the analysis achieved an accuracy higher than 95%. The analysis is relevant for monitoring the negative effects of drought on the Po river. In the area under examination, a steady decrease in the extent of surface water was observed, with a corresponding increase in natural sand islands. The classification results make it evident that 2022 was a year in which drought worsened the water stress of the Po river, with clear consequences on the environment in terms of water resource availability and the rise of the salt wedge near the river mouth. Moreover, the study presented here confirms the importance of satellite data and classification tools for monitoring water bodies.
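A minimal sketch of the supervised step, assuming scikit-learn and synthetic training samples (not the Region's actual workflow): pixels described by B02, B03, B04 and B08 reflectance are labelled water or sand.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic placeholder training samples: mean reflectance per class in
# (B02, B03, B04, B08); real training data would come from labelled polygons.
rng = np.random.default_rng(42)
water = rng.normal([0.05, 0.06, 0.05, 0.02], 0.01, size=(50, 4))
sand  = rng.normal([0.20, 0.25, 0.30, 0.35], 0.02, size=(50, 4))
X = np.vstack([water, sand])
y = np.array([0] * 50 + [1] * 50)     # 0 = water, 1 = sand

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict([[0.05, 0.06, 0.05, 0.02], [0.21, 0.26, 0.31, 0.36]])
```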

AIT Contribution
Sala Videoconferenza @ PoliBa
17:45
17:45
15min
“Governance of Earth Observation Data - synergies at European level. The joint experiences of Nereus and the Veneto Region”
Carlo Masetto, Umberto Trivelloni, Roya Ayazi, Margarita Chrysaki, Mirko Mazzarolo, Federico Bastarolo, Roberta Santin, Silvano De Zorzi

NEREUS (Network of European Regions Using Space Technologies) is a European association representing the interests of European regions that use space technologies whilst simultaneously highlighting the regional dimension of European space policy and programmes. It is the key mission of NEREUS, as a unique thematic network for matters of regional Space Uses, to explore the benefits of space technologies for European Regions and their citizens as well as to promote the use of space and its applications.
Veneto Region has been an active NEREUS member since 2008, playing an active role in promoting space technologies (GNSS and Earth observation) over the years.
As one of the historical members of NEREUS, Veneto Region suggested some actions to boost activities on Earth observation. Following the inspiring principle "Bringing the benefits of space uses to European regions and their citizens", the proposal was to launch in 2023 a Working Group on Earth Observation, composed of regional experts in space technologies. The main objectives are: 1) spreading the knowledge of Earth observation data and space technologies; 2) sharing experiences that can lead to the creation of mutual synergies for better data governance; 3) supporting local institutions, citizens and companies in the use of space technologies; 4) inspiring and easing positive policy responses by local institutions.
In this framework, Veneto Region recently finalized the application for the Interreg Europe project "SAT.SDI.F.A.CT.ION" (SATellite data and Spatial Data InFrAstruCTures for an evidence-based regIONal governance), in which NEREUS is the advisory partner.
The European Earth Observation System Copernicus is a vital source of knowledge for improving territorial and environmental management, the efficient use of natural resources, and the delivery of effective public policies and services to citizens. However, it is still not clear to what extent satellite data are used by local and regional administrations, and specifically how much satellite data are integrated within regional Spatial Data Infrastructures (SDI). Spatial Data Infrastructures, as defined by the INSPIRE Directive, are a framework of policies, institutional arrangements, technologies, data, and people that enables the sharing and effective use of geographic information by standardizing formats and protocols for access and interoperability. The overall scope of the project is to promote the exchange and transfer of experiences related to the use of satellite data in local and regional SDIs, leading to a better, evidence-based governance of the regional territory.
The integration of satellite data in local and regional SDIs is of strategic importance, with great potential to support government and decision making at the sub-national level by providing unrivalled information in different fields of application. However, existing satellite data and services are not being fully exploited, and their integration in added-value services for regional and local governments is far from optimal. The SATSDIFACTION project works exactly on this issue, promoting the exchange and transfer of experiences related to the use of satellite data in local and regional Spatial Data Infrastructures as a means to improve the performance of regional policy instruments, eventually leading to a better, evidence-based governance of the regional territory.

AIT Contribution
Sala Videoconferenza @ PoliBa
18:15
18:15
60min
Poster session and Welcome Party
Sala Videoconferenza @ PoliBa
18:15
60min
Poster session and Welcome Party
Sala Biblioteca @ PoliBa
08:30
08:30
30min
Registration / registrazione partecipanti
Sala Videoconferenza @ PoliBa
08:30
30min
Registration / registrazione partecipanti
Sala Biblioteca @ PoliBa
09:00
09:00
15min
A GIS-based model to map gravity centers of agricultural end-of-life plastics for a sustainable waste management
Giuliano Vox, Ali Hachem, Ileana Blanco, Giacomo Scarascia Mugnozza

Agricultural plastics applications are essential both for increasing the quality and quantity of production and for improving the efficiency of agricultural systems. However, they generate significant amounts of waste that pose a serious threat to the environment and the agro-ecosystem. Effective waste management strategies are required to address this issue, and these can be supported by several means, such as the development of a comprehensive and accurate map of agricultural plastic waste (APW) gravity centers. This paper presents a GIS-based model for mapping APW gravity centers in the province of Bari, Italy.
The study first highlights the importance of agricultural plastics in promoting the productivity of the agricultural system and, in parallel, the negative impact that APW has on the environment and the agro-ecosystem. The implementation of plastic waste production indices, which take into consideration the properties of the plastic applications used in the production system, is then discussed. These indices provide a quantitative assessment of the amount and type of APW generated in different areas, enabling an effective mapping of the APW distribution.
To map APW gravity centers, land use maps and APW indices are used to identify the areas with the highest APW generation. Gravity centers for the collection, selection and first treatment of end-of-life plastics to be sent to recycling plants are determined based on the amount of APW generated, with areas producing higher volumes of waste resulting in a closer gravity center for waste collection and management. The model is implemented in the province of Bari, Italy, which has a large agricultural sector and significant APW generation.
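The weighting idea can be sketched as a waste-weighted centroid (coordinates and APW amounts below are hypothetical, not the Bari data):

```python
# Each agricultural zone contributes its centroid weighted by its APW amount
# (tonnes of plastic waste per year); the gravity center is the weighted mean.
zones = [
    # (x_km, y_km, apw_tonnes) -- placeholder coordinates and weights
    (10.0, 5.0, 120.0),   # e.g. a greenhouse district
    (14.0, 8.0, 60.0),    # e.g. vineyards under plastic nets
    (20.0, 3.0, 20.0),    # e.g. open-field crops
]

total = sum(w for _, _, w in zones)
gx = sum(x * w for x, _, w in zones) / total   # gravity center, x coordinate
gy = sum(y * w for _, y, w in zones) / total   # gravity center, y coordinate
```

Zones producing more waste pull the gravity center towards themselves, which is the "closer gravity center for higher volumes" behaviour described above.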
The results of the study show that the GIS-based model is effective in identifying areas with the highest APW generation, allowing for more efficient and effective waste management strategies. The study also shows that the highest concentrations of APW gravity centers are in areas with intensive agriculture, such as greenhouse farming and vineyards covered with plastic films and nets. These areas generate large volumes of waste and require efficient waste management strategies.
Moreover, the study highlights the need for a comprehensive mapping of APW gravity centers to develop effective waste management strategies. The model can also be expanded to other regions with a large agricultural sector and significant APW generation.

AIT Contribution
Sala Biblioteca @ PoliBa
09:00
15min
Analysis and prevention of historical-cultural heritage instability using satellite radar interferometry
Silvia Bianchini, Anna Palamidessi, William Frodello, Veronica Tofani

Italy, with 58 properties inscribed on the World Heritage List, is the country with the highest number of UNESCO cultural heritage sites in the world. At the same time, Italy faces significant natural hazards from a geological and soil-protection perspective. In particular, archaeological sites and works of art are susceptible to geo-hydrological instability and deterioration. In order to understand instability and degradation processes, it is essential to consider the extent and state of cultural heritage in the context of its geology, geomorphology, and natural and urban environments. This is fundamental for deciding the priorities of risk mitigation practices and protection/conservation strategies.
The monitoring of Italian cultural heritage is a fundamental activity for its long-term protection and conservation. Radar interferometric remote sensing techniques are non-invasive, contactless and advanced methods capable of determining displacements and deformations affecting structures and natural slopes with millimeter accuracy. They represent powerful tools that can be profitably used for monitoring cultural heritage, architectural structures, and archaeological sites without causing any damage, while exploiting long temporal series made available by the existing satellite constellations.
In the framework of the Extraordinary Plan for the Monitoring and Conservation of Cultural Property (Piano Straordinario di Monitoraggio e Conservazione dei Beni Culturali Immobili), an analysis of several Italian historical-cultural sites (Paestum - SA, Volterra - PI, Pienza - SI, Civita di Bagnoregio - VT, Orvieto - TR, Populonia - LI) is being conducted by the UNESCO Chair "Prevention and sustainable management of hydrogeological risk" at the University of Florence. The analysed dataset includes old and new satellite sensors: from ERS-ENVISAT time series to COSMO-SkyMed (comprising data from 2011 to 2014 and a new acquisition from 2015 to 2023) and Sentinel data available from the European Ground Motion Service. The data were processed using Persistent Scatterer Interferometry (PSI) techniques and combined with geothematic data in a GIS environment; validation was carried out for each site by means of field surveys. The outcomes of this work will provide useful suggestions for damage prevention in the planning of protection and conservation measures for cultural assets.

In support of these activities, an investigation model is proposed that incorporates non-invasive strategies for preventing and monitoring instability and natural hazards. In particular, the condition of cultural heritage assets affected by hydrogeological risk is evaluated through PSI-based methodologies already tested in the scientific literature: the potential instability of the artefacts is assessed at the local scale from the remotely sensed deformation rates, integrated with background geological data, construction characteristics and field evidence.
This project aims at developing a sustainable system for analyzing and monitoring the architectural and cultural heritage integrity and stability, incorporating a high level of scientific and technological knowledge, in order to protect cultural heritage threatened by natural hazards, as well as to give a realistic and current picture of hydrogeological risks and vulnerabilities.

AIT Contribution
Sala Videoconferenza @ PoliBa
09:15
09:15
15min
Implementing a GIS-based digital atlas with different datasets for estimating the agricultural plastics environmental footprint
Pietro Picuno, Dina Statuto, Giuseppe Cillis

The agricultural sector has benefitted over the last century from several factors that have led to an exponential increase in its productive efficiency. The increasing use of new materials, such as plastics, has been one of the most important of these, as they have allowed production to be increased in a simpler and more economical way. Various polymer types are used in different phases of the agricultural production cycle, but when their use is incorrectly managed, serious environmental impacts can follow. Plastic pollution, largely perceived by the public as a major risk factor for sea life and its preservation, has an even greater negative impact on terrestrial ecosystems. Indeed, quantitative data about plastic contamination of agricultural soils are emerging at an alarming rate. One of the main contributors to this pollution is the mismanagement of Agricultural Plastic Waste (APW), i.e., the residues of plastic material used to improve the productivity of agricultural crops (greenhouse covers, mulching films, irrigation pipes, etc.). Indeed, mismanagement of agricultural plastics during and after their working life may pollute agricultural soil and aquifers by releasing macro-, micro-, and nano-plastics, which can also enter the human food chain.
In this study, a simplified applied methodology to quantify and manage agricultural plastics is proposed. A deductive approach is used to quantify, through different remotely sensed datasets (orthophotos and satellite images), the areas covered by plastics used for crop protection. Additionally, through an inductive approach based on statistical data from the agricultural census of the Italian provinces, an agricultural plastic coefficient (APC) has been proposed, implemented and spatialized in a GIS environment, to produce a database of APW for each type of crop.
The study area chosen for the analysis presented here is a part of the Ionian coast of Southern Italy, which includes the most important municipalities of the Basilicata Region in terms of fruit and vegetable production. The geographical techniques and observation methodologies, developed in an open-source GIS environment, enabled the accurate location of about 2000 hectares of agricultural land covered by plastics, as well as the identification of the areas most susceptible to the accumulation of plastic waste. The proposed methodology can be exported to other countries, since it provides valuable support that could produce, in integration with other tools, a database of agricultural plastics use, which may be a starting point for planning strategies and actions targeted at reducing the plastic footprint of agriculture. Thanks to their simplicity of use and reliability, the techniques and the model implemented can be applied by different local authorities to create an atlas of agricultural plastics for continuous monitoring, thereby enabling the upscaling of future social and ecological impact assessments, the identification of new policy impacts, market searches, and more.

AIT Contribution
Sala Biblioteca @ PoliBa
09:15
15min
Structural monitoring of Cultural Heritage assets at urban and local scale through MT-InSAR
Amedeo Caprino, Francesca da Porto

In recent years, Multi-Temporal Interferometric Synthetic Aperture Radar (MT-InSAR) has become an increasingly popular technique for Structural Health Monitoring (SHM) purposes. The technique allows for the measurement of ground deformation with high accuracy and spatial resolution by utilizing Synthetic Aperture Radar (SAR) imagery acquired over multiple time periods. MT-InSAR has proven particularly effective in urban contexts due to the high reflectivity of structures, which makes them visible in SAR imagery. Several interferometric algorithms specifically tailored to the urban environment have been developed, making it possible to extract detailed information about buildings and infrastructure. Moreover, the growing availability of high-resolution SAR satellite constellations, such as the Italian COSMO-SkyMed, has also contributed to the increased use of MT-InSAR for SHM purposes. These constellations provide high-quality SAR imagery in which a high density of Measurement Points (MPs) can be detected, allowing detailed information on individual structures to be recorded. With MT-InSAR, it is possible to collect information about deformations both at the global scale, detecting the most critical areas within the urban context, and at the local scale, focusing on individual structures such as buildings or bridges. Despite its many advantages, MT-InSAR has some drawbacks that must be taken into consideration. The technique requires complex post-processing and expert interpretation of the results to avoid data misinterpretation, and technical difficulties such as geocoding errors and noise in the time series can be encountered during the analysis. Furthermore, the technique is sensitive to changes in the environment, such as changes in vegetation cover or weather conditions, which can affect the quality of the SAR imagery. Overall, despite its limitations, MT-InSAR is a cost-effective and highly efficient tool for monitoring structures.
It offers significant benefits in identifying potential problems and detecting deformations, providing valuable insights into the stability and health of structures. With the increasing availability of high-quality SAR imagery, MT-InSAR is predicted to have even more widespread usage for SHM purposes in the future.
In this work, the MT-InSAR technique is applied in the urban center of Verona (northern Italy), a city rich in Cultural Heritage assets. The study examined images captured by the COSMO-SkyMed constellation in Stripmap mode, for both ascending and descending orbits, over the period 2011-2022, to detect deformations at both the global and local scales. Initially, spatial interpolation algorithms were used to gauge the overall deformations at the urban level, identifying the most critical areas. The results show that the area of Verona presents an overall stability: the deformation velocities of the historic center lie within the so-called stability range (between -1.5 and +1.5 mm/year), whereas the most critical areas are found in the northern part of the city, along the northern portion of the town beltway. Attention was then directed towards some of the main cultural assets of the city, namely the Roman Arena, the Lamberti Tower, and the Roman Theater. For each asset several MPs were detected, distributed along the structures' height. The information contained in each MP, in terms of displacement velocity and displacement time series, allows an understanding of the structural stability and of the evolution of the deformations during the monitoring period. As the urban analysis suggested, the investigated structures appear to be quite stable and no evident criticalities could be detected. However, despite the low magnitude of the deformations measured in the city of Verona, this research demonstrates the potential of MT-InSAR in the field of structural monitoring of Cultural Heritage.
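The per-MP assessment can be sketched as follows (illustrative only: the ±1.5 mm/year stability range is the one cited in the study, while the least-squares velocity estimate is a standard simplification of PSI time-series processing):

```python
import numpy as np

def deformation_velocity(dates_yr, displ_mm):
    """Least-squares linear velocity (mm/year) of an MP displacement series."""
    A = np.vstack([dates_yr, np.ones_like(dates_yr)]).T
    v, _ = np.linalg.lstsq(A, displ_mm, rcond=None)[0]   # slope, intercept
    return float(v)

def is_stable(v_mm_yr, thr=1.5):
    """Stability range used in the study: -1.5 to +1.5 mm/year."""
    return abs(v_mm_yr) <= thr
```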

AIT Contribution
Sala Videoconferenza @ PoliBa
09:30
09:30
15min
Integrating geographical data with surveys conducted with UAVs for planning areas of high environmental value
Pietro Picuno, Dina Statuto, Maurizio Minchilli

The setting up of a general framework for the environmental and landscape planning of a protected area requires a detailed basic survey of the area and its vegetation, accompanied by monitoring of the latter, so that a specific maintenance plan can be implemented accordingly. With reference to an area of high environmental, landscape and archaeological value, the 'Pulo di Molfetta' (Municipality of Molfetta, Southern Italy), georeferenced floristic surveys have been carried out, with mapping and monitoring of vegetation growth. In this way, it has been possible to draw up detailed management measures for the vegetation, as well as to plan suitable ecological engineering interventions aimed at creating the most appropriate conditions for the recovery, use and sustainable management of this study area, including for tourism purposes. These activities have been conducted through the construction of a basic model, implemented in a Geographical Information System (GIS) and structured on free and open-source geographic data, integrated with a geo-localized 3D survey of the geomorphology, architectural structures and flora-vegetation habitat. The survey, georeferencing and 3D model formation operations have been conducted by means of:
1) a photogrammetric survey at ultra-low height (variable according to the orography), carried out with Unmanned Aerial Vehicles (UAVs), to obtain a 3D digital model with a readable resolution of at least 2 cm/pixel;
2) coverage with a block of frames with nadiral and sub-horizontal orientation, for readability optimized for the analysis of both the geomorphology and the existing medium and tall vegetation;
3) creation of a framing and support network, materialized with high-contrast photographic targets, measured with RTK GNSS satellite positioning (accuracy ≤ 3 cm) and georeferenced in the RDN2008 reference coordinate system, as per Italian regulations;
4) restitution of a 3D digital model, obtained with Structure from Motion (SfM) technologies, consisting of point clouds, triangular meshes and photographic textures at 1-2 cm resolution;
5) formation of a very-high-resolution ortho-photomosaic, with GSD (Ground Sampling Distance) ≤ 15 mm, and of an adequate number of radial sections, each with two views orthogonal to the section plane;
6) georeferenced identification of the individual floristic-vegetational elements and support for the construction of a database containing the agreed attributes, defined and aimed at planning the vegetation layers present.
The metric analyses have been conducted with commercial instruments (UAV systems, GNSS receivers and photogrammetric processing software), in order to test a widespread, low-cost operational chain for the dimensional and qualitative survey of small and medium extents characterized by great biodiversity and significant altimetric variation, such as a karst sinkhole.
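The resolution requirements in points 1) and 5) follow from the standard photogrammetric GSD relation; the camera parameters in the example below are hypothetical, chosen only to illustrate how the target resolution constrains flight height:

```python
def gsd_cm(height_m, focal_mm, pixel_um):
    """Ground Sampling Distance (cm/pixel) from flight height above ground,
    lens focal length and sensor pixel pitch: GSD = pixel * height / focal."""
    return (pixel_um * 1e-6 * height_m) / (focal_mm * 1e-3) * 100.0

# Hypothetical camera: 2.4 um pixel pitch, 8.8 mm lens.
g = gsd_cm(height_m=50.0, focal_mm=8.8, pixel_um=2.4)   # ~1.4 cm/pixel at 50 m
```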
In conclusion, the results thus obtained have allowed for the inclusion of the geo-localized 3D model in a GIS base for the knowledge of the flora-vegetation habitat, thanks to which it will be possible to provide support for the decision-making of planning choices for the territory, landscape and environment of the study area, as well as its close surroundings, so as to safeguard its biodiversity and ecosystem relations.

AIT Contribution
Sala Biblioteca @ PoliBa
09:30
15min
The experience of the Archaeological Park of Colosseum in the use of COSMO-SkyMed satellite data
Maria Virelli, Deodato Tapete, Irma Della Giovampaola

All archaeological sites are affected by changes due to natural decay related to ageing. When decay compromises the functionality of the cultural property, it becomes pathological and results in degradation. Monitoring, carried out with innovative technologies, is a tool preliminary to an effective planned maintenance activity and therefore to preventive conservation. In this respect, the Parco archeologico del Colosseo took the strategic direction of a gradual transition from a monitoring plan to a constant and planned conservation activity.
The monitoring project of the Parco archeologico del Colosseo (started in a systematic way only in 2018) was inspired by the desire to build a sustainable system of protection and conservation, thus also enabling a proper tourism valorisation. With these objectives in mind, the Parco archeologico del Colosseo has developed a static and dynamic monitoring project consisting of five fundamental activities:
1. a database of all the historical data of the monuments, together with the existing graphic and photographic documentation (namely, a digital documentation archive);
2. visual monitoring carried out by teams of technicians dedicated to the inspection and control of the monuments, also thanks to a dedicated app that allows data to be sent to the central system;
3. satellite monitoring (historical analysis of the satellite data), fed directly into the system and analysed in order to monitor possible ground deformation;
4. in situ monitoring from traditional geotechnical instruments;
5. experimental activities.
Basically, the project involves the creation of a multi-parameter system for the permanent control of the entire archaeological area, with associated risk-level indicators, based on the combined use of innovative technologies.
In this way, the project will make it possible to plan, in an effective and timely manner, the necessary interventions for both ordinary and extraordinary maintenance, thus providing not only an operational tool but also a management system for the Park, with a better use of its financial resources.
As part of this monitoring project, the Parco del Colosseo requested the presence of experts from the Italian Space Agency (ASI).
The instrumental diagnostic tools are accompanied by satellite monitoring, already tested in the past for a short period, to obtain information on displacements of the ground, structures, and buildings. One of ASI's contributions to the monitoring project is to provide the images acquired by the COSMO-SkyMed satellites. Synthetic Aperture Radar (SAR) satellite data are increasingly used for the study and monitoring of cultural heritage, through multi-temporal analysis based on change detection techniques and differential interferometry (DInSAR). The COSMO-SkyMed constellation offers ideal features for routine monitoring of cultural heritage and for observation in emergency situations, which have been the subject of several demonstration, (pre-)operational and scientific research projects over the last sixteen years since the mission was declared fully operational. COSMO-SkyMed is the ASI SAR constellation, the only one in the world made up of 5 operational satellites (3 first-generation and 2 second-generation) in the X band, capable of providing very high spatial resolution images (up to 1 m for civilian use) with very high acquisition frequency (revisit times down to 12 hours), in any meteorological and light conditions. The use of satellite SAR interferometry applied to COSMO-SkyMed images is combined with the advantage of being able to draw on the archives of radar images, which allow the evolution of more than twenty years of deformation processes to be reconstructed in an extensive manner. For these reasons, the Parco also considered fundamental the satellite historical analysis of the archaeological area, carried out from 2010 to 2019.
The satellite images, provided by ASI, were processed on commission by e-GEOS with interferometric technique. The data thus processed fed the web-GIS platform of the Parco’s monitoring project. (Della Giovampaola, 2021)

AIT Contribution
Sala Videoconferenza @ PoliBa
09:45
09:45
15min
A GIS-BASED SPATIAL ANALYSIS FOR AGRICULTURAL PRUNING WASTE MANAGEMENT IN THE CIRCULAR ECONOMY PERSPECTIVE
Fabiana Convertino, Evelia Schettini, Annachiara Dell'Acqua

Agricultural activities are responsible for huge amounts of solid waste, of which agro-residues make up a large share. Their utilization as a source of biomass is a great opportunity for the spread of the circular economy model. Among agro-residues, those coming from olive groves, vineyards and fruit plantations can be particularly relevant. Biomass residues from agricultural pruning represent a typical case of agro-residues produced yearly and hardly ever used as a resource for the production of energy, biochemicals or other products. Mismanagement, and especially the burning of these agricultural wastes, is very common; it causes serious human and environmental health problems and threatens food and energy security.
For a more sustainable and circular approach in agriculture, agro-residues such as those from pruning should not be considered waste, but a precious resource. To pursue this aim, the technical and logistic problems that farmers experience must be overcome. A proper management system for biomass from pruning residues is mandatory.
This study aims to contribute to the development of a well-designed collection system for agricultural biomass from pruning. The approach is based on a territorial analysis using GIS software.
At first, the study investigates the types, production processes and possible optimal sustainable uses of biomass residues, highlighting the main issues of the most widespread practices. Then, the objective is to map the production of agricultural pruning residues across the territory. The attention is focused on an area particularly suited to agriculture in the Apulia Region (Italy). By using pruning indices for each crop and the land use map, the study quantifies and localizes the pruning residues. Based on this, the best positions of the collection centres are defined. The obtained maps can be easily used and updated. The study points out the power of GIS tools for this purpose. The results of this study represent a first important step towards the improvement of the agro-residue management system and can help policymakers and stakeholders to promote more sustainable actions.
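The residue quantification step (pruning index times mapped crop area) can be sketched as follows; crop classes and index values are illustrative placeholders, not those used in the study:

```python
# Sketch of pruning-residue quantification per land-use class.
# Pruning indices (t of residue per hectare per year) are illustrative
# placeholders, not the values adopted in the study.
PRUNING_INDEX_T_PER_HA = {"olive grove": 1.7, "vineyard": 2.0, "fruit orchard": 2.4}

def residues_by_crop(land_use_areas_ha: dict) -> dict:
    """Return estimated annual pruning residues (t/year) per crop class."""
    return {
        crop: area * PRUNING_INDEX_T_PER_HA[crop]
        for crop, area in land_use_areas_ha.items()
        if crop in PRUNING_INDEX_T_PER_HA
    }

# Example: areas extracted from a land-use map for one municipality.
areas = {"olive grove": 1200.0, "vineyard": 450.0, "fruit orchard": 300.0}
print(residues_by_crop(areas))
```

In a GIS workflow, the per-crop totals would be computed zonally (e.g. per municipality or grid cell) and then used to site the collection centres.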

AIT Contribution
Sala Biblioteca @ PoliBa
09:45
15min
Hyperspectral PRISMA and Sentinel-2 Preliminary Assessment Comparison in Archaeological Sites
Sara Zollini, Francesco Immordino, Annachiara Dell'Acqua, Maria Alicandro, Elena Candigliota, Raimondo Quaresima

Over the last decades, remote sensing techniques have contributed to supporting cultural heritage studies and management, including archaeological sites as well as their territorial context and geographical surroundings. This paper aims to investigate the capabilities and limitations of the new hyperspectral sensor PRISMA (Precursore IperSpettrale della Missione Applicativa) of the Italian Space Agency (ASI), still little applied to archaeological studies. The PRISMA sensor was tested on Italian terrestrial (Alba Fucens, Massa D'Albe, L'Aquila) and marine (Sinuessa, Mondragone, Caserta) archaeological sites. A comparison between PRISMA hyperspectral imagery and the well-known Sentinel-2 Multi-Spectral Instrument (MSI) was performed in order to better understand the features and outputs useful to investigate the aforementioned areas. At first, bad-band analysis and noise removal were performed in order to delete the numerically corrupted bands. Principal component analysis (PCA) was carried out to highlight details invisible in the original image; then, spectral signatures of representative areas were extracted and compared to Sentinel-2 data. At last, a classification analysis (ML and SAM) was performed on both PRISMA and Sentinel-2 imagery. The results showed full agreement between Sentinel-2 and PRISMA data, highlighting the capability of PRISMA to extract more spectral information and to provide better reliability in the extraction of features. These first analyses, applied in landscape archaeology studies, highlight the great spectral capabilities of the PRISMA sensor. In future studies, a great advantage can be brought by performing a reliable pansharpening in order to increase the resolution of the final images (geometric resolution from the panchromatic band and spectral resolution from the hyperspectral data), as well as by more stable multitemporal acquisition over the areas under investigation.
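The SAM classification mentioned above assigns each pixel to the reference class whose spectrum forms the smallest angle with the pixel spectrum; a minimal NumPy sketch with toy four-band spectra (illustrative values, not PRISMA or Sentinel-2 data):

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel: np.ndarray, references: dict) -> str:
    """Assign the class whose reference spectrum has the smallest angle."""
    return min(references, key=lambda c: spectral_angle(pixel, references[c]))

# Toy 4-band reflectance spectra (illustrative only).
refs = {"vegetation": np.array([0.05, 0.08, 0.06, 0.45]),
        "bare soil":  np.array([0.12, 0.18, 0.22, 0.30])}
print(sam_classify(np.array([0.06, 0.09, 0.07, 0.50]), refs))
```

Because the angle depends only on spectral shape, not magnitude, SAM is relatively insensitive to illumination differences between the compared sensors.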

AIT Contribution
Sala Videoconferenza @ PoliBa
10:00
10:00
15min
Monitoring Erbaluce and Nebbiolo vineyards by means of Sentinel-2 NDVI index maps
Enrico Borgogno-Mondino, Alberto Cugnetto, Giorgio Masoero, Peppino Sarasso

The advent of satellite technologies has made it possible to make georeferenced observations of the entire globe at periodic intervals of a few days and with high spatial resolutions.
ESA's Copernicus mission makes openly available data from the Sentinel-2 constellation, created to provide useful information for agricultural purposes thanks to appropriately calibrated multispectral images [2].
The NDVI (Normalized Difference Vegetation Index) [1] can be correlated with some biophysical or agronomic variables of the vineyard [3].
The work presents the results of a two-year study carried out in the province of Turin in the Piedmont region, involving six vineyards cultivated with different varieties (Nebbiolo, Erbaluce) and two vine training systems (pergola and espalier). The NDVI georeferenced data were provided by the EOS Crop Monitoring web platform.
The experimental design divided the vineyards into three vigor classes, defined through a pre-survey that compared the series of georeferenced NDVI images collected the previous summer.
In each vineyard, for each of the chosen vigor areas, five plants were identified and used as a ground reference to evaluate a series of vegetative-productive parameters. The total number of plants monitored was 30 for Nebbiolo and 55 for Erbaluce.
The NDVI showed significant predictability for all the studied variables.
As expected, the trend of the quantitative variables was positively related to the NDVI, while the qualitative variables were negatively related. In terms of mean percentage error, predictability was high (errors of 1% and 7% for the Erbaluce and Nebbiolo vineyards, respectively). Considering the canopy architecture, the leaf layers were accurately predicted from the NDVI (R2 0.72 and 0.55 for Erbaluce and Nebbiolo, respectively) with an error around 10%. Regarding the fruit compartment, a strong difference emerged between the systems. The shaded cluster percentage in the Nebbiolo vines was highly predictable (R2 0.57, error 6%). In Erbaluce the error was higher (36%), with a correlation index R2 of 0.42; this derives from the higher variability of the plants in the compared plots. The number of clusters was predicted with a smaller error in Nebbiolo than in Erbaluce (9% and 29%, R2 0.70 and 0.16, respectively), as was the bud fertility (8% and 15%, R2 0.83 and 0.36, respectively). In sum, the true productive traits appeared the least predictable in the Erbaluce vineyards, with a 31% error in yield (R2 0.26), compared to a less erroneous prediction (error 22%, R2 0.63) in the Nebbiolo vines. The pruning wood weight was similarly predicted from the NDVI, with 21% and 23% error and a correlation index R2 of 0.41 and 0.28 for Erbaluce and Nebbiolo, respectively.
The PCA allowed discriminating observations based on vigor attributes, consistently with the measured variables, even when all the observations for the different varietal combinations were processed simultaneously with the same multivariate model.
The study confirmed the possibility of using Sentinel-2 NDVI output to map vineyard variability also in small plots (< 1 ha), estimating the vineyard canopy density and the most important productive and oenological parameters.
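The NDVI maps underpinning the analysis are computed per pixel from the red and near-infrared reflectances (bands B4 and B8 at 10 m for Sentinel-2); a minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - RED) / (NIR + RED).
    For Sentinel-2, NIR is band B8 and RED is band B4 (both 10 m)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Toy reflectance values for three vigor classes (illustrative only).
nir = np.array([0.40, 0.35, 0.25])
red = np.array([0.05, 0.08, 0.12])
print(ndvi(nir, red))  # higher values correspond to denser canopy
```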

[1] Giovos, R., Tassopoulos, D., Kalivas, D., Lougkos, N., & Priovolou, A. (2021). Remote sensing vegetation indices in viticulture: A critical review. Agriculture, 11(5), 457.
[2] Sarvia, F., De Petris, S., Orusa, T., & Borgogno-Mondino, E. (2021). MAIA S2 versus sentinel 2: spectral issues and their effects in the precision farming context. In Computational Science and Its Applications–ICCSA 2021: 21st International Conference, Cagliari, Italy, September 13–16, 2021, Proceedings, Part VII 21 (pp. 63-77).
[3] Vélez, S., Rançon, F., Barajas, E., Brunel, G., Rubio, J. A., & Tisseyre, B. (2022). Potential of functional analysis applied to Sentinel-2 time-series to assess relevant agronomic parameters at the within-field level in viticulture. Computers and Electronics in Agriculture, 194, 106726.

AIT Contribution
Sala Biblioteca @ PoliBa
10:00
15min
Satellite technologies for Cultural Heritage: state of the art, perspectives and Italian Space Agency contribution
Maria Virelli, Deodato Tapete

In the last 20 years, satellite technologies have been increasingly used for the study, monitoring, conservation and promotion of cultural heritage, with a growing trend at both national and international levels. Recent publications critically reviewing the specialist scientific literature highlight a significant level of maturity of satellite applications in this domain (Luo et al., 2019; Tapete and Cigna, 2019a), so much so that satellite images collected by optical sensors have already become common data exploited by (geo-)archaeologists, researchers and heritage experts. At the same time, Synthetic Aperture Radar (SAR) technologies are increasingly being tested and exploited, also beyond the specialist image analyst community, thanks to multidisciplinary collaboration between different professionals (Tapete and Cigna, 2017) and facilitated SAR data access, given the increasing provision by space agencies, also in “ready to use” formats (Tapete and Cigna, 2019b). At European level, the Italian ecosystem undoubtedly represents a point of excellence, given not only the long tradition in the exploitation of innovative technologies for cultural heritage, but also the space sector investments into both Earth Observation missions, with image acquisition characteristics that well suit the user needs and requirements of this application domain, and initiatives promoting downstream applications and services development engaging small, medium and large enterprises. In continuity with the past decade, ASI continues launching and managing several initiatives for cultural heritage, in particular along the following directions:
• Undertaking scientific research and development, also through real-world user-driven use cases, e.g. demonstrating the performance achievable using national assets such as COSMOSkyMed data (Tapete and Cigna, 2019b; 2020);
• Supporting COSMO-SkyMed data exploitation in projects with Italian institutions (e.g. Ministry of Culture, Archaeological Park of Colosseum), and activities devoted to downstream applications and services development (e.g. in Pompeii, Capo Colonna) (Virelli et al., 2020);
• Promoting downstream exploitation by scientific, commercial and institutional users through the new programme “Innovation for Downstream Preparation” (I4DP), wherein safeguard of the environment, cultural heritage and national landscape is among the key application domains.
The present paper therefore will illustrate ASI’s contribution for cultural heritage, alongside the current perspectives, in light of the COSMO-SkyMed programme (upstream) and “Multi-mission and Multi-Frequency SAR” and I4DP programmes (downstream), the latter with particular focus on the initiative dedicated to scientific users (I4DP_SCIENCE) according to the roadmap defined by Tapete & Coletta (2022).

References
Luo L., Wang X., Guo H., Lasaponara R., Zong X., Masini N., Wang G., Shi P., Khatteli H., Chen F. et al. (2019) Airborne and spaceborne remote sensing for archaeological and cultural heritage applications: A review of the century (1907–2017). Remote Sens. Environ., 232, 111280. doi: 10.1016/j.rse.2019.111280
Tapete D., Cigna F. (2017) Trends and perspectives of space-borne SAR remote sensing for archaeological landscape and cultural heritage applications. J. Archaeol. Sci. Reports, 14, 716–726. doi: 10.1016/j.jasrep.2016.07.017
Tapete D., Cigna F. (2019a) Detection of Archaeological Looting from Space: Methods, Achievements and Challenges. Remote Sens., 11, 2389. doi: 10.3390/rs11202389
Tapete D., Cigna F. (2019b) COSMO-SkyMed SAR for detection and monitoring of archaeological and cultural heritage sites. Remote Sens., 11, 1326. doi: 10.3390/rs11111326
Tapete D., Cigna F. (2020) Poorly known 2018 floods in Bosra UNESCO site and Sergiopolis in Syria unveiled from space using Sentinel-1/2 and COSMO-SkyMed. Sci. Rep., 10, 12307. doi: 10.1038/s41598-020-69181-x
Tapete D., Coletta A. (2022) ASI’s roadmap towards scientific downstream applications of satellite data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5643. doi: 10.5194/egusphere-egu22-5643
Virelli M. et al. (2020) COSMO-SkyMed: uno strumento satellitare per il monitoraggio dei beni culturali. In: Monitoraggio e Manutenzione delle Aree Archeologiche. Cambiamenti climatici, dissesto idrogeologico, degrado chimico-ambientale / Atti del Convegno Internazionale di Studi, Roma, Curia Iulia, 20-21 Marzo 2019 / Alfonsina Russo e Irma Della Giovampaola (a cura di) - «L’ERMA» di BRETSCHNEIDER, 2020 - (Collana Bibliotheca Archaeologica, 65) 278 p.; ill., pp. 103-112.

AIT Contribution
Sala Videoconferenza @ PoliBa
10:45
10:45
30min
Coffee break
Sala Videoconferenza @ PoliBa
10:45
30min
Coffee break
Sala Biblioteca @ PoliBa
11:15
11:15
20min
IRIDE session: Overview on the program and on the overall System: the Constellations, the Downstream Segment, the Service Segment

ESA Plenary Session IRIDE the Italian Earth Observation System funded by PNRR

AIT Contribution
Sala Videoconferenza @ PoliBa
11:35
11:35
20min
IRIDE session: Introduction to IRIDE Precursor Service Portfolio within the Service Segment implementation workplan

ESA Plenary Session IRIDE the Italian Earth Observation System funded by PNRR

AIT Contribution
Sala Videoconferenza @ PoliBa
11:55
11:55
20min
IRIDE session: The IRIDE Precursor Service Portfolio operational at the end of 2023

ESA Plenary Session IRIDE the Italian Earth Observation System funded by PNRR

AIT Contribution
Sala Videoconferenza @ PoliBa
12:15
12:15
45min
IRIDE session: Institutional User requirements for the development of EO National Systems

ESA Plenary Session IRIDE the Italian Earth Observation System funded by PNRR

AIT Contribution
Sala Videoconferenza @ PoliBa
13:00
13:00
90min
Lunch / Pranzo
Sala Videoconferenza @ PoliBa
13:00
90min
Lunch / Pranzo
Sala Biblioteca @ PoliBa
14:30
14:30
15min
Analysis of DInSAR Displacement time series for monitoring slope instability
Davide Oscar Nitti, Fabio Bovenga, Raffaele Nutricato, Alberto Refice, Ilenia Argentiero, Guido Pasquariello, Giuseppe Spilotro

Multi-temporal SAR interferometry (MTInSAR), by providing both mean displacement maps and displacement time series over coherent objects on the Earth’s surface, allows analysing wide areas, identifying ground displacements, and studying the phenomenon evolution on long time scales. This technique has also been proven to be very useful for detecting and monitoring instabilities affecting both terrain slopes and man-made objects. In this context, an automatic and reliable characterization of MTInSAR displacement trends is of particular relevance, being pivotal for the detection of warning signals related to the pre-failure of natural and artificial structures. Warning signals are typically characterised by high rates and non-linear kinematics. The Sentinel-1 (S1) C-band mission from the European Space Agency (ESA), as well as the high-resolution X-band COSMO-SkyMed (CSK) constellation from the Italian Space Agency, shorten revisit times to a few days, thus being very promising for detecting non-linear displacement trends related to warning signals. However, a detailed analysis of MTInSAR displacement products looking for specific trends is often hindered by the large number of coherent targets (up to millions) to be inspected by expert users to recognize different signal components and also possible artifacts, such as those related to phase unwrapping errors.

This work concerns the development of methods able to fully exploit the content of MTInSAR products, by automatically identifying relevant changes in displacement time series and by classifying the targets on the ground according to their kinematic regime. We introduced a new statistical test based on the Fisher distribution, with the aim of evaluating the reliability of a parametric displacement model fit with a given statistical confidence. We also proposed a new set of rules based on the statistical characterization of displacement time series, which allows different polynomial approximations of MTInSAR time series to be ranked. The method was applied to model warning signals. Moreover, in order to measure the degree of regularity of a given time series, an innovative index was introduced based on fuzzy entropy, which basically evaluates the gain in information obtained by comparing signal segments of different lengths. This fuzzy entropy index, without postulating any a priori model, allows highlighting time series which show interesting trends, including strong non-linearities, jumps related to phase unwrapping errors, and the so-called partially coherent scatterers. These procedures were used for analysing MTInSAR products derived by processing both S1 and CSK datasets acquired over the Southern Italian Apennines (Basilicata region), in an area where several landslides occurred in the recent past. Both approaches were very effective in supporting the analysis of ground displacements provided by MTInSAR, since they helped focus on a smaller set of coherent targets, identifying areas or structures on the ground which deserved further detailed geotechnical investigation. Moreover, the joint exploitation of MTInSAR datasets acquired at different wavelengths, resolutions, and revisit times provided valuable insights, with CSK more effective over man-made structures, and S1 over outcrops.

Specifically, the work presents an example of slope pre-failure monitoring on the Pomarico landslide, an example of slope post-failure monitoring on the Montescaglioso landslide, and a few examples of structures (such as buildings and roads) affected by instability related to different causes. Our analysis performed on CSK MTInSAR products over Pomarico was able to capture the building deformations preceding the landslide and the collapse. This made it possible to understand the evolution of the phenomenon, highlighting a change in velocities that occurred two years before the collapse. This variation probably influenced the dynamics of the landslide, leading to the collapse of an area considered to be at a medium-risk level by the regional landslide risk map. Results from the analysis performed on S1 MTInSAR products were instead useful to identify post-failure signals within the Montescaglioso landslide body. The selected trends confirm the stability of the landslide area, with some local displacements due to restoration works. In this case, the value of the MTInSAR displacement time series analysis emerges in the assessment phase of post-landslide stability, making it a useful support tool in the planning of safety measures in landslide areas.
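The idea of ranking polynomial approximations of a displacement time series by statistical confidence can be illustrated with a nested-model F statistic on a synthetic series; this is a sketch of the general approach, not the authors' exact formulation:

```python
import numpy as np

def f_statistic_nested(t, y, deg_low=1, deg_high=2):
    """F statistic comparing nested polynomial fits of a displacement series.
    Large values favour the higher-degree (non-linear) model.
    Illustrative sketch, not the authors' exact test."""
    rss = []
    for deg in (deg_low, deg_high):
        coef = np.polyfit(t, y, deg)
        rss.append(float(np.sum((y - np.polyval(coef, t)) ** 2)))
    extra = deg_high - deg_low          # extra parameters in the larger model
    dof = len(y) - (deg_high + 1)       # residual degrees of freedom
    return ((rss[0] - rss[1]) / extra) / (rss[1] / dof)

# Accelerating series (non-linear kinematics): the quadratic term makes F
# far exceed the ~4.0 critical value for (1, ~60) dof at 95% confidence.
t = np.linspace(0.0, 5.0, 60)
y = 2.0 * t + 0.8 * t**2 + np.random.default_rng(0).normal(0.0, 0.2, t.size)
print(f_statistic_nested(t, y) > 4.0)
```

Applied target by target, such a test flags time series whose non-linear component is statistically significant, i.e. candidates for warning signals.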

Acknowledgments - This work was supported in part by the Italian Ministry of Education, University and Research, D.D. 2261 del 6.9.2018, Programma Operativo Nazionale Ricerca e Innovazione (PON R&I) 2014–2020 under Project OT4CLIMA; and in part by ASI under the Project “CRIOSAR: Applicazioni SAR multifrequenza alla criosfera”, grant agreement N. 2021-12-U.0.

AIT Contribution
Sala Videoconferenza @ PoliBa
14:30
120min
Orthorectifying satellite images with open software: OTB in QGIS
Valerio Baiocchi

This workshop will illustrate how to orthorectify images in the QGIS environment using the OTB libraries. High-resolution satellite images must undergo a geometric orthorectification process before they can be used for metric purposes. Indeed, to use them correctly and compare them with previous surveys and maps, they must be processed geometrically in order to remove the distortions introduced by the acquisition process. Note that the images delivered by satellite operators are not properly orthorectified; at most, they have undergone a first orientation step. Orthorectification is not a simple georeferencing, because the process must take into account the three-dimensional acquisition geometry of the sensor. For this reason, orthorectification has usually been performed within specific commercial software, with additional costs and time on top of the image acquisition. This operation, called orientation, can be carried out using various mathematical models, such as rigorous models, those based on rational polynomial functions (RPF), and those based on rational polynomial coefficients, also referred to by some authors as rapid positioning coefficients (RPC). The procedure provided by the OTB library in QGIS has some limitations in its original form, including, for example, the impossibility of assigning heights to the ground control points, which is a major limitation for a correct orthorectification. Moreover, the interfaces are not always user friendly. During the workshop, however, some procedures will be shown to limit the effect of these shortcomings, allowing full exploitation of the optical-geometric characteristics of high and very high geometric resolution images such as QuickBird. After a brief introduction, the procedure will be presented step by step, so that participants can reproduce it autonomously.

GFOSS.it Contributions
Aula 1 @ UniBa
14:45
14:45
15min
Assessment of the use of SAR satellite images for detection and mapping of post-earthquake damages, for purposes of emergency response management
Maria Virelli, Valentina Nocente, Federico Lombardo, Stefano Frittelli

The increasing availability of synthetic aperture radar (SAR) satellite imagery has opened up new opportunities for operational support to predictive maintenance and emergency response. The first step in any emergency response is to assess the extent and the impact of the damage caused by the disaster. First responders need to recognize and collect useful information to mount their rescue operation effectively and quickly. There is indeed a strong link between timely rescue operations and the survival rate of victims of natural disasters.
Therefore, it is extremely important to ensure effective deployment of rescue teams as soon as possible by means of the optimization of resources, accurate information on how to access and to settle in the affected areas, and the definition of operational priorities. To further optimize the activity on the field, it is possible to use the potential of SAR satellite analysis.
Today, several satellite SAR missions are available, characterized by different technical features in terms of wavelengths, and temporal and geometric resolutions.
The COSMO-SkyMed constellation initially consisted of four identical satellites, each equipped with a high-resolution microwave SAR operating in the X-band and positioned in a sun-synchronous orbit ~620 km above the Earth's surface. Subsequently, the four First Generation satellites were joined by two further Second Generation COSMO-SkyMed satellites, also based on identical satellites equipped with X-band SAR payloads and positioned on the same orbital plane as the First Generation satellites. Currently five COSMO-SkyMed satellites are operational, three of the first generation and two of the second generation; two more will be launched in the coming years.
In 2018, the Italian Space Agency (ASI) and the Italian National Fire and Rescue Service (CNVVF) signed an agreement to approve the collaboration between the two State Administrations. The aims of the Agreement are linked to the use of technologies that use satellite data to support urgent technical rescue, a fundamental mission of firefighters.
Under the agreement, in the event of medium- and large-scale emergencies, ASI makes radar satellite products available to the CNVVF. These data are intended to facilitate an initial assessment of the affected area within a few hours of the event, with the delimitation of the most critical zones, in order to optimize the operational response. From the COSMO-SkyMed products made available by ASI, the cartographic office of the National Corps (the TAS Central Service) develops products to support the territorial VVF offices in the planning and monitoring of interventions.
With the aim of investigating the performance of SAR images characterized by different geometric resolutions for the detection and mapping of post-earthquake damage, three SAR image datasets (Sentinel-1, COSMO-SkyMed Spotlight and COSMO-SkyMed StripMap) available over Norcia (Central Italy) were analyzed, in the areas that were severely affected by the strong 2016 seismic sequence. We compared pairs of images with equivalent characteristics collected before and after the principal seismic event of October 30, 2016 (at 06:00:40 UTC). The results were compared with each other and then measured against the results of the post-earthquake field surveys for damage assessment carried out by the CNVVF. Thanks to the opportunity of having COSMO-SkyMed Spotlight images acquired before the event, we determined that the nominal 1x1 m geometric resolution can provide very detailed damage mapping of a single building, while the COSMO-SkyMed StripMap HIMAGE images at 3x3 m resolution give relatively good detection of damaged buildings. As expected, given the different spatial resolution of the Interferometric Wide Swath mode, the Sentinel-1 images did not allow acquiring information on individual buildings, but simply provided approximate identification of the most severely damaged sectors. The main results of the performance investigation carried out in this work can be exploited considering the exponential growth of the satellite market in terms of revisit time and image resolution.
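A common first step for this kind of SAR damage detection is an amplitude log-ratio between co-registered pre- and post-event images; a minimal sketch (amplitude values and threshold are illustrative, not those of the study):

```python
import numpy as np

def log_ratio_change(pre: np.ndarray, post: np.ndarray, threshold: float = 1.0):
    """Flag pixels whose backscatter amplitude changed strongly between
    pre- and post-event SAR acquisitions. The threshold is illustrative
    and would be tuned against field survey data in practice."""
    eps = 1e-6  # avoid division by zero in no-return areas
    ratio = np.abs(np.log((post + eps) / (pre + eps)))
    return ratio > threshold

pre  = np.array([[10.0, 10.0], [10.0, 10.0]])  # pre-event amplitudes
post = np.array([[10.5,  2.0], [10.0, 30.0]])  # collapses alter backscatter
print(log_ratio_change(pre, post))
```

The absolute value is used because building collapse can either lower backscatter (loss of structure) or raise it (rubble and double-bounce changes).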

Mazzanti P., Scancella S., Virelli M., Frittelli S., Nocente V., Lombardo F. Assessing the performance of multi-resolution satellite SAR images for post-earthquake damage detection and mapping. Remote Sensing of Environment.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:00
15:00
15min
Satellite technologies for infrastructures: state of the art, perspectives and Italian Space Agency contribution
Maria Virelli, Deodato Tapete

Monitoring critical infrastructures and structures (energy and transportation) is one of the application domains of national relevance for which satellite technologies may be exploited to improve detection of causative factors of deterioration, mapping of sectors at risk, and prioritization of structural and maintenance works.
Although ground-based non-destructive testing (NDT) methods have been successfully applied for decades, reaching very high standards for data quality and accuracy, Synthetic Aperture Radar (SAR) satellite technology and interferometric techniques (InSAR) have proved to be a real “game changer”. The impact on infrastructure monitoring was particularly significant, also in light of the increased flow of SAR data collected in different radar bands and disseminated by space agencies in these past years.
ASI’s COSMO-SkyMed constellation operating in X-band is among the satellite assets most exploited by the scientific and commercial communities to perform high precision and accuracy monitoring, at high spatial and temporal resolution. Recent studies undertaken by ASI following “data exploitation” initiatives on COSMO-SkyMed data have highlighted an increasing use of these data to study and monitor bridges, motorways, railways, pipelines and plants. Scientific proofs of concept and demonstrators have led to strengthening national and international expertise in the use of multi-temporal InSAR techniques, and paved the way for downstream applications and mature monitoring services.
At the same time, the global scale availability of C-band Sentinel-1 data has contributed to a further dissemination of InSAR techniques for infrastructure monitoring, although the specialist literature has highlighted the limitations due to spatial resolution, as well as the need to combine different band SAR data collected at different resolution.
From 2021 to 2023, through the “Multi-mission and Multi-Frequency SAR” Program (Tapete et al., 2022), ASI has supported R&D projects focusing on data fusion and post-processing techniques in the field of infrastructure deformation monitoring. Benefits achievable through integration of multi-band SAR data (including L-band SAOCOM) have been demonstrated.
In light of these investments and the maturity of SAR data processing algorithms for the generation of application products, ASI continues its institutional mission according to the following activities:
• In the upstream sector of satellite missions, improving SAR sensors to achieve new observation capabilities with COSMO-SkyMed Second Generation (CSG) satellites and facilitating the accessibility to long time series ensuring observation continuity;
• In the downstream sector of applications and services development, promoting SAR data exploitation, also in combination with navigation and telecommunications technologies, through the new programme “Innovation for Downstream Preparation” (I4DP), wherein management and monitoring of structural stability of critical infrastructures is among the application domains of recent funding and projects initiation.
The present paper therefore will illustrate ASI’s contribution on this application domain, alongside the current perspectives, in light of the COSMO-SkyMed programme (upstream) and “Multi-mission and Multi-Frequency SAR” and I4DP programmes (downstream), the latter with particular focus on the initiative dedicated to scientific users (I4DP_SCIENCE).

References
Tapete et al. (2022) ASI's “Multi-mission and Multi-Frequency SAR” Program for Algorithms Development and SAR Data Integration Towards Scientific Downstream Applications. IGARSS 2022, pp. 4498-4501, doi: 10.1109/IGARSS46834.2022.9884937.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:15
15:15
15min
Remote Sensing for structure and infrastructure monitoring: a review.
Alicandro Maria, Sara Zollini, Donatella Dominici, Nicole Pascucci

Inspection and maintenance of structures and infrastructures are, nowadays, hot topics. Extreme weather events and ageing stock, mainly, deteriorate the infrastructure network. Their structural performance should be checked periodically, but this is not always possible, both because it is difficult to carry out in practice and because, sometimes, insufficient funds are allocated to infrastructure management. In most western countries, a high percentage of bridges, roads and viaducts were built between the 1950s and the 1970s, so inspection plays a fundamental role in their proper functioning. Traditionally, instruments such as levels and total stations have been used to perform high accuracy Structural Health Monitoring (SHM). These can provide highly reliable real-time data on structural condition but, both for economic reasons and because of difficult-to-access areas, not all structures and infrastructures can be monitored with traditional techniques. Remote sensing can provide numerous advantages for structure and infrastructure monitoring, because the information can be extracted “from a distance” with high reliability and relatively low cost. A comprehensive review of the remote sensing techniques used for structure and infrastructure monitoring is presented in this paper, focusing especially on satellite remote sensing and UAV photogrammetry techniques. Nowadays, SAR (Synthetic Aperture Radar) and optical images are widely used for this purpose. On one side, PSInSAR (Permanent Scatterer SAR Interferometry) has been exploited to extract information on ground and infrastructure movements; on the other side, optical images make it possible to understand the changes that occurred in areas of interest by performing change detection analysis with different algorithms.
UAV photogrammetry outputs have been used for more detailed surveys of specific structures or infrastructures, both to metrically model the objects and, consequently, to detect degradation phenomena. The main results and considerations from the state of the art are discussed and compared, and the main advantages and limitations are finally outlined, in order to summarize the general achievements within this field.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:30
15:30
15min
The use of satellite data for the knowledge of the territory: geological applications
Giuseppe Solaro, Andrea Barone, Raffaele Castaldo, Vincenzo De Novellis, Antonio Pepe, Susi Pepe, Pietro Tizzani

The geological processes that occur several kilometers below the earth's surface, such as displacement along a seismogenic fault, pressure variations in magma reservoirs and landslides, in many cases cause deformations of the earth's surface that can be measured with geodetic methods and remote sensing techniques, such as differential SAR interferometry (DInSAR). DInSAR is a consolidated microwave remote sensing technique which, by exploiting two satellite images acquired at different times, makes it possible to estimate the surface deformation that occurred between the two acquisitions with centimeter precision. DInSAR systems are able to revisit the same area at regular intervals, providing very high spatial resolution information on the observed scene. For example, the ESA ERS-1/2 and Envisat satellites, active since 1992, have a revisit time of 35 days; the sensors of the Italian COSMO-SkyMed constellation have a revisit time of 8 days; finally, this time was reduced to 6 days for the "Sentinels" of the European Copernicus programme. These measurements are represented by a series of colored bands, the so-called fringes, in interferograms. The electromagnetic waves used are characterized by an alternation of crests spaced a few centimeters apart. By "counting" these crests, the radar is able to determine how far the observed object has moved: even if the object is hundreds of kilometers away and moves only a few centimeters, the number of crests characterizing the electromagnetic waves will change, allowing the displacement to be detected and measured with centimeter accuracy.
Interferometric techniques produce not only surface deformation maps measured along the sensor's line of sight; by exploiting a series of images acquired over time, it is also possible to follow the temporal evolution of the deformation. This information can be particularly valuable, for example, for measuring ground deformation in volcanic areas, as this parameter can be a precursor to the resumption of eruptive activity or an increase in unrest phenomena. Considering that the first satellites used for this purpose (ERS-1) have been collecting data since 1992, the deformation history of a volcano over the last 30 years can be analyzed in previously unimaginable detail.
The main results obtained in recent years in various geological contexts will be presented. For example, in the volcanic context, using the DInSAR technique and benefiting from the availability of long-term SAR archives, it was possible to detect and monitor the evolution of the surface deformation of the Campania volcanoes (Campi Flegrei, Vesuvio and Ischia) and, through geophysical inversion, to identify and analyze the deep sources responsible for the observed deformation. In the context of hydrogeological instability, by way of example, we will present a study conducted on the Ivanchic landslide in Umbria, characterized by relatively slow movement, which, starting from satellite data and combining geological, geotechnical and geophysical knowledge, allowed us to characterize the geometry and the detachment surface of the landslide. Finally, some applications of these techniques in the urban context, identifying deformations of infrastructure such as buildings, dams and viaducts, will be illustrated.
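As a hedged illustration of the fringe-counting principle described above, the unwrapped interferometric phase can be converted into line-of-sight displacement with d = -λφ/(4π); the sign convention and the C-band wavelength used below are illustrative assumptions, not values specific to the missions cited:

```python
import math

def phase_to_los_displacement(unwrapped_phase_rad, wavelength_m):
    """Convert unwrapped differential interferometric phase to line-of-sight
    (LOS) displacement: d = -wavelength * phase / (4 * pi).
    One full fringe (2*pi of phase) corresponds to half a wavelength of motion,
    which is why centimeter accuracy is achievable with cm-scale wavelengths.
    Note: the sign convention (motion towards/away from the sensor) varies
    between processors; the one used here is an illustrative assumption."""
    return -wavelength_m * unwrapped_phase_rad / (4.0 * math.pi)

# Illustrative C-band wavelength (~5.55 cm, typical of ERS/Envisat/Sentinel-class sensors)
C_BAND_WAVELENGTH_M = 0.0555

# One full fringe of phase change corresponds to half a wavelength of LOS motion
one_fringe_m = phase_to_los_displacement(-2.0 * math.pi, C_BAND_WAVELENGTH_M)
```

Counting N fringes therefore amounts to N·λ/2 of LOS motion, which is how centimeter-level displacements are read off an interferogram.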

AIT Contribution
Sala Videoconferenza @ PoliBa
15:45
15:45
15min
Satellite Intelligence for proactive monitoring of the territory: the Rheticus Safeland service
Giuseppe Forenza, Alessandra Bleve

Ground instability can cause severe damage to infrastructure and the environment. Falling rocks can destroy roads, pipes, and buildings and even endanger citizens. In recent years, a continuous increase in the intensity and frequency of ground instability phenomena has been observed, with a clear relationship to human activity and climate change.
Advanced technological solutions for monitoring and forecasting instability offer the opportunity to prevent disasters by monitoring ground movement phenomena to detect potential hazards in time.
Rheticus Safeland is a wide-area continuous monitoring service designed to enable agencies to effectively collect, visualize and monitor land instability for better management and planning of soil protection activities.
The Rheticus Safeland service is used in various sectors, from utilities to road infrastructures to Public Administrations, which, thanks to this product, can observe vast areas such as Regions or Districts.
Rheticus Safeland's in-depth analysis comprehensively traces any ground movement and slope instability in the area of interest. This timely information provides valuable assistance in planning campaigns to prevent and mitigate land instability risks.
In Italy, the Geological Service of the Friuli Venezia Giulia Region monitors and prevents risks from hydrogeological instability phenomena, using information from Rheticus Safeland.
The mountainous areas of the Friuli Venezia Giulia region are particularly susceptible to geological hazards, including landslides and sinkholes. They are prone to slope instability and host sites of geological weakness, such as fault zones, shear zones, and weak rock or mineral strata. The region therefore needed, on the one hand, a solution to continuously collect, visualize and analyze data on unstable areas and, on the other, to monitor ground movement affecting buildings and roads, to protect citizens from danger and avoid increased costs and delays in new developments.
To help the Friuli Venezia Giulia Region, Planetek developed Rheticus® Safeland, a vertical geoinformation service that continuously tracks ground movements via satellite radar interferometry. Satellite radar data are a reliable source of information. They are ideal for this task because they are readily available, continuously updated, and allow users to identify trends in ground movements with pinpoint accuracy.

Rheticus Safeland has enabled the Geological Survey to collect and provide detailed information for buildings and transport infrastructure, allowing engineers, planners, and other users to analyze ground movement phenomena over time with great accuracy. The comprehensive picture provided by the Rheticus Safeland service gives planners the knowledge to prioritize risk mitigation measures, make better decisions and proactively avoid critical issues that arise when ongoing phenomena are not fully understood.
The Rheticus service platform was awarded the World Smart City Award because it made satellite information accessible to everyone. Users do not need any knowledge or experience with satellite radar data, interferometry, or GIS applications. Interpreting this data usually requires considerable technical expertise, and the results can be difficult for non-experts to understand.

Rheticus Safeland has solved this problem by simplifying the complex information gathered from the analysis of radar data. Clear studies and an intuitive dashboard interface combine maps, reports, and dynamic geo-analysis into one intelligent application that provides valuable information.
Complex, multi-source data is automatically processed by the platform, so users can focus on what they do best: observing, tracking, and managing the territory to ensure the safety of people and assets. With Rheticus Safeland, these users get an accurate and complete territory overview with timely updates and dynamic analysis. After implementation, the Geological Survey of the Autonomous Region of Friuli Venezia Giulia significantly reduced the time and costs associated with traditional land image collection and ongoing software development.

AIT Contribution
Sala Videoconferenza @ PoliBa
16:15
16:15
30min
Coffee break
Sala Videoconferenza @ PoliBa
16:15
30min
Coffee break
Sala Biblioteca @ PoliBa
16:30
16:30
120min
The JOSM editor - basic/intermediate level
Alessandro Palmas

Subtitle:
the best and most widely used OpenStreetMap data editor, from scratch to an intermediate level.
JOSM, a Java editor, is powerful and fast; it opens GPX, CSV, GeoJSON, Shapefile and GeoTIFF files as well as georeferenced photos, is flexible thanks to its many plugins, ... and more.

We will start from scratch and, after covering the basic concepts for drawing new objects or editing existing ones, we will look at some slightly more advanced features.

Requirements:

  • an OpenStreetMap account https://www.openstreetmap.org/user/new
  • a computer and a real mouse with a scroll wheel (not a laptop pointing device)
  • Java Runtime installed https://wiki.openstreetmap.org/wiki/IT:JOSM/Installazione (see the Java installation section)
  • if possible, JOSM already installed https://wiki.openstreetmap.org/wiki/IT:JOSM/Installazione

The workshop in detail:

  • Installing JOSM and initial configuration
  • Overview of the toolbars and side panels
  • Loading some data from OSM
  • Basic commands: draw, edit, cut, merge, delete
  • Drawing new objects and editing existing geometries
  • Applying the right attributes (the tags)
  • Uploading the changes to OSM
  • Analyzing the changeset (the set of data just uploaded)
  • Opening GPX files with voice notes
  • Background imagery: using existing sources or adding new WMS/TMS services; correcting imagery offsets
  • Into the configuration: enabling remote control, installing some plugins, enabling expert mode
  • A short editing session on an area you know personally
  • Other features
  • Questions & answers
GFOSS.it Contributions
Aula 1 @ UniBa
16:45
16:45
15min
Investigating PLSR and RF for retrieving wheat crop traits in a field phenotyping experiment using full-range hyperspectral data: performance assessment and modelling interpretation
Ramin Heidarian Dehkordi, Mirco Boschetti, Gabriele Candiani, Federico Carotenuto, Carla Cesaraccio, Andrea Genangeli, Beniamino Gioli, Donato Cillis, Marina Ranghetti

Crop trait monitoring is a fundamental step for controlling crop productivity in the context of precision agriculture and field phenotyping. At present, the use of hyperspectral data in machine learning regression algorithms (MLRAs) has attracted increasing attention as a way to alleviate the challenges associated with traditional crop trait measurements. However, the performance of such hyperspectral-based MLRA models for crop trait retrieval with respect to the well-known natural variations in structural or biochemical crop properties remains largely unassessed. As such, this experiment was set up to assess whether full-range hyperspectral data acquired by a handheld spectrometer (Spectral Evolution; 350-2500 nm), used as input to partial least squares regression (PLSR) and random forest (RF) models, are capable of modeling different wheat crop traits at the canopy level. The examined crop traits were leaf area index (LAI), canopy water content (CWC), canopy chlorophyll content (CCC), and canopy nitrogen content (CNC). This approach allowed us, as an overarching objective, to compare the performance of the two aforementioned MLRA models while also focusing on the physical interpretation of the modelling results for each particular crop trait.
Overall, PLSR provided remarkably higher accuracy, tested with a cross-validation strategy, than RF for all the crop traits. More precisely, PLSR achieved R2 (resp. nRMSE%) values of 0.72 (11.97), 0.77 (10.89), 0.70 (14.61), and 0.74 (14.38) for LAI, CWC, CCC, and CNC, respectively. All PLSR models indicated robust prediction capability with RPD values greater than 1.4, and amongst them, CWC showed excellent prediction performance with an RPD higher than 2. RF yielded less predictive models, with R2 (resp. nRMSE%) values of 0.59 (14.59), 0.42 (17.42), 0.50 (18.86), and 0.42 (21.41) for LAI, CWC, CCC, and CNC, respectively. The RF models for LAI and CCC showed good prediction capabilities (RPD > 1.4), whilst the RF models for CWC and CNC were not reliable (RPD < 1.4).
In general, the RF band importance and PLSR regression coefficient results revealed physically meaningful and consistent patterns for each specific crop trait. Specific wavelengths in the SWIR (1716-1745 nm) and NIR (1057-1120 nm), together with the Green and Red-Edge bands, showed the highest importance for LAI retrieval. The water absorption regions around 910 nm and 1200 nm, as well as the Red-Edge and visible parts of the spectrum, were of higher importance for the retrieval of CWC. The best-performing bands for CCC retrieval were situated in the Red-Edge and Green spectral channels. The SWIR regions between 1600-1800 nm and 2100-2300 nm appeared to be important (in particular with respect to the other traits), alongside the Red-Edge part of the spectrum, for retrieving CNC.
We demonstrated that full-range hyperspectral data in combination with MLRA algorithms can provide accurate estimates of wheat crop traits at the canopy level. The success of utilizing hyperspectral data in MLRA algorithms was further highlighted by the physically-meaningful modelling performances in accordance with the subtle structural and biochemical crop properties. Our results suggest that such spectroscopic hyperspectral-based MLRA approaches could be a powerful tool to accurately monitor crop status throughout the cropping season to improve high-throughput phenotyping activities and to further aid precision agricultural practices.
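The accuracy figures quoted above (R2, nRMSE%, RPD) can be reproduced from observed/predicted pairs in a few lines. A minimal sketch with made-up numbers, assuming nRMSE is normalised by the observed mean and RPD is the ratio of the observations' standard deviation to the RMSE (other normalisations exist in the literature):

```python
import math

def regression_metrics(obs, pred):
    """Compute R2, nRMSE% and RPD for a set of observed/predicted values.
    Assumptions (conventions vary between papers): nRMSE is normalised by
    the mean of the observations; RPD = SD(observations) / RMSE."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    rmse = math.sqrt(ss_res / n)
    sd_obs = math.sqrt(ss_tot / n)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "nRMSE%": 100.0 * rmse / mean_obs,
        "RPD": sd_obs / rmse,  # > 1.4 usable, > 2 excellent (rule of thumb)
    }

# Made-up LAI observations vs model predictions (illustrative only)
obs = [1.2, 2.0, 2.9, 3.5, 4.1, 4.8]
pred = [1.0, 2.2, 2.7, 3.8, 4.0, 5.0]
metrics = regression_metrics(obs, pred)
```

In a cross-validation setting these metrics would be computed on the held-out predictions of each fold, then aggregated.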

AIT Contribution
Sala Videoconferenza @ PoliBa
16:45
15min
Investigating the Correlation between Sentinel-2 Multispectral Images and Ground-Based Field Measurements of Soil Moisture (Case Study: Mendatica, Liguria, Italy)
Alessandro Iacopino, Gachpaz Saba, Giorgio Boni, Gabriele Moser, Bianca Federici

Surface Soil Moisture (SSM) is an essential climate variable that links atmospheric and surface processes, controlling the exchange of water, the partitioning of available energy at the ground surface and biochemical processes. SSM also plays a crucial role in controlling hydro-geological hazards, like rainfall-triggered landslides.
SSM can be monitored using various methods: ground-based measurements, proximal methods, or airborne/spaceborne remote sensing. Traditional methods are mainly ground-based measurements through contact sensors; they provide accurate but single-point measurements and require manual placement and intensive maintenance, especially in large-scale studies. Because SSM is a heterogeneous variable in space and time, data acquisition with traditional single-point measurement methods is very limited, especially at large scale.
Advances in satellite Remote Sensing (RS) bring the possibility of continuous land surface observation and characterization over time. In addition to the geometric conditions and the optical and mineral properties of the earth's surface, SSM is one of the factors that control the radiation emitted from the earth's surface. All parts of the electromagnetic (EM) spectrum normally used for earth observation can be used for quantitative SSM extraction. Considering the penetration depth of the EM wavelength, RS methods can be classified into three categories: thermal, microwave, and optical RS. Thermal RS can be used individually or in combination with vegetation indices, like the Crop Water Stress Index (CWSI). The acquisition of thermal data has high costs and, in addition, the differentiation between soil temperature and tree canopy temperature is not easily achieved. Most globally available SSM products are derived from microwave RS, due to the ability of microwave radiation to penetrate cloud cover, but they are highly sensitive to surface roughness and have coarse spatial resolution, making them inefficient for studies at small scale. Optical RS in the visible, near-infrared, and shortwave infrared ranges measures the radiation reflected from the earth's surface, which can be related to soil moisture and provides very high spatial resolution data.
In the present study, the potential of multispectral satellite images acquired by Sentinel-2 (S-2) for SSM extraction is investigated. For this purpose, a yearly dataset of hourly SSM measurements, acquired at four different depths (-10 cm, -35 cm, -55 cm, -85 cm) by a monitoring network in Mendatica (Liguria, Italy) from 1st July 2020 to 30th June 2021, was used to look for correlations with S-2 images. Data acquired by the sensors were previously calibrated, taking into account the soil-specific characteristics of the areas (Bovolenta et al., 2020), and the reliability of the dataset was verified. After performing the required preprocessing on the satellite images, the correlation coefficients between each band of the S-2 images and the ground-based measurements were calculated. The results show the potential of each band, or combinations of bands, to estimate SSM from RS through linear estimators.
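The band-wise analysis described above boils down to the Pearson correlation coefficient between a band's reflectance (sampled at the station locations) and the probe readings. A minimal sketch with invented values (the band-11 reflectances and SSM percentages below are illustrative, not data from the Mendatica network):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. a satellite band's reflectance and ground-based SSM readings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: SWIR reflectance tends to drop as soil gets wetter,
# so a strong negative correlation is expected (numbers are made up)
b11_reflectance = [0.32, 0.28, 0.25, 0.21, 0.18]  # S-2 band 11 (illustrative)
ssm_percent = [12.0, 17.0, 21.0, 27.0, 33.0]      # -10 cm probe (illustrative)
r = pearson_r(b11_reflectance, ssm_percent)
```

Repeating this for every band (and band combination) yields the correlation table from which the best linear estimators are picked.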

AIT Contribution
Sala Biblioteca @ PoliBa
17:00
17:00
15min
A Preliminary Investigation of the PRISMA Hyperspectral Sensor Potential for Burned Area Mapping in an Operational Context
Luca Cenci, Luca Pulvirenti, Giuseppe Squicciarino

In the past, the scarcity of hyperspectral Earth Observation (EO) data hindered the development of operational applications based on such technology. Considering the current increasing availability of this kind of data (e.g., PRISMA, EnMAP), which is expected to grow further in the future (e.g., Copernicus CHIME, PRISMA Second Generation), it is important to evaluate the potential of hyperspectral remote sensing for EO applications that could provide operational services in the next few years. Within this context, this work was conceived as a preliminary investigation of the capabilities of the PRISMA hyperspectral sensor for burned area (BA) mapping in an operational context (e.g., civil protection applications).

One of the most common approaches to BA mapping via EO data is based on the Differenced Normalized Burn Ratio (dNBR) index, which detects the fire-induced alterations to vegetation and soils by taking advantage of the spectral information acquired in the Near InfraRed (NIR: 0.7-1.2 µm) and Short-Wave InfraRed (SWIR: 1.2-2.5 µm) bands of two images: one acquired before the fire event and one after [1]. Multispectral imagery commonly used for operational BA mapping (e.g., Sentinel-2, Landsat) has specific NIR and SWIR bands that can be used for dNBR computation [2]. Hyperspectral images, instead, allow several combinations of bands acquired in the NIR and SWIR spectral regions, thereby generating numerous (and, in some cases, only slightly) different dNBR maps. Among these band combinations, the more reliable ones (i.e., those capable of producing more accurate BA maps) shall be identified. At the same time – since the dNBR is also sensitive to non-fire-induced spectral alterations [1] – the less reliable ones shall be avoided.

The aim of this study was to set up an experiment in which an automatic methodology for operational BA mapping based on PRISMA Level 2D products (i.e., orthorectified surface reflectance imagery; GSD: 30 m) was prototyped. The wildfire that occurred on Pantelleria Island (Italy) on 17/08/2022 was used as a case study. For this event, two PRISMA images were available, acquired on 06/08/2022 (pre-event) and 16/07/2022 (post-event). An ancillary shapefile produced by the Copernicus Emergency Management Service (EMS), representing the extent of the BA on 19/08/2022 (ca. 28 ha), was used as a reference layer to validate the analysis results.

The methodology that was set up – conceptually similar to the one developed by [2] – produced more than 7600 dNBR maps (obtained from the combinations of the PRISMA NIR and SWIR spectral bands), from which the pixels corresponding to the BA were mapped using the Otsu approach for automatic threshold selection. The analysis was carried out over the whole territory of Pantelleria Island, where water bodies, clouds and cloud shadows (as well as poor-quality PRISMA bands) were masked out. Then, the accuracy of the classification was quantified (as a percentage) by means of the Dice Coefficient (DC) [3], calculated using the Copernicus EMS reference BA layer. According to the DC, the best band combination for mapping the BA of the Pantelleria 2022 wildfire corresponds to the 0.903 µm (NIR) and 2.253 µm (SWIR) wavelengths. The DC associated with this BA map was 89.4%.
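The processing chain summarised above (dNBR computation, Otsu thresholding, Dice validation) can be sketched on toy per-pixel values; this is a simplified illustration with invented numbers, not the PRISMA pipeline itself:

```python
def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced Normalized Burn Ratio for one pixel: NBR_pre - NBR_post."""
    nbr = lambda nir, swir: (nir - swir) / (nir + swir)
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

def otsu_threshold(values, bins=64):
    """Basic Otsu threshold: scan candidate thresholds and keep the one
    maximising the between-class variance of the two resulting groups."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / bins
    best_t, best_var = lo, -1.0
    for i in range(1, bins):
        t = lo + i * step
        left = [v for v in values if v < t]
        right = [v for v in values if v >= t]
        if not left or not right:
            continue
        w_l, w_r = len(left) / len(values), len(right) / len(values)
        m_l, m_r = sum(left) / len(left), sum(right) / len(right)
        between_var = w_l * w_r * (m_l - m_r) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

def dice_coefficient(mask_a, mask_b):
    """Dice Coefficient between two binary masks (1 = burned)."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

# Toy dNBR values: a cluster of unburned pixels and a cluster of burned ones
dnbr_values = [0.05, 0.08, 0.02, 0.61, 0.55, 0.58, 0.07, 0.63]
t = otsu_threshold(dnbr_values)
burned_mask = [1 if v >= t else 0 for v in dnbr_values]
reference_mask = [0, 0, 0, 1, 1, 1, 0, 1]  # stand-in for the Copernicus EMS layer
dc = dice_coefficient(burned_mask, reference_mask)
```

On real rasters the same three steps run per band combination, which is how thousands of candidate dNBR maps can be ranked automatically.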

In an operational context, ancillary information (i.e., BA reference layers) is often not available to identify the most reliable bands for BA mapping. Therefore, an image-based selection criterion shall be used to achieve this objective. To this end, for every NIR/SWIR band combination used in the analysis, the spectral separability [3] of the pixels classified as BA from the neighbouring ones classified as non-BA was computed. Then, the band combination with the highest separability value was used to identify the best dNBR map for BA mapping. For this specific exercise, this combination corresponds to the 1.038 µm (NIR) and 2.245 µm (SWIR) wavelengths. The DC associated with this BA map was 88.8%, very similar to the one obtained via the ancillary reference BA layer.

The details of the methodology will be presented at the conference, where the analysis results will also be thoroughly discussed.

References:

[1] van Gerrevink M.J. & Veraverbeke S. (2021). Evaluating the Hyperspectral Sensitivity of the Differenced Normalized Burn Ratio for Assessing Fire Severity. Remote Sensing. 13(22):4611.

[2] Pulvirenti L. et al. (2023). Near real-time generation of a country-level burned area database for Italy from Sentinel-2 data and active fire detections. Remote Sensing Applications: Society and Environment. 29.

[3] Roteta E. et al. (2019). Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa. Remote Sensing of Environment. 222, 1–17.

AIT Contribution
Sala Videoconferenza @ PoliBa
17:00
15min
Remote sensing and Sentinel-2 data role within the Common Agricultural Policy 2023-2027
Enrico Borgogno-Mondino, Alessandro Farbo, Filippo Sarvia, Samuele De Petris, Elena Xausa, Gianluca Cantamessa

Since 1962 the Common Agricultural Policy (CAP) has supported the agricultural sector through contributions aimed at preserving the environment and improving crop production. The local Paying Agencies (PA) verify the correctness, completeness and compliance of farmers' applications through administrative checks (ACs) and on-the-spot checks (OTSCs). ACs are performed on 100% of applications to automatically detect formal faults through IT tools. OTSCs are performed on about 5% of applications, testing compliance with the envisaged commitments and obligations, verifying eligibility criteria and checking the truthfulness of the declared area size. Recently, Article 10 of the recent EU regulation (N. 1173/2022) defined new controls based on remote sensing, specifically adopting Copernicus Sentinel-2 (S2) imagery, or "other data" of at least equivalent value. The adoption of S2 imagery makes it possible to monitor all areas declared in farmers' applications with the aim of detecting irregularities. Consequently, this type of control can be applied to all CAP applications (no longer only 5%) in each member state. In this framework, the new CAP 2023-2027 requires a gradual implementation of such remote sensing-based tools within member states' control systems, becoming compulsory in 2024. Furthermore, the 2023-2027 CAP will introduce some new types of contributions called 'eco-schemes' related to climate, environment and animal welfare. Nevertheless, a proper review of how remote sensing-based tools can be applied to these new contributions is missing. Therefore, in this work we preliminarily explore which markers can be detected by Copernicus S2 data in terms of field surface, agronomic practices and monitoring period, possibly related to a specific CAP contribution requirement. Focuses will concern: (a) basic payment; (b) eco-schemes; (c) enhanced conditionality.

AIT Contribution
Sala Biblioteca @ PoliBa
17:15
17:15
15min
Double Crop Mapping using Sentinel-2 Data in Support to Implementation and Monitoring of the 2023-2027 Common Agricultural Policy within Rural Development Interventions
Enrico Borgogno-Mondino, Filippo Sarvia, Emma Izquierdo, Francesco Vuolo

Sustainable agriculture is one of the main focuses of the 2023-2027 Common Agricultural Policy (CAP). For this reason, the new CAP strategic plan presents greater ambitions on climate and environmental action compared with the previous programming period, and stronger incentives that promote climate- and environment-friendly farming practices (i.e. minimizing soil disturbance, organic and carbon farming, maintaining permanent ground cover and adopting combined rotations) are provided. Among the several options, avoiding bare soil conditions and consequently promoting cover crops, or even cultivating two main crops in a year, can provide excellent benefits. In particular, soil erosion and nitrate percolation are limited, and soil structure, fertility, organic carbon sequestration and adaptability to climate change are supported. Consequently, the extent of cultivated area currently managed in this way should be estimated. Within the farmer's CAP application, a single (i.e. winter or summer) or a double crop can be declared, even if more crops can indeed be cultivated afterwards. Accordingly, the scope of this research is to design and validate an approach, based on Copernicus Sentinel-2 (S2) data, to classify and map the fields where crop cover maintenance is promoted rather than a single crop. The study area is located in Austria, where a representative sample of the main crop types cultivated in the region was derived from the declarations to the Integrated Administration and Control System (IACS) for the year 2021. The approach relies on the classification of reflectance data from S2 time series, including nine vegetation indices, used to identify single or double crop systems. For this purpose, two supervised classifiers were applied, namely One-Class Support Vector Machine (OneClassSVM) and Random Forest (RF).
Statistical measures such as Overall Accuracy and Cohen's kappa coefficient were derived from the confusion matrices, and the differences between field data and mapping results were analysed. A new map showing single vs double-crop systems was generated for further spatial analysis and interpretation.
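The accuracy measures mentioned above can be derived directly from a confusion matrix; a minimal sketch (the 2x2 matrix below is invented for illustration, not a result of the study):

```python
def accuracy_metrics(confusion):
    """Overall Accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(n))
    oa = correct / total
    # Expected chance agreement from the row and column marginals
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(n)
    ) / total ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, kappa

# Illustrative 2x2 matrix: single-crop vs double-crop fields (made-up counts)
cm = [[85, 15],   # reference: single crop
      [10, 90]]   # reference: double crop
oa, kappa = accuracy_metrics(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside Overall Accuracy for class maps like this one.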

AIT Contribution
Sala Biblioteca @ PoliBa
17:15
15min
Spectroscopic Determination of Crop Residue Cover using Exponential-Gaussian Optimization of absorption features and Random Forest
Ramin Heidarian Dehkordi, Monica Pepe, Katayoun Fakherifard

Non-photosynthetic vegetation (NPV) is a key variable in remote sensing of conservation agriculture and, more recently, in carbon farming, due to its important role in water, nutrient and carbon cycling. For this reason, both the detection and the characterization of NPV represent a relevant topic in the exploitation of Earth Observation (EO) data for agriculture monitoring.
Studies on NPV mapping by EO data benefit from the availability of hyperspectral data due to its high spectral resolution, particularly at wavelengths from 1.6 to 2.3 µm, where the spectral features of carbon-based constituents of plants are distinctive. The launch of new-generation hyperspectral satellites, such as PRISMA (PRecursore IperSpettrale della Missione Applicativa) and, more recently, EnMAP (Environmental Mapping and Analysis Program), offers research opportunities in a field previously investigated mainly by proximal and aerial sensing.
Early studies already proved the potential of PRISMA for NPV, due to the prominence of the cellulose-lignin key absorption feature at 2.1 µm. More recent studies on PRISMA make use of machine learning regression algorithms (MLRA) trained on radiative transfer model simulations, or on the Exponential Gaussian Optimization (EGO) of specific absorption features in sensed data.
This second approach, proposed in this study, aims at the determination of Crop Residue Cover (CRC) from PRISMA hyperspectral imagery through a two-step procedure: i) first, an Exponential Gaussian Optimization to model pre-selected absorption features, also reducing the spectral dimension; ii) second, a Random Forest paradigm performing non-linear regression to predict and map CRC.
This study exploits, for the training phase, an extensive and well-documented spectral library, namely "Reflectance spectra of agricultural field conditions supporting remote sensing evaluation of non-photosynthetic vegetation cover", made available online by USGS (https://doi.org/10.5066/P9XK3867). It consists of 916 in situ surface reflectance spectra collected with a proximal full-range spectroradiometer (350 to 2500 nm). Spectra are annotated with the corresponding fractions of NPV, soil and (if any) green vegetation, as estimated by point sampling digital photographs of the radiometer field-of-view.
This spectral library was resampled to PRISMA spectral resolution prior to the Exponential Gaussian Optimization (EGO) on 4 spectral intervals of interest, already tested in previous studies, corresponding to the absorption bands of cellulose-lignin, plant pigments, vegetation water content and clays.
The EGO algorithm optimizes continuum-removed spectra by 4 parameters – absorption band depth, center, width and asymmetry – and since this is performed for each spectral interval, it results in 16 parameters. This is a reduced space compared to that of the input spectra (around 230 bands). This parameter space was used to train a Random Forest to model the regression between Crop Residue Cover percentage and the EGO parameters, achieving a determination coefficient of around 0.8 (RPD ≈ 2.1; MSE ≈ 0.02) on the test set.
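The full EGO fit estimates four parameters (depth, center, width, asymmetry) per absorption interval; the sketch below illustrates only the simpler underlying step, linear continuum removal and band-depth extraction over one interval, with invented reflectance values around the 2.1 µm cellulose-lignin feature:

```python
def continuum_removed_depth(wavelengths, reflectance):
    """Linear continuum removal over one absorption interval: draw the
    continuum between the interval endpoints, divide the spectrum by it,
    and return (band depth, band centre). Band depth = 1 - R/continuum,
    taken at its maximum inside the interval."""
    w0, w1 = wavelengths[0], wavelengths[-1]
    r0, r1 = reflectance[0], reflectance[-1]
    depths = []
    for w, r in zip(wavelengths, reflectance):
        continuum = r0 + (r1 - r0) * (w - w0) / (w1 - w0)
        depths.append(1.0 - r / continuum)
    d = max(depths)
    return d, wavelengths[depths.index(d)]

# Invented spectrum around the ~2.1 um cellulose-lignin absorption (in um)
wl = [2.04, 2.06, 2.08, 2.10, 2.12, 2.14, 2.16]
refl = [0.42, 0.38, 0.33, 0.30, 0.34, 0.39, 0.44]
depth, centre = continuum_removed_depth(wl, refl)
```

Repeating this over the four intervals, with the EGO parametrisation instead of this bare band depth, yields the 16-dimensional feature space fed to the Random Forest.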
The RF model was first validated against an independent spectral library of around 100 spectra, collected during a proximal sensing survey with a portable full-range spectroradiometer conducted in a large farm test site (3800 ha) located in Jolanda di Savoia (Italy). Also in this case, spectra are annotated with Crop Residue Cover percentages and resampled to PRISMA spectral resolution. The model performance on this dataset is in agreement with the test on the USGS spectral library.
Finally, the regression model was applied to a PRISMA image acquired over the Jolanda di Savoia farm (June 21st, 2021) for CRC mapping. The resulting map was validated against field observations: the CRC map shows values and patterns in good agreement with ground data, confirming the encouraging prediction capabilities of the model.
In conclusion, the proposed approach, trained on a spectral library, is predictive, as proved on an independent spectral dataset and on the PRISMA image. Further work will encompass testing the robustness of the model by collecting field data of Crop Residue Cover at the PRISMA scale; monitoring CRC dynamics on PRISMA time series; and using Radiative Transfer Model simulations to enlarge the training set, accounting also for different factors controlling reflectance (e.g. soil moisture).

AIT Contribution
Sala Videoconferenza @ PoliBa
17:30
17:30
15min
Exploiting PRISMA hyperspectral data for planetary science: remote characterization of paleo-hydrologic environments on Earth and Mars
Paola Manzari, Veronica Camplone, Angelo Zinzi, Eleonora Ammannito, Giuseppe Sindoni, Francesco Zucca, Gianluca Polenta

Here we show some preliminary results of the hyperspectral investigation on a paleohydrological environment, using ASI PRISMA data (Caporusso et al., 2020), with the scope of subsequently comparing them with similar environments on Mars (Zinzi et al., 2023).
The datasets investigated are located in the Gobi Desert. The first areas were selected on the basis of the availability of mineralogical data on rocks whose X-ray diffraction analysis revealed a composition of quartz, albite, phyllosilicates and sporadic calcite (Sekine et al., 2020).
Taking into account that quartz and albite do not show strong, unambiguous diagnostic absorption features in the PRISMA spectral range, we aim to investigate whether their occurrence can be deduced from the reflectance values and the whole spectrum.
Phyllosilicates, in contrast, were found in the PRISMA data: their occurrence seems to be attested by the absorption associated with the Al-OH bond around 2.19 micrometers in the structure of illite, smectite and kaolinite.
So far, we have not observed carbonate-related absorptions in the investigated ROIs. This may be due to a negligible abundance of carbonates at the PRISMA scale, but this hypothesis needs further investigation.
The study will also include another possible delta in the Gobi area, probably characterized by a basaltic composition (Mason et al., 2021 and references therein) and therefore more similar to the Jezero delta mineralogy.
We will proceed with the identification and mapping of minerals in these areas, with a view to comparing water-related environments on Earth and Mars.
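The Al-OH absorption mentioned above is typically quantified through a continuum-removed band depth, i.e. the relative drop of reflectance below a straight line joining the band shoulders. A minimal sketch (with a toy spectrum and illustrative shoulder wavelengths, not PRISMA data) is:

```python
# Minimal sketch (assumption, not the authors' pipeline): continuum-removed
# band depth of the Al-OH absorption near 2.19 um.
import numpy as np

def band_depth(wl, refl, left, center, right):
    """Continuum-removed depth at `center`, shoulders at `left`/`right` (um)."""
    r = np.interp([left, center, right], wl, refl)
    # Straight-line continuum between the two shoulders, evaluated at center
    continuum = r[0] + (r[2] - r[0]) * (center - left) / (right - left)
    return 1.0 - r[1] / continuum

# Toy spectrum with a Gaussian absorption centred at 2.19 um
wl = np.linspace(2.0, 2.4, 200)
refl = 0.35 - 0.08 * np.exp(-((wl - 2.19) / 0.02) ** 2)
depth = band_depth(wl, refl, 2.12, 2.19, 2.26)
print(f"Al-OH band depth: {depth:.3f}")
```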

References

Caporusso et al., 2020. IEEE IGARSS 2020 Proceedings. 3282–3285.
Zinzi et al., 2023. Abstract XVIII Congresso Scienze Planetarie, Perugia.
Sekine et al., 2020. Minerals, 10(9):792.
Mason et al., 2021, abs#1664, 52nd Lunar and Planetary Science Conference.

AIT Contribution
Sala Videoconferenza @ PoliBa
17:30
15min
Monitoring seed phenolic maturity in a Nebbiolo vineyard by means of the NDVI index vs foliar NIR spectroscopy
Alberto Cugnetto, Matteo Altare, Giorgio Masoero, Silvia Guidoni

Foliar NIR Spectroscopy and EOS platform for monitoring polyphenolic maturity in Nebbiolo

A. Cugnetto1, M. Altare2, G. Masoero 1,3, S. Guidoni 3,1.

1 Accademia di Agricoltura di Torino (TO)
2 Az. Vitivinicola Costa di Bussia, Monforte (CN)
3 Dipartimento Scienze Agrarie, Forestali e Alimentari, Università di Torino (TO)

A Nebbiolo vineyard was divided into three vigor zones (High, Medium, Low) according to the NDVI survey supplied by the EOS Crop Monitoring web platform. In four sessions, leaf samples were collected, on which the petiolar pH [1] and the NIR spectrum were determined using the SCiO™ v1.2 apparatus (740-1070 nm, 331 reflectance points). From samples of 10 berries, the seeds were cleaned and scanned by NIRS, obtaining 99 spectra. The polyphenolic maturity of the seeds was expressed as the Non-Extractable Polyphenols / Extractable Polyphenols (PSM) ratio, analyzed according to the Di Stefano method [2]. The value was estimated by a WinISI-II PLS equation recalculated on published data [3], which has a predictive value of R2 = 0.70 and an RMSE of 8%. From the NIR spectra of 164 leaves, a SPAD value was estimated (by an unpublished equation) and the seed PSM was regressed on the 16 composition parameters [4]. The most important variables explaining the model were those related to the bromatological composition of the plant cell wall (cellulose, ADL, digestible NDF, non-digestible NDF, total digestibility). The fitting of the 10-vine vigor groups gave R2 = 0.88 (mean RMSE 12%). The petiolar pH did not show significant relations with the seed PSM. The direct calibration of the NIR spectrum on the seed PSM made with WinISI revealed R2 = 0.84 (MRMSE 5%, with an outlier group), while using the PLSR of LabSCiO we obtained R2 = 0.73 (MRMSE 6%, with an outlier group).
This part of the work demonstrates that proximal scrutiny of the NIR spectrum of Nebbiolo leaves allows an estimation of seed polyphenol maturity, provided that the result is consolidated as the mean of at least 15 replicate measurements.
Once the individual calculations had been examined, the group averages were processed by performing a linear regression of the PSM on the averages of the available variables extracted from the NIR spectra, and on NDVI measurements taken from the Sentinel-2 satellite. The examined variables had different importance, with SPAD showing the highest (R2 = 0.49). The satellite NDVI fitted the seed PSM with R2 = 0.34; this is below the forecast accuracy provided by the leaf spectra, but worthy of attention for its simplicity of use. The obtained linear equation was PSM = 5.71 + 2.42 * NDVI.
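The reported linear model can be applied directly; the NDVI values below are illustrative only, not measurements from the study:

```python
# Applying the reported linear model PSM = 5.71 + 2.42 * NDVI
# (fitted by the authors with R2 = 0.34 on Sentinel-2 NDVI).
def psm_from_ndvi(ndvi: float) -> float:
    """Seed Non-Extractable/Extractable Polyphenols ratio from NDVI."""
    return 5.71 + 2.42 * ndvi

for ndvi in (0.4, 0.6, 0.8):   # hypothetical vigor-zone NDVI values
    print(f"NDVI {ndvi:.1f} -> estimated PSM {psm_from_ndvi(ndvi):.2f}")
```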

The work demonstrates that, with modern satellite remote sensing technologies, it is possible to improve grape sampling during the maturation period by better identifying the internal plot variability, which is related to different seed ripening levels. The leaf NIR spectrum detected at ground level with the SCiO™ v1.2 is a rapid proximal method for estimating Nebbiolo seed ripening directly on the farm.

1 Masoero G, Cugnetto A. 2018. The raw pH in plants: a multifaceted parameter. Journal of Agronomy Research, 1(2), 18-34. ISSN: 2639-3166. DOI: 10.14302/issn.2639-3166.jar-18-2397. https://openaccesspub.org/jar/article/871
2 Di Stefano R, Cravero MC. 1991. Metodi per lo studio dei polifenoli nell'uva. Riv. Vitic. Enol., 2, 37-45.
3 Cugnetto A, Masoero G. 2021. Colored anti-hail nets modify the ripening parameters of Nebbiolo (Vitis vinifera L.) and a smart NIRS can predict the polyphenol features. JAR, 4(1), 24-45. https://openaccesspub.org/jar/article/1701
4 Peiretti PG, Masoero G, Tassone S. 2017. Comparison of the nutritive value and fatty acid profile of the green pruning residues of six grapevine (Vitis vinifera L.) cultivars. Livestock Research for Rural Development, 29, Article #194. http://www.lrrd.org/lrrd29/10/pier29194.html

AIT Contribution
Sala Biblioteca @ PoliBa
17:45
17:45
15min
Evaluation of the use of data from MODIS-Terra and Sentinel-2 to analyze the epidemic impact of Citrus Tristeza Virus in infected areas of the Apulia region
Stefania Gualano, Hamza Mghari, Antonio Novelli, Anna Maria D’Onghia, Biagio Di Terlizzi, Franco Santoro

Monitoring and surveillance of plant pathogens and pests are essential steps in integrated pest management. However, conventional field monitoring of diseases and pests is time-consuming, labor-intensive, and generally not very effective. Remote sensing (RS) techniques could play a crucial role in large-scale monitoring of plant diseases and pests. Citrus Tristeza Virus (CTV) is the most important citrus virus globally. In 2002, two outbreaks of the virus were reported in the Apulia region, Italy.
To examine the epidemic effects of the virus in the Apulia region, a remote sensing approach based on time series analysis was applied. Time series of the Normalized Difference Vegetation Index (NDVI) obtained from MODIS-Terra satellite and Sentinel-2 multispectral imagery were used to study fluctuations related to the infection in nine citrus fields monitored by MODIS-Terra for a period of 21 years and in four citrus orchards observed with Sentinel-2 for a period of 5 years.
Phenological parameters were extracted from the MODIS-Terra and Sentinel-2 NDVI by applying asymmetric curve-fitting methods. Subsequently, their evolution was analyzed using linear regression. The evaluated approach demonstrated high potential for monitoring infected citrus fields.
MODIS-Terra, with its medium spatial resolution, allowed the assessment of the infection's temporal trend over a long period and the identification of long-term trends. Sentinel-2, thanks to its high spatial resolution, enabled the monitoring of small orchards and greater precision in the spatial analysis of infected areas.
Statistical analysis revealed a correlation between the incidence of infection and the trends of seasonal parameters, particularly the peak value, seasonal amplitude, and seasonal integral in the summer period. These results suggest that integrating MODIS-Terra and Sentinel-2 data could constitute an effective strategy for monitoring and assessing the effects of CTV in the Apulia region.
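The seasonal parameters analyzed above (peak value, seasonal amplitude, seasonal integral) can be illustrated with simplified definitions; note that this sketch uses a toy NDVI curve and does not reproduce the asymmetric curve fitting used in the study:

```python
# Sketch with simplified metric definitions (assumption, not the study's
# curve-fitting method): peak value, seasonal amplitude and seasonal
# integral from a yearly NDVI time series.
import numpy as np

doy = np.arange(0, 365, 16)                          # 16-day composites
ndvi = 0.3 + 0.4 * np.exp(-((doy - 180) / 60) ** 2)  # toy seasonal curve

peak = ndvi.max()
base = ndvi.min()
amplitude = peak - base
# Seasonal integral: area between the curve and its base level
# (rectangle rule with the 16-day compositing step).
integral = np.sum(ndvi - base) * 16
print(peak, amplitude, integral)
```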

AIT Contribution
Sala Biblioteca @ PoliBa
17:45
15min
PRISMA Operational Activity Description
Francesco Nirchio, N. Lombardi, G. Viavattene, A. Cenci, P. Tempesta, V. Ferri, L. Agrimano, D. Iacovone, I. Corradino, L. Chiarantini, F. Sarti

PRISMA Operational Activity Description
F. Nirchio1, N. Lombardi1, G. Viavattene1, A. Cenci2, P. Tempesta2, V. Ferri2, L. Agrimano3, D. Iacovone4, I. Corradino5, L. Chiarantini6, F. Sarti6
1 Agenzia Spaziale Italiana (ASI), Contrada Terlecchia, 75100, Matera (MT)
2 Telespazio S.p.A., Via Tiburtina 965, 00156, Roma (RM)
3 Planetek Italia s.r.l., Via Massaua 12, 70132, Bari (BA)
4 e-GEOS S.p.A., Contrada Terlecchia, 75100, Matera (MT)
5 OHB Italia S.p.A., Via Gallarate, 150, 20151, Milano (MI)
6 Leonardo SpA, Via delle Officine Galileo, 1, 50013 Campi Bisenzio, Firenze (FI)

The objective of this presentation is to give an inside look at PRISMA operational activities, first summarizing the Mission's functionality, then describing the main activities carried out by the operational team, and finally providing some statistical data on the mission.
PRISMA (PRecursore IperSpettrale della Missione Applicativa) is a medium-resolution hyperspectral (HYP) and high-resolution panchromatic (PAN) imaging satellite fully funded by ASI and realized by an Italian industrial consortium led by OHB Italia, Leonardo and Telespazio. Launched in March 2019, PRISMA is devoted to push-broom imaging of land, vegetation, waters and coastal zones for a planned operational period of 5 years.
The HYP module has a spatial resolution of 30 m, a spectral resolution of 12 nm and operates in the VNIR (400-1010 nm) and in the SWIR (920-2505 nm) with a swath width of 30 km. The PAN module has a spatial resolution up to 5 m and operates in the visible spectral range of 400-700 nm. The Sun Synchronous Orbit at 615 km of altitude and the orbital period of 97 minutes allow a repeat cycle of 29 days.
It has been operational since February 2020. Along with the space segment, the mission comprises the User Ground Segment, located at the ASI Matera Space Centre, and the Mission Control Centre, located at the Telespazio Fucino Space Centre.
To ensure the Mission's performance, some routine operations are carried out. First of all, the maintenance of the orbit and of the ground track, assured by the Space Control Centre and the Flight Dynamics System (FDS); these functions are also involved any time a collision avoidance manoeuvre is requested. Satellite pointing accuracy, product geolocation and radiometric accuracy are regularly checked over selected ground sites.
In order to evaluate the PRISMA health status, all the telemetry data are regularly collected and evaluated by the mission specialists and, every four months, by a dedicated mission board.
During PRISMA's operational life, any time a non-conforming behaviour is detected or a component failure is found, a Non-Conformance Report (NCR) is issued. A troubleshooting activity then starts, involving the Support Engineering Team (SET), which examines every component of the system suspected to be involved in order to determine the causes and implement the corrective actions. The fixing activities and results are evaluated by a Non-Conformance Review Board (NRB).
Any registered user can place orders on the PRISMA web portal, both for new acquisitions and for processing archive data, in order to receive a desired product from the archived images. The user can obtain Level 1 products, corresponding to the Top Of Atmosphere (TOA) radiometrically and geometrically calibrated HYP and PAN radiance images, or Level 2 products, corresponding to Bottom Of Atmosphere (BOA) geolocated (L2B, L2C) and geocoded (L2D) atmospherically corrected HYP and PAN images. In case of need, the user can receive support from a help desk that can be contacted by e-mail.

AIT Contribution
Sala Videoconferenza @ PoliBa
18:00
18:00
15min
Assessing transferability of Gaussian Process Regression for Canopy Chlorophyll Content and Leaf Area Index estimation from Sentinel-2 data exploiting a multi-site, year and crop dataset
Mirco Boschetti, Carla Cesaraccio, Beniamino Gioli

Authors: Margherita De Peppo, Francesco Nutini, Alberto Crema, Gabriele Candiani, Giovanni Antonio Re, Federico Sanna, Carla Cesaraccio, Beniamino Gioli, Mirco Boschetti

Spatio-temporal estimation of crop bio-parameters (BioPar) is required for agroecosystem management and monitoring. BioPar such as Canopy Chlorophyll Content (CCC) and Leaf Area Index (LAI) contribute to assessing plant physiological status and health at leaf and canopy level. Remote sensing provides an effective way to retrieve CCC and LAI in a spatially explicit manner at different spatial and temporal scales. Several studies have demonstrated how Machine Learning (ML) techniques outperform traditional empirical approaches based on Vegetation Indices for BioPar estimation from RS data. Among the available algorithms, Gaussian Process Regression (GPR) is considered promising for LAI and CCC mapping. However, few studies have examined the performance of GPR in predicting crop parameters when applied to different sites, seasons and crop typologies (i.e. validation on an independent dataset). The specific objectives of this study, conducted in the framework of the E-CROPS project, were to: (i) develop a transferable GPR algorithm for LAI and CCC estimation by exploiting a robust multi-crop, multi-year and multi-site dataset; (ii) assess GPR BioPar retrieval performance against ground measurements acquired over an independent dataset; (iii) compare results with other methods, including empirically based VI models and the operational product embedded in SNAP. In total, 209 (CCC) and 301 (LAI) observations were used to train the GPR models, which were then validated on the unseen dataset (LAI n=820 and CCC n=305). The results showed that, for both LAI and CCC, GPR retrievals are reliable and comparable with SNAP estimates, although CCC shows a consistent underestimation. LAI (CCC) estimation metrics range across the different datasets as follows: R2 from 0.2 to 0.75 (0.2-0.7) and MAE from 0.1 to 0.75 (0.5-3).
Overall, the results demonstrated the potential of the GPR machine learning approach for LAI and CCC estimation when a robust training set is exploited, a condition that guarantees the spatial-temporal transferability of the developed model. GPR BioPar estimation from Sentinel-2 can produce decametric, quasi-weekly quantitative information for crop spatio-temporal monitoring. Such maps are a fundamental input for decision support systems devoted to smart crop management and early warning. Many precision agriculture techniques could thus benefit from information generated with ideal quality and frequency for site-specific practices aimed at reducing inputs and improving the use-efficiency of fertilizers.
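A minimal GPR retrieval in the spirit of the approach above can be sketched with scikit-learn; the band layout, kernel choice and synthetic data are assumptions for illustration, not the study's configuration:

```python
# Illustrative sketch (synthetic data, assumed kernel): Gaussian Process
# Regression of LAI on Sentinel-2-like band reflectances.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n = 150
X = rng.uniform(0.0, 0.5, size=(n, 4))    # 4 hypothetical band reflectances
# Toy LAI: driven by a NIR-like band (col 3) minus a red-like band (col 1)
y = 6.0 * (X[:, 3] - X[:, 1]) + 3.0 + 0.1 * rng.normal(size=n)

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X[:100], y[:100])
# GPR returns both a mean prediction and a per-sample uncertainty
pred, std = gpr.predict(X[100:], return_std=True)
r2 = gpr.score(X[100:], y[100:])
print(f"R2 on unseen samples: {r2:.2f}")
```

The per-sample standard deviation returned by GPR is one reason the method is attractive for operational BioPar mapping: it flags pixels where the retrieval extrapolates beyond the training set.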

AIT Contribution
Sala Biblioteca @ PoliBa
18:00
15min
PRISMA, Launched Four Years Ago: Enabling Scientific Studies on Aquatic Ecosystems
Claudia Giardino

This study provides an overview of the main findings achieved by exploiting the hyperspectral products provided by PRISMA in the visible and near-infrared wavelength range (VNIR) for aquatic ecosystem mapping. To this aim, the quality of PRISMA L2 products, distributed by ASI and already atmospherically corrected, is assessed on the basis of corresponding in-situ measurements at twenty inland and coastal water sites representing a wide range of optical water properties. For a subset of sites where L2 products showed low accuracies, the results provided by different atmospheric correction codes (e.g., ACOLITE) are added. The results show that the PRISMA L2 products are sub-optimal for estimating water quality parameters, apart from very turbid or clear-shallow waters, while ACOLITE is generally more accurate in reproducing the spectral shape of in-situ hyperspectral data. A series of use cases is then presented to demonstrate the performance of a pre-defined series of algorithms (i.e. bio-optical model inversion, band ratios, machine learning) for deriving bio-physical parameters in optically complex waters from PRISMA. The retrieval of water quality parameters is performed for a variety of water types corresponding to lakes of different trophic status, coastal waters with significant depth profiles, and ecosystems characterized by different hydrogeochemical and ecological processes. The presented use cases include the simultaneous retrieval of phytoplankton pigments (e.g., concentrations of chlorophyll-a and phycocyanin), total suspended matter with separation of organic and inorganic fractions, yellow substances, and the mapping of the fractional cover of bottom types as well as of emergent macrophyte biomass. For some of these parameters, the synergy of PRISMA with operational multispectral sensors (e.g., Sentinel-2) is presented, and an outlook for advancing the estimation of water quality parameters with PRISMA is finally discussed.

AIT Contribution
Sala Videoconferenza @ PoliBa
18:30
18:30
60min
Poster session
Sala Videoconferenza @ PoliBa
18:30
60min
Poster session
Sala Biblioteca @ PoliBa
08:30
08:30
30min
Registration / registrazione partecipanti
Sala Videoconferenza @ PoliBa
08:30
30min
Registration / registrazione partecipanti
Sala Biblioteca @ PoliBa
09:00
09:00
240min
THE DOWNSTREAM SECTOR IN ITALY: ASI’s ROLE and PROGRAMMES in the NATIONAL ECOSYSTEM
Maria Libera Battagliere, Luigi D'Amato, Laura Candela

In the era of the “space society”, satellite services represent a key element that has to be valued and promoted by institutions at local and global level. Earth Observation (EO), Navigation (NAV) and Telecommunication (TLC) satellite services represent the space sectors with the most relevant growth, not only for the level of maturity, quality and quantity of the existing operational infrastructures, but mostly for the potential of wide diffusion for applications and connected services (known as downstream). These applications and services are suitable to ensure a sustainable economic development, fostering a significant progress in several domains and generating benefits for citizens and society. For this reason, they have been recognized as top strategic priorities of the Italian space policies (Resolution of the President of the Council, March 2019).
Considering that the Italian Space Agency (ASI) is a governmental organization with the responsibility of promoting space technology for the development of the Country, ASI has set up a programme called Innovation for Downstream Preparation (I4DP), articulated in three intervention lines to better address the needs of three communities: public institutional entities, science, and commercial operators. The main focus of this initiative is to stimulate the growth of the downstream sector, offering concrete support to set up innovative and powerful space solutions for emerging demand and, at the same time, consolidating and enriching the existing national know-how at both scientific and industrial level.
The implementation of the programme is based on periodic thematic calls for each category of target users, as mentioned above (PA, i.e. Public Administrations; SCIENCE, i.e. the Scientific Community; MARKET, i.e. Economic Operators).
The first cycle of the programme was initiated in 2021 with the issue of 3 calls focused on the following topics: Effects of climate change and extreme events (I4DP_PA); Sustainable Cities (I4DP_SCIENCE); and Management and monitoring of the stability of infrastructures and/or critical infrastructures, also in relation to landscape conservation, and Precision Farming (I4DP_MARKET). All the calls were successfully closed in 2022 with the selection of about 20 innovative projects.
I4DP_PA aims to promote demonstrations and pre-operational developments of innovative, complex service value chains responding to a well-defined institutional need (e.g. related to activities that the involved PA has to perform by law), in order to prepare new-generation downstream services useful to the institutions responsible for territorial governance, civil protection and economic resource management, while promoting the full exploitation of national and European space systems, operational or under development. The final objective of the I4DP_PA calls is to promote the active involvement of Public Administrations as end users of the services and, at the same time, to accelerate scientific and technological developments, as well as the experimentation of new EO-based (pre-)operational procedures. This approach highlights actual operational gaps and can thus help prepare and support other national and European investments in infrastructures, to better enable the realization of the services themselves.
I4DP_SCIENCE aims to promote the demonstrative development of innovative value-added services based on the use of EO, SATNAV, SATCOM systems in order to prepare new generation downstream services and promote the full use of national and European space systems, operational or under development.
The I4DP_MARKET calls aim at supporting the development of innovative projects with a high starting TRL, in order to promote the commercial exploitation of services and products based on innovative data processing, analysis and integration techniques. This initiative is also aimed at guaranteeing a constant increase in the national technological capacity of the downstream sector, allowing the participation of SMEs, startups and university spin-offs in the selection procedure. In the long term, these calls will consolidate the Italian entrepreneurial fabric in the exploitation of the services and data provided by current and future satellite infrastructures, in synergy with terrestrial ones.

The proposed workshop aims to provide a complete picture of ASI's ongoing I4DP activities and their future perspectives, highlighting their effects in support of the Italian communities along the whole space service value chain and of the economic downstream sector, with a focus on selected projects from the first cycle of the I4DP Programme.

Workshop
Aula 1 @ UniBa
09:00
105min
AUTeC Round Table: Geospatial Data from Education to Use
Donatella Dominici

Round Table speakers:
Maurizio Ambrosiano - Agenzia delle Entrate
Antonio Rotundo - Agenzia Italia Digitale
Gabriele Mascetti - Agenzia Spaziale Italiana (da remoto)
Francesco Tocci - Istituto Idrografico Militare
Riccardo Barzaghi - Membro della Giunta AUTeC
Roberto Devoti - Istituto Nazionale di Geofisica e Vulcanologia
Angelo Iorio - Tecne - Gruppo Autostrade per l'Italia
Umberto Trivelloni - Coordinatore del Gruppo di Lavoro "Cartografia" presso Conferenza delle Regioni e Province Autonome

The scientific and disciplinary contents concern the acquisition, restitution, analysis and management of metric or thematic data relating to the Earth's surface, or portions of it, including the urban environment, infrastructure and the architectural heritage, identified by their spatial position and qualified by the accuracy of the survey.

The disciplines within the sector are geodesy (physical, geometric and space), surveying, photogrammetry (aerial and terrestrial), cartography, remote sensing (satellite, aerial and terrestrial), navigation (space, air, maritime and land) and spatial information systems.

The fields of application concern, in particular, the study of global and local reference systems; instruments and methods for surveying, checking and monitoring the territory, structures and cultural heritage; the processing of measurement data; the production and updating of cartography and topographic databases; the setting-out of works and infrastructure; mobile mapping systems; digital terrain and surface models; and the management and sharing of multidimensional and multi-temporal geographic information.

AUTeC session / Sessione AUTeC
Sala Videoconferenza @ PoliBa
10:45
10:45
30min
Coffee break
Sala Videoconferenza @ PoliBa
11:15
11:15
105min
AUTeC Session: Teaching and Research in Earth Observation
Bianca Federici

Speakers:
Mattia Crespi - Dottorato Osservazione della Terra
Andrea Taramelli - Copernicus Academy
Maurizio Savoncelli - Consiglio Nazionale Geometri e Geometri Laureati (da remoto)
Valerio Baiocchi - Membro della Giunta AUTeC
Domenico Sguerso - Società Italiana di Fotogrammetria e Topografia
Enrico Borgogno Mondino - Associazione Italiana di Telerilevamento
Paolo Dabove - Associazione GFOSS.it APS

The scientific and disciplinary contents concern the acquisition, restitution, analysis and management of metric or thematic data relating to the Earth's surface, or portions of it, including the urban environment, infrastructure and the architectural heritage, identified by their spatial position and qualified by the accuracy of the survey.

The disciplines within the sector are geodesy (physical, geometric and space), surveying, photogrammetry (aerial and terrestrial), cartography, remote sensing (satellite, aerial and terrestrial), navigation (space, air, maritime and land) and spatial information systems.

The fields of application concern, in particular, the study of global and local reference systems; instruments and methods for surveying, checking and monitoring the territory, structures and cultural heritage; the processing of measurement data; the production and updating of cartography and topographic databases; the setting-out of works and infrastructure; mobile mapping systems; digital terrain and surface models; and the management and sharing of multidimensional and multi-temporal geographic information.

AUTeC session / Sessione AUTeC
Sala Videoconferenza @ PoliBa
13:00
13:00
90min
Lunch / Pranzo
Sala Videoconferenza @ PoliBa
13:00
90min
Lunch / Pranzo
Sala Biblioteca @ PoliBa
14:30
14:30
15min
EO-Learning: Free online courses on Earth Observation
Francesca Albanese

EO-Learning is the e-learning platform with free courses and resources on Earth Observation launched by Planetek Italia in December 2021. A new opportunity for students and professionals in private and public entities to learn and stay up to date on technologies, methodologies and applications of satellite Earth Observation.

EO-Learning offers courses in both English and Italian languages, ranging from the very basics of remote sensing up to the more complex techniques of satellite data processing and derived applications.
Once enrolled for free in the platform, users can autonomously access open courses.
These are organized in several short lessons, so that users can attend and complete the course at different times, or easily focus only on specific contents while ignoring others. Lessons are also designed to be more suitable and engaging for non-experts in the EO field: users can interact with objects, browse the lesson or answer simple questions, and lessons are narrated by professional speakers.
In addition to courses, a friendly dashboard allows users to keep track of their progress and to check their results.

EO-Learning also offers Premium courses, dedicated to public or private organizations aiming to provide certified training courses for their members/employees. These restricted-access courses come with additional learning tools, such as progress tracking, a series of intermediate and final evaluation tests, and certification of course completion.
Premium courses can also be tailored to users' needs, with the possibility of course reporting and dedicated scientific support through user forums.

EO-Learning can also host public or private custom courses, designed and produced together with commercial and scientific partners and dedicated to specific training activities/projects in the Earth Observation field.

AIT Contribution
Sala Videoconferenza @ PoliBa
14:30
15min
Exploitation of Multi-Temporal InSAR data for Environmental Risk Assessment Services
Davide Oscar Nitti, Alberto Morea, Khalid Tijani, Nicolò Ricciardi, Fabio Bovenga, Raffaele Nutricato

Multi-temporal SAR Interferometry (MTInSAR) techniques allow detecting and monitoring millimetric displacements occurring on selected point targets that exhibit coherent radar backscattering properties over time. Successful applications to different geophysical phenomena have been already demonstrated in literature. New application opportunities have emerged in the last years thanks to the greater data availability offered by recent launches of radar satellites, and the improved capabilities of the new space radar sensors in terms of both resolution and revisit time. Currently, different space-borne Synthetic Aperture Radar (SAR) missions are operational, e.g. the Italian COSMO-SkyMed (CSK) constellation and the Copernicus Sentinel-1 (S1) mission.

Each CSK satellite is equipped with an X-band SAR sensor that acquires data with high spatial resolution (3x3 m2), thus leading to a very high spatial density of the measurable targets and allowing the monitoring of very local scale events. Thanks to the nationwide acquisition plan “MapItaly”, CSK constellation covers the Italian territory with a best effort revisit time of 16 days since 2010.

The S1 mission has instead been operational since 2014 and acquires in C-band at medium resolution (5x20 m2) with a minimum revisit time of 12 days (only 6 days between 2016 and 2021, when the full S1 constellation was operational), making it possible to monitor ground instabilities back in time almost all over the Earth. Moreover, all data acquired by the S1 mission are provided on an open and free basis by the European Space Agency (ESA) and the European Commission (EC) to promote the full utilization of S1 data, with the aim of increasing scientific research, growing the EO markets and fostering the development of continuous monitoring services, such as the European Ground Motion Service (EGMS) and the Rheticus® Displacement Geo-information Service.

The EGMS is based on the MTInSAR analysis of S1 radar images at full resolution, updated annually, and provides consistent and reliable information regarding natural and anthropogenic ground motion over the Copernicus Participating States and across national borders.

Rheticus® offers monthly updates of the millimetric displacements of the ground surface, through an MTInSAR processing chain based on the SPINUA© algorithm ("Stable Point Interferometry even in Un-urbanized Areas"). Rheticus® is capable of processing SAR images acquired by different SAR missions, including CSK and S1. Thanks to their technological maturity as well as the wide availability of SAR data, these ground motion services can be used to support systems devoted to environmental monitoring and risk management. This work shows the results obtained in the framework of the SeVaRA project ("Environmental Risk Assessment Service"), coordinated by Omnitech srl. The goal of SeVaRA is to implement an innovative system for calculating an aggregate environmental risk index, derived from several parameters related to hydrogeological instability phenomena and/or weather-related extreme events. In particular, the present work focuses on the analysis of the "Deformation Sub-System", which has been designed for the computation of risk indices related to structural and ground instabilities (landslides). The first step consists in the Hazard Map computation, which requires the following input data:

  • Susceptibility Map (i.e., the European Landslide Susceptibility Map, provided by the Joint Research Centre European Soil Data Centre)
  • National mosaic of landslide hazard zones, provided by ISPRA (River Basin Plans PAI)
  • Cumulated precipitations (derived by cumulating ground measurement data collected by weather stations, if available, or by interpolating hourly rainfall data provided by the Global Satellite Mapping of Precipitation service, GSMaP, offered by the JAXA Global Rainfall Watch)
  • Land Cover Change (i.e., the CORINE Land Cover inventory)
  • Seismic events inventory, provided by INGV, to account for earthquake-induced landslides
  • MTInSAR ground displacement time series.

The last input is essential for detecting unstable areas, whose MTInSAR displacement trends exhibit a significant velocity over the whole observation period and/or an acceleration in the acquisition dates of the last year. The SeVaRA “Deformation Sub-System” has been primarily designed to interface with the Rheticus® Displacement Service, but it also supports products offered by the EGMS service as well as by other MTInSAR services available on the EO market. The final step consists of the computation of the landslide risk index, obtained by combining the previous hazard index with the vulnerability and the exposure of the area of interest. The results of this study over specific areas of interest will be presented and discussed.
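
The final combination step can be sketched as follows. This is a minimal illustration, not the SeVaRA implementation: it assumes the classical multiplicative model, risk = hazard x vulnerability x exposure, with all factors normalized to [0, 1].

```python
# Hypothetical sketch of a risk-index combination (assumed multiplicative
# model; the actual SeVaRA aggregation rule is not specified here).

def risk_index(hazard: float, vulnerability: float, exposure: float) -> float:
    """Multiplicative landslide risk index; inputs assumed normalized to [0, 1]."""
    for name, v in (("hazard", hazard),
                    ("vulnerability", vulnerability),
                    ("exposure", exposure)):
        if not 0.0 <= v <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {v}")
    return hazard * vulnerability * exposure

# Example: moderate hazard, high vulnerability, full exposure
print(risk_index(0.5, 0.8, 1.0))  # 0.4
```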

Acknowledgments

Study carried out in the framework of the SeVaRA project, funded by Apulia Region (PO FESR 2014/2020).

AIT Contribution
Sala Biblioteca @ PoliBa
14:30
120min
High-speed mapping: the new Rapid map editor
Christopher Beddow

This workshop will be focused on teaching participants how to use the latest version of the Rapid map editor for OpenStreetMap. Over the course of the workshop, participants will gain valuable experience in mapping buildings, roads, and other critical geographic data that is relevant to their work or interests.

The workshop will begin with an introduction to Rapid's interface, including the open data catalog, validation of AI-generated data, and use of important hotkeys. Participants will learn how to navigate the Rapid interface and access Rapid via a Tasking Manager project. After an introduction to Rapid and its connections to Tasking Manager projects, participants will join a project focused on a local region in Italy.

The workshop will emphasize accurately mapping roads and buildings with correct OSM tags, from highways to neighborhood roads, apartment blocks to retail buildings, and ensuring the AI or open data geometry and tags are corrected as needed. The workshop features extensive hands-on experience, allowing participants to work with the Rapid map editor tools on individual tasks, with validation assistance from the workshop organizer.

By the end of the workshop, participants will have the confidence and expertise to use the new Rapid editor in their typical OpenStreetMap routines. They will be able to map buildings, roads, and other important data with precision and accuracy, and understand the features offered by Rapid.

The workshop is ideal for anyone who works with open data, edits OpenStreetMap, has an interest in AI and machine learning datasets, and who wants to help improve OpenStreetMap in areas with large amounts of missing data. No previous OpenStreetMap experience is required. It is recommended that participants bring a laptop computer, or share with 1-2 other people to collaboratively edit. Participants should also register for an OpenStreetMap account at https://openstreetmap.org prior to the workshop.

Workshop
Aula 4 @ UniBa
14:30
120min
QGIS in the field with Mergin Maps
Matteo Ghetta, Ulisse Cavallini

Mergin Maps is an Android and iOS application designed and built for field data collection. It is fully integrated with QGIS and, thanks to a cloud service, the collected data are synchronized easily and immediately to a central server.

The application is also designed for offline use in the more or less frequent cases where no network is available in the field.

By creating an account on the cloud service and using a QGIS plugin, it is very easy to create projects directly from the office and synchronize them with the devices.

Synchronized projects are archived and versioned in the cloud, so that the changes made can be reviewed and corrections applied after the survey. Synchronization is bidirectional: projects and data are synchronized from QGIS to the cloud and from the cloud to the device. A copy of the data in GeoPackage format is thus available on every device, ready to be extended with new data even without a connection. From the device, with a single tap, the data are synchronized back to the cloud, and the central server automatically handles any conflicts.

Mergin Maps automatically uses all the main features of a QGIS project: constraints and default values to guarantee correct data entry, widget customization, geotagged photographs, 1:N relations and much more.

Mergin Maps uses the same rendering engine as QGIS and, thanks to this, fully respects the styles set in the project, including conditional ones. It also supports a multitude of formats, including vector and raster tiles, online rasters, direct connections to PostgreSQL databases, GeoPackage and Shapefiles.

Projects are created in QGIS, offering the same interface, the same capabilities, and the possibility of reusing existing knowledge.

Workshop
Aula 1 @ UniBa
14:45
14:45
15min
GeoImagery: from the Copernicus Hub to geo-business intelligence
Chiara Sammarco, Andrea Dilallo, Jonas Cinquini, Giovanni Sammarco

The Copernicus Hub, the web portal developed by the European Space Agency (ESA) in collaboration with the European Commission (EC), provides free and open access to the data and information collected by a network of satellites and ground sensors (Sentinel-1, -2, -3) within the Copernicus Earth observation programme.

This is a vast body of archived geospatial information that includes satellite imagery, climate data and environmental monitoring data.

The main goal of GeoImagery is to make these data available in a complete Spatial Data Infrastructure (SDI) and to turn them into information useful for geo-business-intelligence decisions, ad-hoc spatial analyses and territorial forecasting.

GeoImagery exclusively uses open-source software. In particular, the technologies employed include GeoNode, GeoServer, MapStore, PostGIS, Airflow and QGIS.

By exploiting the accessibility of the information coming from the Copernicus Hub, through pipelines that extract information from the ancillary data, the platform gives the user access to services that perform optimal mosaicking of the products and the extraction of indices, that use segmentation and classification to extract information from satellite imagery, and that run machine learning on time series.

Thanks to its architecture, GeoImagery can be adapted to very different application fields, such as infrastructure management, drone fleet management, precision agriculture, urban planning, marketing and emergency management, to name just a few.

In this talk we will present GeoImagery in more detail, covering its infrastructure and the functionalities already implemented, and giving concrete application examples.

AIT Contribution
Sala Videoconferenza @ PoliBa
14:45
15min
Study of the interaction of slow landslides with infrastructures based on remote sensing techniques
Davide Oscar Nitti, Giovanna D'Ambrosio, Raffaele Nutricato, Angelo Doglioni

Slow and very slow landslides are quite common in territories involved in orogenetic processes, such as Italy. These movements are not immediately evident, since displacements are often only a few millimetres per year, and they may go unnoticed.

Landslides are a common natural hazard that can cause significant damage to infrastructure, including bridges, tunnels, railways and buildings. In particular, slow landslides may have a long-term impact on bridges as they often occur over extended periods, and the resulting deformation can be difficult to detect. Remote sensing technologies have emerged as an effective tool for detecting slow landslides and monitoring their impact on bridges.

This work provides a comprehensive review of the interaction between slow and very slow landslides and bridges and their analysis using remote sensing techniques. First, the causes and types of landslides are discussed, with a focus on slow landslides and their impact on bridges. The various factors that contribute to slow landslides, including geology and geomorphology, are also presented.
We then introduce remote sensing technologies that have been used to detect ground displacement and monitor slow landslides, including satellite imagery and multi-temporal synthetic aperture radar interferometry. The use of remote sensing for analysing the impact of slow landslides on bridges is also examined.

Finally, the challenges and limitations of using remote sensing for analysing the interaction between slow landslides and bridges are discussed, including their spatial and temporal resolution, and the need for (i) ground truth data for calibration and validation and (ii) for interdisciplinary collaboration between engineers, geologists, and remote sensing experts.

The main findings of this study are presented, by highlighting the potential for remote sensing technologies to improve our understanding of the interaction between slow landslides and bridges.

Acknowledgements

This work is part of the project: “Analysis of the impacts on slow landslides based on remote sensing techniques”, granted by Apulian Regional Government, RIPARTI, project number 39786e0f.

AIT Contribution
Sala Biblioteca @ PoliBa
15:00
15:00
15min
Assessment of infrastructure deformation using EGMS-InSAR data and geo-environmental factors through machine learning: Railways and highways of Lombardy Region, Italy
Marco Scaioni, Rasoul Eskandari, Ziyang Wang

Linear infrastructures, characterized by a high level of systemic vulnerability [1,2], are subject to several environmental and geological hazards. In the context of risk assessment and management, monitoring these important assets plays a key role in establishing maintenance planning and preventive measures against disruptive phenomena, such as ground deformation due to natural and anthropogenic causes. In-situ and traditional infrastructure monitoring approaches, such as high-precision levelling measurements [3], are known to be costly and time-consuming. On the other hand, satellite Remote Sensing (RS) techniques, such as Synthetic Aperture Radar (SAR) Interferometry (InSAR), are recognized as promising tools for monitoring and condition assessment of infrastructures [4].
As an essential branch of the Copernicus Land Monitoring Service (CLMS), the new European Ground Motion Service (EGMS) is providing freely accessible ground deformation data spatially covering almost all European countries. The deformation time series associated with the data points are derived from InSAR processing of Sentinel-1 images from January 2016 to December 2021 [5,6].
In this study, an InSAR-derived deformation dataset, geo-environmental parameters, and Machine Learning (ML) techniques have been integrated to address the major causes of this complex phenomenon, specifically emphasizing railways and highways in the Lombardy region, Italy. The vertical displacement velocities (mm/year) of EGMS data points located in the neighborhood of these infrastructures are used as the input ground motion data. The conditioning factors considered in this work include elevation, slope angle, slope aspect, precipitation, curvature, solar radiation, and the Normalized Difference Vegetation Index (NDVI). The ML models used in this study include Decision Tree (DT), Linear Regression (LR), LightGBM (LG), XGBoost (XG), Random Forest (RF) and Extra Trees (ET). The train-test dataset ratio is set to 7:3, given the higher performance reported for this ratio [7].
First, the models have been validated using the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC). The results mostly show acceptable values (in the interval 0.7 to 0.8), confirming the applicability of the models. Then, a Relative Feature Importance (RFI) analysis is carried out to identify the significant factors causing the ground deformation. The Permutation-based and Shapley Additive Explanations (SHAP) importance results among the factors show that rainfall (precipitation) and elevation play the most important role in the occurrence of the ground deformation detected on the infrastructures, based on the methodology adopted in this study. The effect of solar radiation also cannot be neglected. A more detailed discussion of the results will be provided in the full version of this letter.
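
The AUC validation metric mentioned above can be computed directly from ranked scores. A minimal, library-free sketch (illustrative only, not the authors' code): AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, with ties counting 0.5.

```python
# Rank-based AUC sketch for binary labels
# (1 = deformation detected, 0 = stable; labels and scores are invented).

def roc_auc(labels, scores):
    """Area Under the ROC Curve via the pairwise-ranking definition."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive-negative pairs ranked correctly; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
print(roc_auc(y, s))  # 8/9
```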

AIT Contribution
Sala Biblioteca @ PoliBa
15:00
15min
Automatic analysis of detention camps in Xinjiang (PRC) using Nighttime Light remote sensing data
Andrea Ajmar, Edoardo Vassallo, Emere Arco

Global nighttime imaging data, such as Day/Night Band (DNB) VIIRS sensors, provide global daily measurements of visible light and night infrared. Nighttime Light (NTL) remote sensing products have a wide range of applications such as feature detection and monitoring, multitemporal analysis, and prediction of socio-economics and environmental variables.

This work presents a methodology based primarily on NTL data acquired by the VIIRS (Visible Infrared Imaging Radiometer Suite) sensor mounted on Suomi NPP (National Polar-orbiting Partnership) for monitoring the construction of Uyghur detention camps in the Xinjiang Uygur Autonomous Region of the People's Republic of China (PRC). This region is strategically important for the PRC, with three of the five economic corridors of the Belt & Road Initiative (BRI) crossing this administrative unit. Due to its history and culture strongly linked to the Sunni Islamic world and the independence movements rekindled after the dissolution of the Soviet Union, this area is particularly sensitive (and consequently, under special observation) for the Chinese central government. In December 2015, the National People's Congress passed an anti-terrorism law, which defined various aspects of the Uyghur lifestyle and culture as a security issue, framing them as terrorist and extremist.
Since 2014, the PRC has been constructing detention camps, responding to the first international accusations by denying their existence. Only later, when the existence of the camps was more firmly proven thanks to satellite images and other sources, did the Chinese government change its narrative, acknowledging their existence only as education camps, intended to help people find stable jobs and improve their lifestyles.
The methodology also exploits day optical images acquired by sensors mounted on Sentinel-2 satellites and data produced by the Xinjiang Data Project that monitors the human rights situation for Uyghurs and other non-Han nationalities in Xinjiang.

Historical series of NTL radiance data have been generated over localities identified as mass internment camps in a fully automated processing chain based on Google Earth Engine APIs and developed within a Jupyter Notebook, also employing open-source modules.
The procedure works with three major steps:
a) extracts VIIRS nighttime lights data from the Google Earth Engine database for a list of provided locations and within a user-defined time frame, storing it efficiently;
b) calculates statistics over the radiance values and generates charts displaying the historical trends of the calculated statistical parameters;
c) performs a clustering of the historical series based on Dynamic Time Warping (DTW) and K-Means techniques.
The script has been released and is available on a dedicated GitHub page.
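
The distance metric in step (c) can be illustrated with a pure-Python Dynamic Time Warping (DTW) sketch, which is the measure used before K-Means groups the series. This is an illustration only, not the released GitHub script.

```python
# Classic DTW distance between two radiance time series,
# with absolute-difference local cost (dynamic-programming formulation).

def dtw(a, b):
    """DTW distance: minimum cumulative cost of aligning series a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessor paths.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two series with the same shape but shifted in time align at zero cost:
print(dtw([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]))  # 0.0
```

This time-shift invariance is exactly why DTW-based clustering groups camps with similar construction and operation phases even when those phases start in different months.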

As a result of the procedure, the 380 camps have been grouped into 10 clusters, highlighting patterns that can be linked to different phases: construction, operation, enlargement, decommissioning, etc. The interpretation of the clusters has later been validated through the visual interpretation of sample Sentinel-2 images and by exploring the relationship between the radiance values and the historical record of the number of buildings within each camp reported in the Xinjiang Data Project dataset.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:15
15:15
15min
Real-Time Oil Spill Detection by Using SAR-Based Machine Learning Techniques
Davide Oscar Nitti, Alberto Morea, Khalid Tijani, Nicolò Ricciardi, Raffaele Nutricato

This study presents a novel approach to monitoring oil spills and ships using Synthetic Aperture Radar (SAR) raw data and deep learning techniques. The proposed methodology involves several steps including pre-processing (focusing, filtering and land-sea masking), semantic segmentation, and classification using a deep convolutional neural network (DCNN) model, as well as real-time (FFT-based) processing to ensure a fast response.

To train the DCNN model, the study combined three datasets: CleanSeaNet, TenGeoP-SARwv, and GAP_OilSpill_DB. The first two datasets are publicly available, while the third dataset was specifically built by the authors by integrating known and documented case studies from news articles and cases identified in the sea area in front of the port of Brindisi (Southern Italy), internally validated by expert GAP operators.

Data augmentation techniques were also utilized to improve the model's performance by generating additional training data. The DCNN model uses DeepLab v3+ based on ResNet-18 and is trained on a large dataset of SAR images that includes various types of oil spills, look-alikes, novelty objects, and ships.

The proposed system is optimized to process data on board the satellite to ensure a real-time response. The system transmits images to the ground segment only if there is an event of interest (e.g. a novelty object or an oil spill, possibly involving the nearest ships).

The study demonstrates that the proposed approach provides a promising solution for real-time monitoring of oil spills, ships and novelty objects using satellite SAR raw data. The use of deep learning and data augmentation techniques can significantly improve the accuracy and speed of detection, which can ultimately lead to better environmental management and oil spill response. Additionally, the proposed approach can be applied to a variety of SAR datasets and has the potential to be integrated with existing oil spill response systems.

Acknowledgments 

This work was carried out in the framework of the APP4AD project (“Advanced Payload data Processing for Autonomy & Decision”, Bando ASI “Tecnologie Abilitanti Trasversali”, Codice Unico di Progetto F95F21000020005), funded by the Italian Space Agency (ASI). ERS, ENVISAT and Sentinel-1 data are provided by the European Space Agency (ESA).

AIT Contribution
Sala Biblioteca @ PoliBa
15:15
15min
The use of open-source machine learning techniques for urban features extraction
Paolo Dabove

This research aimed to identify important urban features for sustainable development in the urban landscape of Turin, Italy, using machine learning techniques. Specifically, the study sought to identify physical and social elements such as buildings, roads, vegetation, and open land. The goal was to contribute to more sustainable urban environments.
The study employed the open-source platform QGIS and Orfeo Toolbox (OTB), a software library for processing images from Earth observation satellites. OTB offers various algorithms, including filtering, feature extraction, segmentation, and classification. The primary dataset used for classification consisted of orthophotos with 3 RGB bands at a resolution of 25 cm.
A challenge was encountered when classifying pavement and flat roofs, prevalent features in modern urban areas that exhibit similar radiometric content in the spectral domain. Flat roofs play a significant role within sustainable urban environments, as they can be used to install green roofs, improving energy efficiency and reducing the urban heat island effect. Additionally, in Italy, where most old roofs are typically made of “terracotta” tiles, flat roofs are a relatively new feature in the urban landscape. Identifying flat roofs can therefore help monitor changes in urban morphology and land use over time.
To address this challenge, a 4th band, a DEM (digital elevation model) with a ground sampling distance of 50 cm/pix, was added. Its main purpose was to create an integrated dataset providing information on terrain elevation. This helped distinguish pavement from flat roofs based on their height difference. Adding the DEM as a 4th band increased the dimensionality and complexity of the data, as each pixel is now described by four inputs, RGB and DEM. The random forest algorithm in OTB, a machine-learning algorithm that combines multiple decision trees to create a robust classifier, was then applied using pixel-based classification.
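
The band-stacking step can be sketched with NumPy as follows. This is an assumed workflow for illustration, not the actual OTB pipeline; the array sizes and value ranges are invented.

```python
# Hypothetical sketch: stacking a co-registered DEM as a 4th band onto an
# RGB orthophoto, so each pixel becomes a (R, G, B, elevation) feature
# vector for a pixel-based classifier such as random forest.
import numpy as np

rgb = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)  # 25 cm RGB tile
dem = np.random.uniform(200.0, 260.0, size=(100, 100))               # elevation in metres

# Cast both to a common dtype and stack along the band axis.
stacked = np.dstack([rgb.astype(np.float32), dem.astype(np.float32)])

# Flatten to one 4-feature sample per pixel, the shape classifiers expect.
pixels = stacked.reshape(-1, 4)
print(stacked.shape, pixels.shape)  # (100, 100, 4) (10000, 4)
```

The height feature is what separates two spectrally similar classes: a pavement pixel and a flat-roof pixel may have near-identical RGB values but differ by several metres in the 4th component.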
Five classes were generated for analysis using the unsupervised k-means algorithm from OTB: buildings, flat roofs, roads, vegetation, and open land. These classes represent the most common urban features of the study area, a linear concentration of urban settlements along major transportation routes. The random forest algorithm was then trained on these classes using a subset of the integrated dataset as training data. The trained model was used to classify the rest of the dataset, resulting in the final classification map.
Applying the random forest algorithm on the integrated dataset significantly improved accuracy, increasing the overall classification accuracy from 0.83 to 0.90. Notably, the accuracy for the road class rose from 0.796 to 0.944, while that for the flat roof class improved from 0.598 to 0.773. These results provide strong evidence for the effectiveness of using open-source platforms and tools like OTB to identify urban features sustainably. Furthermore, adding more bands, such as the DEM, can enhance the potential of these methods for creating more accurate and detailed maps of urban environments.
This study departs from traditional land cover and land use classification methods that rely on pixel-based classification using only spectral information. Pixel-based classification assigns a single class to each pixel based on its spectral signature, which may not fully capture urban features' spatial variability and heterogeneity. Additionally, discriminating between similar characteristics like pavement and flat roofs requires more than just spectral information.
It is worth noting that this study focused solely on identifying urban features, including buildings, flat roofs, roads, vegetation, and open land. However, suppose the goal is to identify a specific feature, such as only roofs or roads. In that case, the inclusion of irrelevant features in the dataset may result in redundant data and decrease the overall accuracy of the classification. Therefore, future studies may need to explore more advanced algorithms, such as convolutional neural networks, to improve the accuracy and efficiency of identifying specific urban features.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:30
15:30
15min
Application of Remote Sensing to Repeat Photography to Analyze Landscape Change
Peter Ernest Wigand, Somayeh Zahabnazouri

In the 1850s landscape photography proliferated in the American West as a means of recording “then and now” views of the same landscape after some interval of time. Many photos were casual, usually taken from the same viewpoint but without regard to season or the exactness of the scene being photographed. Some are very precise and involve a careful study of the original image. These photos have provided a unique database that has been exploited by scientists from both universities and US environmental agencies to track first the impact of people upon the landscape and later that of climate change. Pioneered in the US state of Arizona, this technique provided a unique opportunity to record changes in vegetation cover due to climate change and human impact, and to document ongoing surficial processes due to both.

Remote sensing, when combined with repeat photography, provides a unique opportunity to study these changes in much greater detail, especially when precise measurements are required. Landsat images can be precisely positioned for comparison with past photos. This allows both the nature of changes and their rates to be precisely measured and applied in models of environmental change.

We are combining historic photos from both southern Italy and the American West with Landsat images to study changes in the two areas during the last 140 years. In particular, we are focusing on changes in vegetation cover due to climate change and to human activity. We are also investigating the relationship between vegetation cover destruction and erosion. We will use our findings, relating them to specific changes in climate, to determine which conditions have been the most destructive with regard to both annual rainfall amount and changes in rainfall seasonality.

AIT Contribution
Sala Videoconferenza @ PoliBa
15:30
15min
Probabilistic approach to the mapping of flooded areas through the analysis of historical time series of SAR intensity and coherence
Giacomo Caporusso, Davide Oscar Nitti, Fabio Bovenga, Raffaele Nutricato, Alberto Refice, Domenico Capolongo, Rosa Colacicco, Francesco P. Lovergine, Annarita D’Addabbo

Giacomo Caporusso(1), Alberto Refice(1), Domenico Capolongo(2), Rosa Colacicco(2), Raffaele Nutricato(3), Davide Oscar Nitti(3), Francesco P. Lovergine(1), Fabio Bovenga(1), Annarita D’Addabbo(1)
1 IREA-CNR – Bari, Italy
2 Earth and Geoenvironmental Sciences Dept., University of Bari, Italy
3 GAP srl, Bari, Italy

As part of the analysis of flood events, ongoing studies aim to identify methods for using optical and SAR data in order to map ever more precisely the areas flooded during an event. At the same time, institutions responsible for territorial safety have a concrete need both for monitoring tools capable of describing susceptibility to flooding and for forecasting tools for events with a fixed return time, consistent with the hazard and risk approaches defined, for example, at the European or national regulatory level.
As far as flood hazards are concerned, hydraulic modeling is currently the most widely used reference for responding to forecasting needs, while the concrete value of remote sensing support emerges in the monitoring context, given the possibility of examining historical series of images referring to any portion of the territory.
A statistical approach to the analysis of historical series of satellite images can take into consideration the study of the probability connected to the presence/absence of water in the area, through the analysis of specific indices derived from multi- and hyperspectral optical images (NDVI, NDWI, LSWI) and/or intensity, coherence and radar indices derived from SAR images. In particular, for the study of time series of the variables considered, algorithmic approaches of a probabilistic nature are suitable, such as the Bayesian model and the Theory of Extreme Values.
The objective of this work is the assessment of a methodology to return the historical series of the probability of flooding, as well as the corresponding maps, relating to a test area.
In this context we present some results related to the study of an agricultural area near the city of Vercelli (Northern Italy), characterized by the presence of widespread rice fields and affected by a major flood of the Sesia river in October 2020.
Sentinel-1 SAR images were considered, from which the intensity and interferometric coherence variables can be derived. The hydrogeomorphological support consists of slope, Height Above the Nearest Drainage (HAND), and land cover maps. Through the Copernicus Emergency Management Service, the flood maps relating to the 2020 event were acquired to validate the results.
Regarding the methodology, the probabilistic modeling of the InSAR intensity and coherence time stacks is cast in a Bayesian framework. It is assumed that floods are temporally impulsive events lasting a single, or a few consecutive, acquisitions. The Bayesian framework also allows ancillary information to be considered, such as the above-mentioned hydrogeomorphology and the satellite acquisition geometry, which make it possible to characterize the a priori probabilities in a more realistic way, especially for areas with a low probability of flooding. According to this approach it is possible to express the posterior probability p(F|v) for the presence of flood water (F) given the variable v (intensity or coherence) at a certain pixel and at a certain time t as a function of the a priori and conditioned probabilities, through the Bayes equation:
p(F|v) = p(v|F)p(F) / (p(v|F)p(F) + p(v|NF)p(NF)),
with p(F) and p(NF) = 1 − p(F) indicating respectively the a priori probability of flood or no flood, while p(v|F) and p(v|NF) are the likelihoods of v, given the two events.
The flood likelihood can be estimated on permanent water bodies, while, to estimate the likelihood of areas potentially affected by flood events, the residuals of the historical series are considered with respect to a regular temporal modeling of the variable v.
Gaussian processes (GP) are used to fit the time series of the variable v. GPs are valid alternatives to parametric models, in which data trends are modeled by "learning" their stochastic behavior by optimizing some "hyperparameters" of a given autocorrelation function (kernel). The residuals with respect to this model can be used to derive conditional probabilities and then plugged into the Bayes equation.
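
The Bayes update above can be sketched numerically. This is an illustration of the equation only: the Gaussian likelihood form and all parameter values below are invented for the example, not taken from the study.

```python
# Illustrative sketch of p(F|v) = p(v|F)p(F) / (p(v|F)p(F) + p(v|NF)p(NF)),
# assuming (hypothetically) Gaussian likelihoods for the observed variable v,
# e.g. a SAR intensity value in dB.
import math

def gauss(v, mu, sigma):
    """Gaussian probability density evaluated at v."""
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_flood(v, prior_f, mu_f, s_f, mu_nf, s_nf):
    """Posterior flood probability from the Bayes equation in the text."""
    lf = gauss(v, mu_f, s_f) * prior_f          # p(v|F) p(F)
    lnf = gauss(v, mu_nf, s_nf) * (1.0 - prior_f)  # p(v|NF) p(NF)
    return lf / (lf + lnf)

# A low, water-like backscatter observation overwhelms a small a priori
# flood probability (all numbers are made up for illustration):
p = posterior_flood(v=-18.0, prior_f=0.05, mu_f=-20.0, s_f=3.0, mu_nf=-8.0, s_nf=3.0)
print(round(p, 3))
```

The example shows the role of the a priori term: even with p(F) = 0.05, an observation far more likely under the flood likelihood drives the posterior close to 1, which is how hydrogeomorphological priors and per-pixel evidence combine in the Bayes equation.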
The availability of the flood maps will allow to tackle the forecasting aspect in the next future, taking the time series of satellite images as a reference.

AIT Contribution
Sala Biblioteca @ PoliBa
15:45
15:45
15min
EUSI: the leading European gateway for VHR satellite imagery, tasking missions, and monitoring
Valerio Gulli

Abstract: EUSI is a cutting-edge Earth observation company providing advanced technological solutions for very-high-resolution (VHR) satellite imagery, 2D and 3D products, and geospatial applications, including AI-based analysis tools. Thanks to our (multi-mission) ground stations we have the unique capability of delivering imagery within 30 minutes of collection, providing our customers with timely and accurate geo-intelligence. With access to a constellation of more than 30 satellites, our partners benefit from unmatched image quality and productivity, with resolutions ranging from 30 cm to 1 m and a combined daily revisit frequency of almost 10 times per day in panchromatic, multispectral, hyperspectral and video. This year, with the addition of the high-performance satellites of the WorldView Legion constellation, we will offer more persistent and accurate monitoring, with near-real-time change detection and timely analysis at scale. The presentation focuses on EUSI's state-of-the-art capabilities and our long-standing cooperation with Planetek, a leading company in Geographic Information Systems and satellite remote sensing image processing. Join the presentation to learn more about how we are transforming the way we observe and analyse our planet.

AIT Contribution
Sala Biblioteca @ PoliBa
15:45
15min
Mergin Maps: an open source platform to take QGIS to the field!
Saber Razmjooei

Simplify field surveys by capturing geodata on your mobile or tablet. Create mobile forms with the fields you require and invite your survey teams to complete them on their phones or tablets. Captured data, along with its location, can be collected offline, then synced back to the office in seconds.

Mergin Maps is an extension of the free and open source GIS software QGIS. It allows you to open, interrogate and edit your QGIS projects on your mobile. Map layers look the same as in QGIS desktop, and you can sync your data back and forth with QGIS desktop using the Mergin Maps QGIS plugin.

Advantages of the Mergin Maps system:

  • Configurable forms and validation on the fly
  • No need for cables to get your data on/off your device
  • Connect to external GNSS devices for high accuracy location data
  • Wide selection of CRS with possible transformation and datum shift grids
  • Stake out and line recording utilising GPS
  • Share field survey with others for collaborative working
  • Safely work together on the same datasets, even offline
  • Updates from different surveyors are intelligently merged
  • Push data back from the field in real time
  • See version history and cloud-based backup
  • Fine-grained access control
  • Sync with your PostGIS datasets

Mergin Maps is developed by Lutra Consulting. With more than 14 years of experience helping organisations adopt open source GIS, we designed Mergin Maps to help solve challenges in a wide range of industries. Lutra Consulting is part of the core QGIS development team.
GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
16:00
16:00
30min
Coffee break
Sala Videoconferenza @ PoliBa
16:30
16:30
120min
Basic principles of the GNSS system and data processing with an open-source application
ALBERICO SONNESSA

Over the last fifty years, satellite-based positioning and navigation systems have gained ever-growing importance, both in research and in professional practice.
The evolution from the Global Positioning System (GPS), born as a purely military system developed by the United States, to the Global Navigation Satellite System (GNSS), which currently comprises satellite constellations operated by Russia (GLONASS), Europe (Galileo), China (BeiDou) and other countries, and whose signal is also usable by civilian users, has opened up new opportunities for researchers and accelerated the spread of applications that are now part of everyday life (e.g. the navigation tools installed on mobile phones).
Moreover, the growing availability of low-cost receivers and the deployment of an ever-increasing number of Continuously Operating Reference Stations (CORS), often belonging to regional networks, now allows professionals to carry out positioning tasks with ease, exploiting the Real-Time Kinematic (RTK) mode and thus avoiding the post-processing of the observations acquired in the field.
On the other hand, the availability of rapid positioning modes has meant that, in many cases, surveying has been reduced to the mere "push of a button", leading to a loss of theoretical knowledge of how the system works and to the underuse of the instrument's potential, especially in professional practice.
In light of this, the proposed workshop aims to provide the concepts underlying GNSS and positioning techniques, together with simple data-processing examples based on a completely free and open-source software package, RTKLIB, which allows the observations acquired by any receiver to be used for positioning tasks with performance comparable to that of commercial software.
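As a taste of what RTKLIB post-processing produces, the sketch below parses solution lines from a `.pos` output file; the field layout is assumed from RTKLIB's default latitude/longitude/height output, and the sample lines are made up:

```python
def parse_rtklib_pos(lines):
    """Parse solution lines from an RTKLIB .pos file (default LLH output).

    Header lines start with '%'. Each solution line carries, among other
    fields: date, time, latitude, longitude, ellipsoidal height and the
    quality flag Q (1 = fixed, 2 = float, 5 = single point positioning).
    Field layout assumed from RTKLIB's default output options.
    """
    solutions = []
    for line in lines:
        if not line.strip() or line.startswith('%'):
            continue  # skip blank lines and header/comment lines
        f = line.split()
        solutions.append({
            'epoch': f[0] + ' ' + f[1],
            'lat': float(f[2]),
            'lon': float(f[3]),
            'height': float(f[4]),
            'Q': int(f[5]),
        })
    return solutions

# Made-up sample, mimicking the default .pos layout:
sample = [
    "% program : RTKLIB demo",
    "2023/06/14 12:00:00.000  41.10850000  16.87800000   55.123  1  8",
    "2023/06/14 12:00:01.000  41.10850100  16.87800100   55.130  2  8",
]
# Keep only the ambiguity-fixed epochs (Q == 1).
fixed = [s for s in parse_rtklib_pos(sample) if s['Q'] == 1]
print(len(fixed))
```

Filtering on the quality flag is exactly the kind of step the workshop's "push of a button" warning is about: knowing what Q means is what separates using RTK from trusting it blindly.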

Workshop
Aula 4 @ UniBa
16:30
120min
Turning Open Data into useful information for environmental monitoring and land management with Copernicus services and Google Earth Engine
Alessandra Capolupo

This course focuses on extracting and producing valuable information for environmental monitoring and land management from open data sources, such as those provided by the Copernicus programme, a European Union initiative aimed at ensuring the availability of accurate and reliable environmental data and services to support decision-making processes. The service is an essential tool for managing, monitoring and assessing the environment and its resources, usable by a wide range of users, including policy makers, researchers and companies, to analyse and tackle the most pressing environmental issues.
After an overview of the Copernicus initiative and its services, the potential of the Google Earth Engine (GEE) platform for processing geospatial big data will be introduced. GEE is a versatile, free cloud platform developed by Google for handling geospatial big data, featuring an integrated, continuously updated database that stores the free and open-source geospatial data produced and distributed by the various space programmes. Within the platform, complex geospatial queries can be run and custom maps created, integrating a variety of data sources and tools by writing code in JavaScript or Python.
The hands-on nature of the workshop means that participants will gain basic experience in extracting key information from satellite imagery and other open geospatial data sources.
Some of the topics that will be covered in the workshop are:
• Presentation of the Copernicus programme and of the land monitoring service it provides
• Introduction to Google Earth Engine and its potential for processing and analysing geospatial big data
• Hands-on exercises on analysing and processing Copernicus data and services within Google Earth Engine.
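As a flavour of the hands-on exercises, here is a minimal, library-free sketch of one of the most common pieces of information extracted from Copernicus Sentinel-2 imagery: the NDVI vegetation index (in GEE this would be written with the JavaScript or Python API against real image collections; the tiny reflectance grids below are made-up values):

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel, in [-1, 1].

    For Copernicus Sentinel-2, NIR is band B8 and red is band B4.
    Inputs are 2D grids (lists of rows) of surface reflectance.
    """
    return [
        [(n - r) / (n + r) if (n + r) != 0 else 0.0
         for n, r in zip(nrow, rrow)]
        for nrow, rrow in zip(nir, red)
    ]

# Illustrative 2x2 reflectance grids (made-up values): vegetation
# reflects strongly in the NIR, bare soil much less.
b8 = [[0.45, 0.40], [0.12, 0.10]]   # NIR (B8)
b4 = [[0.05, 0.06], [0.10, 0.09]]   # red (B4)
for row in ndvi(b8, b4):
    print([round(v, 2) for v in row])
```

High NDVI values flag dense vegetation, values near zero flag bare soil or built-up areas; this is the kind of per-pixel band arithmetic that GEE runs at continental scale.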

Workshop
Aula 1 @ UniBa
08:30
08:30
45min
Registration / Registrazione partecipanti
Sala Videoconferenza @ PoliBa
09:15
09:15
15min
Opening remarks
Sala Videoconferenza @ PoliBa
09:30
09:30
60min
What future for FOSS?
Paolo Dabove

From 14 to 16 June 2023, the Politecnico di Bari and the Università degli Studi di Bari will host FOSS4G-IT 2023, the conference on Free and Open Source Geographic Software and Data, jointly organised by the Associazione Italiana per l'Informazione Geografica Libera (GFOSS.it APS), the Associazione Italiana di Telerilevamento (A.I.T.) and Wikimedia Italia.

The success of the previous editions (Genoa 2017, Rome 2018, Padua 2019, Turin 2020) has firmly established FOSS4G-IT as one of the reference events at the national level for users and developers of free geographic software and for producers and consumers of free geographic data, as well as an opportunity for exchange among all the communities that look to free solutions in the field of geographic information.

The aims of the event are therefore to:

present experiences in the use of Free and Open Source data and software for processing geographic information;
create opportunities for discussion and knowledge exchange among professional users, users in central and local public administration, developers and producers of geographic data;
present developments and potential of free geospatial projects across all fields of interest (cultural heritage, natural and man-made hazards, etc.);
show the state of the art of free geographic software projects and the development prospects of both the software and the communities that revolve around and support them.

In continuity with previous editions, FOSS4G-IT 2023 will be preceded by two days of introductory hands-on workshops on FOSS4G systems, while throughout the event there will be room for Community Sprints of free projects, devoted to software development, documentation translation and more.

The Italian OpenStreetMap (OSM) community, represented by Wikimedia Italia, will promote a day of mapping parties and outreach on free mapping, open to everyone; more information is available on the wiki page.

The community of Italian QGIS users will also be present with its periodic Italian hackfest; see the dedicated page for details.

The conference proceedings may be published in internationally distributed scientific journals.

Participation in the conference is free of charge, thanks to the volunteer work of the organisers and the support of the sponsors, but registration is required.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
10:30
10:30
30min
Coffee break
Sala Videoconferenza @ PoliBa
11:00
11:00
15min
Catasto-Open: open-source tools for visualising cadastral data
Chiara Sammarco

Accessing and offering services based on Italian cadastral data is not so easy. More often than not, outdated systems and fragmented databases cause considerable difficulties. Yet this information can be strategic in several fields: from the responsible use of land resources, to the definition of policies for the economic growth of an area, up to real-estate transactions.

Catasto-Open aims to make interaction with cadastral information simple and intuitive. It is a set of tools for managing the registry and geospatial data of the Italian Cadastre, which makes it possible to:

  • store, retrieve and manipulate cadastral data, including property boundaries;
  • display the cartographic representation of land parcels and buildings, property boundaries included;
  • display information on buildings and other relevant information.

It provides a centralised, easily accessible database of the information, accompanied by ad-hoc query services and based on the data made available by the government agency.

The software is intended as a complete open source solution for managing Italian cadastral data and provides the following features:

  • Cartographic representation: display of land parcels and buildings on a map, allowing their location and property boundaries to be clearly identified.
  • Intuitive user interface: a UI that makes searching for and viewing information easy.
  • Scalability: the software is designed to be scalable, thanks to the various modules that make it up (catasto-api, catasto-ingest, catasto-tools, catasto-db, etrflib and catasto-open (front-end)).
  • Security: native integration with MapStore and GeoServer and their available security systems makes it possible to protect access to Catasto-Open data and services.
  • OGC standards: the software exposes web services according to OGC standards and can therefore be fully integrated with other GIS solutions and systems.
  • Community-driven development: with an open source licence, a community of developers will be able to contribute to improving and customising the software for specific use cases in Italy.
  • Built on established open-source solutions: Catasto-Open integrates natively as a MapStore plugin on the front end, while on the back end it integrates with GeoServer for serving cartographic data according to OGC standards. The etrflib Python module handles the conversion of the Sister coordinates in CXF files, including those in Cassini-Soldner.

This talk will present the Catasto-Open software and its features. Thanks to its architecture, its community-driven development, and its scalability, security and standards compliance, Catasto-Open has everything it takes to help those who need to implement solutions for the administration of cadastral data.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
11:15
11:15
15min
An open source platform for land management in the Veneto Region
Umberto Trivelloni

The Veneto Region has long been engaged in developing a Territorial Monitoring Platform (PIMOT), which is supported with growing commitment by policy makers in order to improve the processes of programming, planning and monitoring the territory and the environment.
The project builds a complex system of information based on the geographic component and involves numerous regional offices, both central and local. The use of open source applications (QGIS, Postgres, etc.) has made it possible not only to create a very rich and diversified data network, but also to connect different categories of users, reaching the specialists who work in local offices, in closest contact with the territory, including in emergency situations.
PIMOT brings together a vast amount of internal regional data coming from the Territorial Data Infrastructure (IDT-RV) and from ARPAV's monitoring systems, complemented by external sources deriving above all from Earth Observation satellite platforms.
From this pool of data, historical and real-time information is derived that users can access through easy-to-use procedures and services.
Alongside the development and implementation of the platform, social media services have been set up to bring Earth observation topics to the widest possible audience; together with online training courses, they are spreading technical and scientific skills and, ultimately, creating the right conditions for the long-term strengthening of the platform.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
11:30
11:30
15min
"Full-coverage" geomorphological mapping using open-source software: a tool for the assessment, management and mitigation of geomorphological risks
MARA REMI


Dipartimento di Scienze della Terra e Geoambientali, Università degli Studi di Bari “Aldo Moro”
Mara Remi – mara.remi@uniba.it

Abstract
The production of geological and geothematic maps is essential for the Integrated Territorial Information System, which provides an in-depth understanding of the territory in terms of lithology, structures and morphology, making it a useful tool for spatial planning. Over time, numerous projects have contributed to the creation of these maps, using standardised symbols and colours (for example, the CARG project) to obtain a homogeneous cartographic representation with a common language that can be interpreted at the national level. Today, "full-coverage" geomorphological mapping has emerged as a fundamental tool for the assessment, management and mitigation of geomorphological risks, supporting accurate spatial planning. Unlike the "traditional" cartographic approach, which uses symbols and colours to represent morphogenetic data, this method uses polygon and point vectors. This ensures that no point or area is overlooked during the interpretation process and that every element is discretised, offering a complete and dimensionally correct representation of the complexity of the physical landscape (landforms, deposits and processes) at different scales. This full-coverage geomorphological map is produced on open source GIS platforms (such as QGIS), exploiting their capabilities for georeferencing, digitising cartographic elements, multi-scale visualisation and combining different information layers. To support map compilation, several information layers were collected and processed with QGIS, aiding the interpretation of the morphogenetic elements of the Earth's surface. Starting from an IGM 1:25,000 and CTR 1:5,000 topographic base, further information layers were added, including orthophotos, hydro-geomorphological data, digital terrain models (DTM) and LiDAR data.
Using the information contained in the last two (i.e. terrain elevation), QGIS processing tools can derive maps of geomorphological indices that support the interpretation and mapping of landforms. Examples of these layers are the "hillshade" rasters, which render the terrain using light and shadow to simulate a 3D effect, and the "geomorphon" rasters, derived from a qualitative analysis of the topography, which use different colours to visualise the most common landscape forms, such as slopes, flat terraces, hollows and valleys, as well as higher features such as ridges and crests. These elements are fundamental for mapping geomorpho-topographic units, which in turn contribute to the recognition of landforms and to the understanding of the morphogenetic processes driving landscape evolution. In conclusion, these supporting layers are considered a key aspect of full-coverage geomorphological mapping, as they allow the accurate discretisation of landscape forms and of the processes to which they are subject, improving the assessment of the level of geomorphological hazard and/or risk.
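As an illustration of how such an index layer is derived from elevation data, here is a minimal pure-Python sketch of a hillshade computation (Horn's method, the approach used by common GIS tools; the tiny DEM is synthetic and the cell size and sun position are arbitrary defaults):

```python
import math

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Compute a hillshade grid from a DEM (Horn's method, interior cells only).

    The result, in 0-255, simulates illumination from the given sun azimuth
    and altitude -- the same kind of raster used in QGIS to support the
    interpretation of landforms. Border cells are left at 0.
    """
    az = math.radians(360.0 - azimuth_deg + 90.0)  # compass to math convention
    zenith = math.radians(90.0 - altitude_deg)
    rows, cols = len(dem), len(dem[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # 3x3 neighbourhood around cell (i, j)
            a, b, c = dem[i-1][j-1], dem[i-1][j], dem[i-1][j+1]
            d, f = dem[i][j-1], dem[i][j+1]
            g, h, k = dem[i+1][j-1], dem[i+1][j], dem[i+1][j+1]
            # Horn's finite-difference slope components
            dzdx = ((c + 2*f + k) - (a + 2*d + g)) / (8 * cellsize)
            dzdy = ((g + 2*h + k) - (a + 2*b + c)) / (8 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.cos(zenith) * math.cos(slope)
                     + math.sin(zenith) * math.sin(slope) * math.cos(az - aspect))
            out[i][j] = max(0, round(255 * shade))
    return out

# Tiny synthetic DEM: a plane dipping towards the east.
dem = [[10 - j for j in range(4)] for _ in range(4)]
print(hillshade(dem)[1][1])
```

With the default north-west sun, east-dipping cells come out darker than flat terrain, which is exactly the light-and-shadow cue the text describes.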

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
11:45
11:45
15min
The reachability of mountain municipalities: how and why to build a community of local mappers – the Caprezzo Community use case
Chiara Angiolini

In this presentation TomTom will illustrate the use case of the municipality of Caprezzo (Verbano-Cusio-Ossola), a small mountain municipality inside the Val Grande National Park in Piedmont, which needed to improve its reachability.
Since 2020 the municipality of Caprezzo has seen its population grow. The possibility of remote working, the desire for more liveable space during the 2020 and 2021 lockdowns, and the rising cost of living in cities created the conditions for population growth, including among young people. This increase brought a series of consequences, including greater demand for services; in particular, a critical issue emerged in the ability of emergency responders to reach residents: the town map was out of date and contained little information, which is why the municipality of Caprezzo sought TomTom's help.
Caprezzo and TomTom worked together on a joint strategy covering the analysis of the current situation, the resources, the results, the timelines and the future opportunities. The Caprezzo Community was born at the beginning of 2023 and much has been done since then; TomTom's presentation will cover in detail:
• The resources: how TomTom supported the creation of the local mapping community, how and what was done, who provided external support and how
• The results: including what the map of Caprezzo looked like before and what it looks like today
• The future opportunities: what can be done in the near future
The community's experience with OpenStreetMap ("I would never have said working on a map could be so enjoyable", "I was afraid mapping would be difficult, but I have to admit it isn't!", "I felt useful while having fun"...) and the synergy created with TomTom ("Thank you for helping us! It's great that I can contribute directly to updating the map of my town") are further positive outcomes of this joint activity.
The issue of poor reachability raised by the municipality of Caprezzo, addressed together with TomTom thanks to OpenStreetMap, is an important experience that produced tangible results, which could be replicated in other municipalities. The strategy followed and the problems tackled together, day after day, can certainly help other communities that wish to follow this example and undertake similar initiatives.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:00
12:00
5min
Approaching open source GIS software: the most challenging concepts
ANTONELLA Marsico

The use of geographic information systems has become indispensable for all studies that use information with a spatial component. More and more newcomers are approaching this working tool for managing and representing georeferenced data, but the first approach can be very difficult, especially for students facing this kind of study for the first time.
By far the hardest concept to grasp concerns the handling of reference systems. The possibility of displaying and managing data whose coordinates are expressed in different reference systems, within a project that may use yet another reference system, creates a certain amount of confusion. When it comes to georeferencing maps that are not available online, the situation demands even greater care in choosing the reference system to assign to the raster data. However, the wide availability of data from webGIS services, each with its own reference system, reduces this problem.
Another aspect of data management in a GIS concerns the creation and editing of vector layers. Students encounter no particular difficulty in the theory, nor in creating point data. Line geometry is not especially problematic either, whereas the creation of polygons, especially in the case of continuous-coverage vectorisation, proves to be a particularly tricky procedure. Topological consistency is partly guaranteed by the appropriate tools, but some editing actions escape this control, such as overlapping vertices of the same polygon. The use of other advanced editing tools also runs into difficulties, due above all to the sequence of actions to be performed and to topological problems that are not flagged at this stage.
Among the main difficulties encountered is also that of understanding and managing the many commands associated with a layer: configuring the symbology, and the large number of operations that can be performed on individual thematic layers, prove to be rather complex.
The presence of a teacher who guides and directs the initial use of geographic information systems aids the transfer of knowledge, especially when students lack a background in computer science and programming. Guided procedures and the possibility of correcting errors immediately encourage learning the right sequence of steps and understanding the logic underlying the use of GIS. The most common errors, deriving from the procedures described above, are easily resolved because students, at the beginning of their GIS journey, tend to make more or less the same kinds of mistakes.
Nevertheless, simplifying some procedures in the user interface of dedicated software can contribute considerably to lowering the barriers that newcomers encounter in their first approach to geographic information systems.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:05
12:05
5min
The MOOC on GIScience for climate justice: from theory to practice
Daniele Codato

Thanks to the development, particularly since the turn of the millennium, of open source methodologies, tools and technologies for communicating and disseminating geographic content, grouped under the term GEO-ICT (geographic information and communication technologies), universities have gained an additional tool for bringing geographic education to university students and for reaching beyond campuses to involve civil society. This is particularly important at a time when the spread of tools such as GPS-equipped smartphones and geographic apps, drones, satellites and geoportals rich in free and open information allows anyone to collect, produce and share spatial content. Learning to use this wealth of tools and information effectively and more consciously is essential to improve project design and inclusion in decision-making processes on socio-environmental and territorial issues.
The Centre of Excellence on Climate Justice (Jean Monnet Erasmus+ project, 2020-2023) of the University of Padua, led by the "Climate change, territories, diversity" research group of the ICEA department, identified Massive Open Online Courses (MOOCs) in particular as an opportunity to convey to civil society and academia the strongly geographic themes developed by the Centre, such as climate change, climate and environmental justice, and the just transition away from fossil fuels.
During 2022 the Centre's collaborators conceived and developed two MOOCs: the first on climate change and on mitigation and adaptation actions, and the second, strongly practice-oriented, devoted to GIScience methodologies and to the free and open-source tools, platforms, and geographic and statistical data that enable the search for, production and sharing of data and information useful for participatory climate and environmental justice projects, actions and initiatives.
In August 2022, during FOSS4G-it in Florence, a first contribution on the GIScience MOOC for climate justice was presented, with the ambitious title "Geo-ICTs for good: a MOOC on GIScience for Climate Justice" (https://isprs-archives.copernicus.org/articles/XLVIII-4-W1-2022/103/2022/isprs-archives-XLVIII-4-W1-2022-103-2022.pdf), describing the theoretical and methodological framework behind the development of this tool, which was then in the planning stage. About eight months later, in April 2023, the first version of this MOOC went live on the University of Padua's Moodle platform, together with the MOOC entitled "Cambiamenti climatici e adattamenti negli ecosistemi e nelle società" (Climate change and adaptation in ecosystems and societies).
The contribution proposed for GEOdaysIT 2023 therefore presents the tangible result, the final product, of what was outlined in Florence. Starting from the visualisation and exploration of the MOOC on the Moodle platform, it will describe what was produced and the differences and adjustments made with respect to what was planned eight months earlier, based on the feedback received from students and other participants during the development phase. The presentation will also refer to its sibling MOOC which, with a different structure, provides the theoretical grounding on climate change needed to tackle this more practical part, although the two were designed to be usable independently as well.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:10
12:10
5min
Orthorectification in QGIS with OTB: limitations and possible solutions
Valerio Baiocchi

High- and very-high-resolution satellite images are by now an irreplaceable resource for Earth observation and for the extraction of territorial information. High-resolution satellite images must undergo a geometric orthorectification process before they can be used for metric purposes. Indeed, to use them correctly and compare them with previous surveys and maps, they must be processed geometrically to remove the distortions introduced by the acquisition process. It should be remembered that the images delivered by satellite operators are not properly orthorectified but, at most, have undergone an initial orientation process. Orthorectification, in fact, is not a simple georeferencing, because the process must take into account the three-dimensional acquisition geometry of the sensor. For this reason, orthorectification is usually carried out within specific commercial software, with additional costs and time on top of the image acquisition. This operation, called orientation, can be carried out using various mathematical models, such as rigorous models, those based on rational polynomial functions (RPF) and those based on rational polynomial coefficients, also referred to by some authors as rapid positioning coefficients (RPC). Some orthorectification algorithms, mainly based on the RPC approach, are however available in open source GIS software such as QGIS. OTB (Orfeo ToolBox) for QGIS contains some of these algorithms, but its interfaces are not straightforward; on the other hand, some of its simplifications are at times too limiting, such as the impossibility of entering three-dimensional ground control points (GCPs). This strongly limits the final achievable accuracy, because it prevents a correct estimate of the influence of the different terrain morphologies on the acquisition geometry.
Indeed, the procedure proposed in OTB does not allow the full potential of the RPC models, on which OTB itself is based, to be exploited. To work around these limitations it is possible, for example, to build a "pseudo DEM" and adopt other expedients to complete the whole process, obtaining absolute results comparable to those of the most accredited software.
In particular, the orthophoto obtained in this experiment showed a conspicuous displacement at a relief, due to the correction with the DEM. A first quick check of the orthorectification process was carried out by comparing the original image (left) and the orthorectified image (right) with the vector file of the 1:5,000 map. Good overlap was found near roads and buildings. To assess the metric accuracy of the results obtained with the open source Orfeo ToolBox software, a comparison was carried out with software recognised by the academic community, using the RPC method. Forty points were chosen from the 1:5,000 map, at features easy to collimate so as to minimise the collimation error, and distributed homogeneously across the whole study area. Comparing the orthophotos obtained with the open source Orfeo ToolBox and with the reference software, the mean and the mean absolute value of the differences between the images orthorectified by the two software packages were derived along N and E:
                 E       N
Mean             0.08    0.39
Mean absolute    0.27    0.423
The procedure we have developed and propose here may not be the fastest, but it is a valid alternative for those who use satellite images as a tool in their research or professional work.
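The accuracy figures above boil down to the mean and the mean absolute value of the per-point differences between the two orthorectified images along each axis; a near-zero mean with a larger mean absolute value points to a random rather than systematic offset. A minimal sketch with made-up residuals:

```python
from statistics import mean

def summarize_differences(deltas):
    """Mean and mean absolute value of coordinate differences (one axis).

    'deltas' holds, for each check point, the difference between the two
    orthorectified images along one axis (E or N), in metres.
    """
    return mean(deltas), mean(abs(d) for d in deltas)

# Illustrative residuals for a handful of check points (made-up values):
de = [0.3, -0.2, 0.45, -0.15, 0.1]
m, ma = summarize_differences(de)
print(round(m, 2), round(ma, 2))
```

Signed errors cancel in the plain mean, so the mean absolute value is the more honest summary of how far apart the two orthophotos really are.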

Grizonnet, M.; Michel, J.; Poughon, V.; Inglada, J.; Savinaud, M.; Cresson, R. Orfeo ToolBox: open source processing of remote sensing images. Open Geospat. Data Softw. Stand. 2017, 2.
Toutin, T. State-of-the-art of geometric correction of remote sensing data: a data fusion perspective. Int. J. Image Data Fusion 2011, 2, 3–35.

Jacobsen, K. Systematic geometric image errors of very high resolution optical satellites. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2018, XLII-1, 233–238.

Fraser, C.S.; Yamakawa, T.; Hanley, H.B.; Dare, P.M. Geopositioning from high-resolution satellite imagery: experiences with the affine sensor orientation model. In Proceedings of the 2003 IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003.

OTB Users, Orectification Models. July 2019. Available online: http://otb-users.37221.n3.nabble.com/Orthorectification-models-td4031100.html

Baiocchi, V.; Giannone, F.; Monti, F.; Vatore, F. ACYOTB Plugin: Tool for Accurate Orthorectification in Open-Source Environments. ISPRS Int. J. Geo-Inf. 2020, 9, 11. https://doi.org/10.3390/ijgi9010011

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:15
12:15
5min
Open data for thematic maps
Antonella Marsico

The possibility of accessing the various open datasets available online opens many opportunities for producing and enriching thematic maps, with the related topographic characterization.
Data-sharing services make it possible to use such data as base maps to locate the study area and the results of the work carried out.
However, these data do not always support the cartographic representation of a study's results satisfactorily. In the case of digital terrain models, for example, information on man-made infrastructure is scarce, while the use of imagery or road maps can add a surplus of information that makes the dataset to be represented hard to read: the variation of tones and colours often becomes a disturbing element.
The collaborative mapping project OpenStreetMap (OSM), fully in the 'open' philosophy, allows users to use, modify and add geospatial data in an intuitive way, enabling the production of geothematic maps for any purpose and overcoming the limits of some existing datasets. Moreover, since the database can be queried through GIS software, it is possible to select the vector features that can enrich a base map carrying little information. Several applications in the scientific literature show the importance of this tool: from humanitarian mapping (with the strong international presence of the YouthMappers network) to projects aimed at enhancing the territory by mapping geo-itineraries and sites of geological, naturalistic and historical interest. Another highly relevant example is the use of OSM data to produce flood-forecast maps in which the base map is a digital terrain model, available online, integrated with the vector datasets of roads and built-up areas selected from the OSM database. Thanks to the integration of these data in the QGIS software and to the available plugins, searching and selecting the data proved simple and immediate.
The immense OSM vector dataset therefore allows a wide degree of customization of the base map, with the possibility of selecting and using the most suitable features to highlight the results of scientific studies as well.
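As an illustration of the OSM querying step described above (our own sketch; the QGIS QuickOSM plugin generates queries of this kind under the hood), one can build an Overpass API request for a tag and a bounding box. The key, value and bbox below are example values:

```python
# Build an Overpass QL query for ways tagged key=value inside a bbox
# (south, west, north, east), asking for full geometry in JSON output.

def overpass_query(key, value, bbox):
    s, w, n, e = bbox
    return (
        "[out:json][timeout:25];"
        f'way["{key}"="{value}"]({s},{w},{n},{e});'
        "out geom;"
    )

# Example: primary roads in a small bbox around Bari
q = overpass_query("highway", "primary", (41.0, 16.8, 41.2, 17.0))
print(q)
```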
References:
- Marsico, A.; Lisco, S.; Lo Presti, V.; Antonioli, F.; Amorosi, A.; Anzidei, M.; Deiana, G.; De Falco, G.; Fontana, A.; Fontolan, G.; Moretti, M.; Orrú, P.E.; Serpelloni, E.; Sannino, G.; Vecchio, A.; Mastronuzzi, G. (2017). Flooding scenario for four Italian coastal plains using three relative sea level rise models. Journal of Maps, 13:2, 961-967. DOI: 10.1080/17445647.2017.1415989
- Antonioli, F.; Anzidei, M.; Amorosi, A.; Lo Presti, V.; Mastronuzzi, G.; Deiana, G.; De Falco, G.; Fontana, A.; Fontolan, G.; Lisco, S.; Marsico, A.; Moretti, M.; Orrù, P.E.; Sannino, G.M.; Serpelloni, E.; Vecchio, A. (2017). Sea-level rise and potential drowning of the Italian coastal plains: flooding risk scenarios for 2100. Quaternary Science Reviews, 158, 29-43.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:20
12:20
5min
Data collection for assessment of hazardous conditions and ecosystem benefits of urban trees using QField/QGIS
Calamai Stefano

Calamai S., Francesconi A. and Cinelli F.
Department of Energy, Systems, Territory and Construction Engineering, University of Pisa, Largo Lucio Lazzarino, Pisa
The aim of this paper is to highlight the main benefits of using the QField app in tree census activities. Entrusting all of the information to the project's main GIS platform, stored on the PC, leaves only the task of checking the collected data, with the bonus of in-depth topographical and geospatial analysis. We illustrate the results of a tree census along state road 67 Tosco-Romagnola (SS 67), which connects Pisa with Marina di Ravenna, in particular the stretch between the town of Fornacette and Pontedera. Part of this stretch (about 1.7 km) belongs to the Municipality of Calcinaia, with which we are collaborating on the urban tree inventory. This stretch is straight, has a discontinuous and partly uneven double row of trees of the same species (hackberry), and carries high vehicular traffic, including heavy vehicles, due to the presence of artisanal and industrial activities. The goal of the work was to evaluate the phytopathological and hazardous conditions and the ecosystem benefits of the trees along this stretch of road, in terms of health, CO2 absorption, particulate matter (PM10) adsorption and shading, using open-source GIS applications (QField and QGIS). Bio-morphometric parameters (diameter at breast height, tree height, first branch height, crown width along the cardinal points, crown shape and density, tree defects such as cavities, decay, severity of pruning, etc.) were recorded in situ during spring and summer. An integrated urban tree inventory was built, including both quantitative and qualitative information, and the integration of the three software packages (i-Tree, QGIS, QField) allowed us to observe the precise, punctual geographical context.
Moving from paper records to a computerized record linked to a geographical point allows us to tie the observed defects and the calculated benefits of the trees to their position (proximity to buildings, presence of agricultural spaces). Hackberry (Celtis australis L.) is a broadleaf tree species with a fast growth rate, small leaves and a height at maturity of 20 meters. The total number of hackberry trees was 321 (179 on the right and 142 on the left heading towards Pontedera). Their main hazardous defects depend on severe pruning and on the conditions of the rooting site, but overall the most abundant failure-risk classes are B (low), with 155 plants, and C (moderate), with 136 trees. Only one tree is in class D (extreme failure risk). These trees help reduce summer air-conditioning loads by shading buildings and, where the canopy is sufficient, by lowering air temperatures. In our case the total electricity saved was 40 GJ, for a value of about 2,240 Euros. Annual carbon dioxide reductions and releases amount to 178,600 kg/year (total stored CO2) and 13,440 kg sequestered (total net CO2 value about 110 Euros). The trees also decrease air pollution by adsorbing fine dust, quantified in 270 kg of deposited pollutants.
The awareness of these benefits and of their ecosystem and socio-economic value is a starting point for improving the urban green 'capital' and the management practices that optimize its benefits. This integrated approach is an information and governance opportunity to create a widespread awareness of the value of urban green assets and to implement concrete actions that maximize their functions against the impact of climate change and air pollution.
Currently, the availability of an open-source, pocket GIS platform such as QField truly represents a unique opportunity to make the work easier, faster and more accurate. At the same time, the GIS gives us a continuous overview of the data produced on site and lets us further enrich the survey information through geospatial analysis, which helps the final interpretation.
Sometimes trees were prematurely removed, not replaced, and inadequately maintained, because cost control outweighs management aimed at increasing their health and the ecosystem services they provide over the long term. The computerized tree census allowed us to give a fairly broad picture of the benefits of the trees along this very busy road. Considering the empty spots (removals) and the stumps still present, the system will be able to simulate the increase in benefits once the trees are reintegrated.
In conclusion, this study could be used by the Municipality for the redevelopment of the tree-lined road and to improve the quality of life of residents, mitigating the effects of pollution.
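For a rough sense of scale, the totals reported above can be broken down per tree (321 hackberries). This back-of-the-envelope breakdown is our own illustration, not an i-Tree output:

```python
# Per-tree breakdown of the reported totals (figures from the text;
# the per-tree division is our own illustration).
TREES = 321
totals = {
    "electricity_saved_GJ": 40,
    "co2_stored_kg": 178_600,
    "co2_sequestered_kg_per_year": 13_440,
    "pm_removed_kg": 270,
}
per_tree = {k: round(v / TREES, 2) for k, v in totals.items()}
print(per_tree)  # e.g. each tree sequesters roughly 42 kg CO2 per year
```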

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:25
12:25
5min
The European Regional Development Fund (ERDF) and the use of Open Source Geospatial (OSGeo) tools to support planning and monitoring in Regione Piemonte
Enrico Suozzi, Giorgio Roberto Pelassa

The new EU programmes and funds aim to support the territories in facing the main challenges for development, combining a relaunch of competitiveness and sustainable and inclusive growth.
To address issues such as adaptation to climate change, the resilience of territories, and the protection of biodiversity and natural ecosystems, traditional planning and monitoring methods need to be supported by new, more effective systems capable of maximizing the effectiveness of policies and investments. In this context Regione Piemonte, which has always been active in the use of OSGeo tools, has developed a new methodology that it will apply under ERDF 2021-2027 Policy Objective 2 for identifying urban heat islands (UHI) and monitoring their evolution over time. The tool uses satellite images made available by NASA and the European Space Agency (ESA), processed with free software such as QGIS, SAGA, GRASS, Python and R. The analysis combines data derived from processing a series of satellite images through their spectral indices with data on the spatial distribution of the population most exposed to the effects of summer heat waves, extracted from the demographic data associated with the census sections provided by ISTAT. The satellite-derived indices selected for use are the LST (Land Surface Temperature), which provides basic information on the spatial distribution of temperatures, and the NDVI (Normalized Difference Vegetation Index), which highlights the vegetative state and is considered strategic because it can describe the "beneficial" cooling effect on ground temperatures generated by the presence of vegetation. The suitably normalized spectral index maps were then combined with the demographic data to produce heat-island vulnerability maps of urban centres (all Piedmont cities with more than 10,000 inhabitants), helpful in identifying the most sensitive areas where appropriate NBS (Nature-Based Solutions) interventions should be implemented.
This analysis is backed by a monitoring system that evaluates the effects of the adaptation interventions to be implemented: the LST and NDVI indices were evaluated on a seasonal basis over a ten-year series (2013-2022) on pilot areas. NDVI is considered suitable for measuring the effects induced by urban transformations and therefore usable for quantifying the effectiveness of the future NBS interventions financed by the new programming. The activity carried out so far has highlighted the advantages of combining open data and free software: remote-sensing data provide updated, detailed information on land cover and make it possible to directly estimate certain physical and ecosystem quantities of the territory, while free software provides extensive analysis capabilities at no cost.
The ongoing activity has led to a first proposal for a methodology that will be supplied as a support tool for planning and programming activities. While showing some limitations, as it analyzes only some aspects of the UHI phenomenon, the methodology presents interesting development potential that will be the subject of further study.
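A minimal sketch of the kind of combination described above: min-max normalize the LST and (inverted) NDVI values per census section, then weight by exposed population. The normalization and the simple weighting form are our assumptions, not Regione Piemonte's exact formula:

```python
# Hotter (high LST), less vegetated (low NDVI) and more populated
# census sections score higher. Toy values, for illustration only.

def minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def vulnerability(lst, ndvi, population):
    heat = minmax(lst)
    veg_deficit = [1 - v for v in minmax(ndvi)]
    pop = minmax(population)
    return [(h + g) / 2 * p for h, g, p in zip(heat, veg_deficit, pop)]

v = vulnerability(lst=[30.0, 38.0, 34.0],
                  ndvi=[0.6, 0.1, 0.3],
                  population=[500, 2000, 1000])
print(v)  # the hot, bare, crowded section ranks highest
```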

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:30
12:30
5min
Using open-source software and crowdsensing techniques to calibrate data for building digital twins of cultural heritage
Cristina Monterisi

Photogrammetry enables the reconstruction of accurate digital models useful for preserving cultural heritage, as required by the European directives on conservation and digitization. At the same time, Europe is pushing towards the collection and processing of open-source information. Open software is therefore acquiring a leading role in detailed 3D modelling. Among these tools, MicMac is one of the most powerful thanks to its programmability and versatility, given the availability of several calibration algorithms and its ability to precisely identify tie points. Its use, however, has been limited by the complexity of its command language; only recently has a user-friendly interface been introduced to make it accessible to users with limited programming skills.
Today, the main research directions in photogrammetry concern reducing acquisition and processing times, lowering camera prices and improving the metric accuracy of the results. Crowdsensing based on smartphones or low-cost compact cameras is taking on a relevant role in meeting these needs, since it allows rapid reconstructions and guarantees adequate geometric precision, provided attention is paid to estimating the interior orientation parameters and to the camera calibration procedure.
This contribution therefore aims to evaluate the reconstruction performance of data acquired with a smartphone and a compact camera using two different calibration algorithms, RadialExtended (RE) and FraserBasic (FB), within the open-source platforms MicMac, for image preprocessing, and CloudCompare, for point-cloud processing. The investigations were carried out on the Ognissanti Church in Valenzano, chosen as a case study for its historical relevance and the complexity of its surroundings. The surveys were conducted with the CMOS sensors of a Nikon D3300 and a Xiaomi Redmi 10C under stable, optimal weather conditions.
To ensure data comparability, the acquisition distances were set keeping the pixel size and the overlap between contiguous images constant. After a preliminary image-quality assessment to discard blurred scenes, calibration was carried out with the algorithms mentioned above and the resulting dense point clouds were filtered to remove unwanted objects and background noise. Finally, cloud-to-cloud distances and reconstruction quality were evaluated considering tie-point matching, the mean residuals of the bundle block adjustment and the processing times.
Comparing the results of the two calibration methods applied to the compact camera shows that a satisfactory accuracy is reached with a different number of iterations per algorithm (5 iterations with RE correspond to about 60-75 with FB). With the smartphone, by contrast, the same result is reached after about 15-20 iterations. However, the tie-point matching criteria between correlated images remain constant across the calibration techniques, which is reflected in a substantial difference in the values belonging to each class. This implies that MicMac's default thresholds are not optimized and should be specified for each calibration process.
The final clouds are almost superimposable, their mean distance being 0.01 m, although the standard deviation between the compact-camera clouds is about 10 times lower than that of the smartphone clouds, consistently with the literature.
In the reconstruction phase it was not possible to build the 3D model from smartphone data with the FB algorithm, because of the lower flexibility and effectiveness of this technique when environmental conditions do not allow top-quality image surveys. Moreover, the dense clouds produced from the compact-camera images turned out to be 3 times denser than those generated from the smartphone, despite halved processing times.
Since the two methods showed equivalent performance also in the reconstruction of architectural details, at the cost of longer acquisition and processing times, the combination of compact camera and RE is the optimal solution.
The use of MicMac brings considerable advantages over the established platforms, being high-performing and programmable at every stage of the 3D reconstruction process. In the future, however, it will be necessary to investigate calibration automation, processing-speed optimization and real-time modelling.
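The cloud-comparison metrics mentioned above (mean distance and standard deviation between clouds) can be sketched as follows; this is our own brute-force illustration, not MicMac or CloudCompare code, and only suitable for tiny example clouds:

```python
# For each point of cloud A, take the distance to its nearest neighbour
# in cloud B, then report mean and (population) standard deviation.
import math

def cloud_to_cloud(cloud_a, cloud_b):
    dists = [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return mean, std

# Two toy clouds offset by 1 cm along z
mean, std = cloud_to_cloud([(0, 0, 0), (1, 0, 0)],
                           [(0, 0, 0.01), (1, 0, 0.01)])
print(mean, std)
```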

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:35
12:35
30min
FOSS4G in Solar System exploration: tools, strategies and perspectives
Alessandro Frigeri

Solar System exploration, which began in the 1960s with the race to the Moon, turned from the following decade first to the planets closest and most similar to Earth, Venus and Mars, and then progressively reached all the other bodies and the extreme edges of the system. For at least two decades Italy and ASI have made a decisive contribution to the largest international missions in this field.

Italian scientific instruments fly on American and European probes such as Mars Express, TGO and MRO (in orbit around Mars), BepiColombo, dedicated to the study of Mercury, and ExoMars, which will bring a robotic rover to Mars. Italy has also played a leading role in the Cassini-Huygens mission (which studied the Saturn system) and in Rosetta (dedicated to the study of comet Churyumov-Gerasimenko), and will be on board the next European probes dedicated to the study of exoplanets, Cheops and Plato.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
13:05
13:05
85min
Lunch
Sala Videoconferenza @ PoliBa
14:30
14:30
15min
Faunalia Toolkit QGIS plugin
Matteo Ghetta

Faunalia Toolkit is a new Python plugin for QGIS. The plugin adds a provider to the Processing toolbox, so it benefits from all the analytical features already present in QGIS: algorithms can be added to a model, run in batch mode, executed in the background, and launched headless (without starting QGIS) through the qgis_process command.

Faunalia Toolkit comprises a suite of geographic, analytical and data-download algorithms. Thanks to its very simple framework it is easy to maintain and update, as well as very easy for users to work with.

Have you ever wondered where the antipode of your city is? Among the geographic algorithms is the possibility of creating the antipode from a pair of coordinates or from a point layer.

You can download the ERA5-Land climate data of the Copernicus project (https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-land?tab=overview) from 1950 to today through a very simple graphical interface that uses the cdsapi Python library developed by Copernicus itself. The returned data are in GRIB format, and QGIS's temporal controller can be used to animate the map by date and time.

You can use QGIS as a true weather service for the present as well as the past. Thanks to the excellent APIs of the Free Weather service (https://open-meteo.com/) you can download weather data for the whole world from 1940 to today at a 2 km resolution. With the same APIs you can also get a forecast bulletin up to 7 days ahead, with up to 40 weather variables!

Another algorithm, focused on vector analysis, lets you obtain quick statistics (mean, median, standard deviation, etc.) for one or more fields of a point layer whose points fall inside a polygon.

Finally, thanks to the pandas library, Faunalia Toolkit lets you reshape the attribute table of a vector layer from wide to long format.

In the future we will add further algorithms to this toolbox.
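The antipode computation mentioned above can be sketched in a few lines (our own illustration, not the plugin's code):

```python
# Antipode of a WGS84 point: negate the latitude and shift the
# longitude by 180 degrees, wrapping it back into [-180, 180].

def antipode(lat, lon):
    anti_lon = lon + 180 if lon <= 0 else lon - 180
    return -lat, anti_lon

print(antipode(41.1, 16.9))  # Bari -> roughly (-41.1, -163.1), in the South Pacific
```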

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
14:45
14:45
15min
Integration between G3W-SUITE and QGIS: state of the art, latest developments and future perspectives
Walter Lorenzetti

G3W-SUITE is a modular client-server application (based on QGIS Server) for managing and publishing interactive QGIS cartographic projects of various kinds in a fully autonomous, simple and fast way.

Access administration, project consultation, editing functions and the use of the various modules are based on a hierarchical user-profiling system, open to modification and modulation.

The suite consists of two main components: G3W-ADMIN (based on Django and Python) as the web administration interface and G3W-CLIENT (based on OpenLayers and Vue) as the cartographic client, communicating through a set of REST APIs.

The application, released on GitHub under the Mozilla Public License 2.0, is compatible with the LTR versions of QGIS and relies on strong integration with the QGIS APIs.

This presentation will give a brief history of the application and insights into the project's main developments over the last year, including:
- new editing functions and closer integration with QGIS tools and widgets, to simplify the setup of web cartographic management systems
- management of embedded QGIS projects
- management of WMS-T and MESH data and integration of the Temporal Controller function
- on/off toggling of individual symbology categories, as in QGIS
- integration of the QGIS Processing API, to embed QGIS Processing modules and run geographic analyses online
- structured log consultation on three levels: G3W-SUITE, QGIS-SERVER and DJANGO

The talk, accompanied by examples of the features in use, is addressed both to developers and to users of various levels who want to manage their own QGIS-based cartographic infrastructure.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
15:00
15:00
15min
progettoPRO: Punti di Riferimento Open (Open Reference Points)
Fabio Zonetti

progettoPRO is a project for collecting and sharing the coordinates of points monumented on the ground and measured with GNSS instruments in RTK, FAST-STATIC or STATIC mode. Sharing takes place through the QGISCloud platform under a CC BY-NC licence, so all data in the webGIS must be free. The information (attributes) can be obtained directly on the dedicated web map or through OGC WMS and WFS services. The goal is to create a dense mesh of free benchmarks, at no cost to the user. ProgettoPRO is also a form of protest against the fees charged for the monographs of IGM and cadastral geodetic points. ProgettoPRO is reachable at http://progettopro.e42.it and its boundaries are limited to the territory of the Municipality of Rome, but the project can be copied and activated in other municipalities as well.
Participation is open to everyone, provided the measurements are taken with professional instruments and refer to points monumented on the ground. It is also mandatory to provide the RINEX file of the measurements, as it must be shared and downloadable from the webGIS platform. The idea behind progettoPRO is inspired by OSM: where OSM aims to create the map of the world, progettoPRO wants to create the free geodetic mesh.
progettoPRO has two main purposes:
- to act as a support network for traditional topographic surveys (total station), in order to georeference measurements in local coordinates, providing valuable help to those who do not own GNSS instruments;
- to create a densification network through topographic measurements tied to the GNSS points, in order to obtain a mesh of benchmarks in urban canyons or areas of heavy tree cover, which would hinder GNSS measurements.
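As a small related illustration (our own, not part of progettoPRO), the approximate distance between two benchmarks given in WGS84 coordinates can be computed with the haversine formula; the coordinates below are made-up examples:

```python
# Great-circle (haversine) distance between two reference points,
# on a spherical Earth: an approximation, not a geodetic computation.
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000.0):
    """Approximate distance in metres on a sphere of radius r."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

d = haversine_m(41.893, 12.483, 41.903, 12.496)
print(round(d), "m")  # roughly 1.5 km between the two example points
```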

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
15:15
15:15
15min
G3W-SUITE and QGIS: a solution for building web cartographic management systems
Walter Lorenzetti

With reference to several case studies, the presentation will show how the G3W-SUITE application makes it possible, in a simple yet refined way, to set up cartographic management systems for handling geographic data online.

As an example we will illustrate the case of Regione Lazio, which has been using a system based on G3W-SUITE and QGIS for several years. It has allowed the region not only to publish public web services, but also to set up web cartographic management systems dedicated to internal staff for managing the territorial matters within their remit:
- damage caused by wildlife and the related compensation procedures
- environmental impact assessment procedures
- wolf genetics
- reports of wild boar presence in urban areas
- sea turtle nests and strandings
- road accidents involving wildlife

The close integration between the suite and QGIS has made it possible to build web cartographic management systems featuring:
- numerous geometric editing functions
- customizable structure of the editing and attribute-consultation forms
- simplified attribute entry thanks to the possibility of inheriting from QGIS: editing widgets, mandatory and uniqueness constraints, default values, conditional forms and expression-based cascading drill-down
- the possibility of defining geographic constraints on viewing and editing, so as to split the territory into areas of competence associated with individual users
- the possibility of differentiating the accessible information content by user and role
- descriptive data analysis through integration with charts built with the DataPlotly plugin

Thanks to the contribution and funding from Regione Lazio dedicated to the development of, and integration with, QGIS's data-editing features, G3W-SUITE stands as a valid tool for setting up advanced systems for managing geographic data on the web.

By way of example, here is a series of use cases:
- Environmental Protection Agency of Regione Piemonte: post-event damage census and usability, management and cartographic representation of post-earthquake inspection requests
- Gran Paradiso National Park: management of the signage along the park's trails
- Regione Piemonte: preparation of Civil Protection Plans
- Environmental Protection Agency of Regione Lombardia: Hydrological Information System

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
15:30
15:30
15min
How to serve a protected OGC API with pygeoapi
Antonio Cerciello

API security is crucial to prevent unauthorized access and to protect data privacy. Several mechanisms exist to secure modern APIs, including API keys, OAuth2/OpenID Connect and JSON Web Tokens (JWT). Each offers a different level of security and flexibility, depending on the API's needs.

In the OGC Open APIs too, security must be addressed in a standardized, agnostic way. This is where fastgeoapi, a new open-source tool, comes into play. Designed as an authentication and authorization layer on top of pygeoapi, fastgeoapi offers an easily configurable protected infrastructure.

In this talk we will show how to configure and protect a vanilla pygeoapi with Keycloak and Open Policy Agent in order to publish secure OGC APIs in a standard way. We will show how fastgeoapi can offer a secure and smooth authentication and authorization experience to API users.

pygeoapi is a Python implementation of the OGC API suite of standards. It is an open-source server implementation that provides a simple way to publish geospatial data on the Web using standard protocols such as HTTP and JSON.

FastGeoAPI is an open-source software based on FastAPI and pygeoapi, able to integrate with OpenID Connect providers (Keycloak, WSO2, etc.) and with Open Policy Agent. Thanks to these features, FastGeoAPI is an extremely useful tool for securing APIs, offering a simple and fast way to implement adequate levels of authentication and authorization.
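To make the JWT mechanism mentioned above concrete, here is a minimal sketch of the token structure: three base64url-encoded parts, header.payload.signature. Decoding without verifying the signature, as below, is for inspection only; in a real deployment the token is verified against the issuer's keys (e.g. Keycloak's JWKS endpoint):

```python
# Build a toy unsigned JWT and decode its payload (illustration only).
import base64
import json

def decode_segment(segment):
    padded = segment + "=" * (-len(segment) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def b64url(obj):
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

token = b64url({"alg": "RS256"}) + "." + b64url({"sub": "alice"}) + ".signature"
header, payload, _sig = token.split(".")
print(decode_segment(payload))  # the claims carried by the token
```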

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
15:45
15:45
15min
Group photo
Sala Videoconferenza @ PoliBa
16:00
16:00
30min
Coffee break
Sala Videoconferenza @ PoliBa
16:30
16:30
15min
Open data for multidimensional flood-risk analysis
Isabella Lapietra

Floods are the most common natural hazards, causing property damage and loss of human life. Their impacts can differ depending on the local physical conditions and on the socio-economic context of the affected community. The analysis of social vulnerability therefore becomes of primary importance for understanding the main factors influencing the capacity of a specific community to anticipate, cope with and recover from a calamitous event. Floods are not predictable, but vulnerability assessment, together with mitigation measures and effective emergency-management plans, can reduce their impact and facilitate recovery actions. In this context, among risk-mitigation strategies, mapping social vulnerability and the areas most exposed to flood hazard is crucial in the emergency-preparedness phase.
This work investigates the correlation between flood hazard and socio-economic factors in Basilicata (southern Italy) through a statistical and geographic approach. Within the national territory, Basilicata is a hot spot of hydrogeological instability, as almost 50% of its municipalities are exposed to landslide or flood risk.
The whole database consists exclusively of open-source data, which made it possible to freely investigate both the physical aspects of the hazards and the characteristics of the demographic fabric of the potentially affected communities.
The results highlighted the presence of 107,587 socially vulnerable inhabitants located in areas of high flood hazard.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
16:45
16:45
15min
Low-cost AirQuality stations + open standard (OGC SensorThings) + open data + open source (FROST + QGIS plugin for sensors)
Piergiorgio Cipriano, Marika Ciliberti

This is the story of two twin projects (namely AIR-BREAK and USAGE) undertaken by Deda Next on dynamic sensor-based data, from self-built air quality stations to the implementation of an OGC-standard-compliant client solution.

In the first half of 2022, within the AIR-BREAK project (https://www.uia-initiative.eu/en/uia-cities/ferrara), we involved 10 local high schools in self-building 40 low-cost stations (ca. 200€ each, with off-the-shelf sensors and electronic equipment) for measuring air quality (PM10, PM2.5, CO2) and climate (temperature, humidity). After assembly was completed, in late 2022 the stations were installed at high schools, private households, private companies and local associations. Measurements are collected every 20 seconds and pushed to the RMAP server (Rete Monitoraggio Ambientale Partecipativo = Participatory Environmental Monitoring Network - https://rmap.cc/).

Hourly average values are then ingested with Apache NiFi into the OGC SensorThings API (STA) compliant server of the Municipality of Ferrara (https://iot.comune.fe.it/FROST-Server/v1.1/), based on the open-source FROST solution by the Fraunhofer Institute (https://github.com/FraunhoferIOSB/FROST-Server).

STA provides an open, geospatial-enabled and unified way to interconnect Internet of Things (IoT) devices, data and applications over the Web (https://www.ogc.org/standard/sensorthings/). STA is an open standard, it builds on web protocols and on OGC’s SWE standards and has an easy-to-use REST-like interface, providing a uniform way to expose the full potential of the IoT (https://github.com/opengeospatial/sensorthings/).
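To illustrate the STA interface described above, the following sketch builds a typical SensorThings query URL against the Ferrara FROST endpoint mentioned earlier. The entity path and filter values are illustrative examples, not taken from the talk, and no request is actually sent:

```python
from urllib.parse import urlencode

# Base URL of the FROST server cited above; the query parameters below are
# OData-style options defined by the SensorThings API standard.
BASE = "https://iot.comune.fe.it/FROST-Server/v1.1"

def sta_query(entity: str, **options: str) -> str:
    """Build a SensorThings API request URL for the given entity set.

    Options are OData query options such as filter, orderby, top;
    the leading '$' required by the standard is added here.
    """
    params = {f"${k}": v for k, v in options.items()}
    return f"{BASE}/{entity}?{urlencode(params)}"

# Example: the latest 10 observations of a (hypothetical) PM10 datastream.
url = sta_query(
    "Datastreams(42)/Observations",
    filter="result gt 50",
    orderby="phenomenonTime desc",
    top="10",
)
print(url)
```

The same REST-like pattern works against any STA-compliant server, which is precisely the interoperability point the standard makes.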

In the second half of 2022, within the USAGE project (https://www.usage-project.eu/), we released v1 of a QGIS plugin for the STA protocol.

The plugin enables QGIS to access dynamic data from heterogeneous domains and different sensor/IoT platforms, using the same standard data model and API. Among others, the dynamic data collected by the Municipality of Ferrara will be CC-BY licensed and made accessible from the municipal open data portal (https://dati.comune.fe.it/).

During the talk, a live demo will be showcased, accessing public endpoints that expose measurements (time series) about air quality (from EEA), water (BRGM), bicycle counters, traffic sensors, etc.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
17:00
17:00
15min
From real to virtual: sharing an urban 3D model as open data in open-source environments
Luigi LA RICCIA, Vittorio Scolamiero, Yogender Yadav

The new concept of the Digital Twin (DT) as a city model is interlinked with Smart City applications on many levels: interactivity between the virtual model and the real world, simulation and analysis of the present and planned city, and emergency planning and management, to mention a few. The strength of the DT lies in the incorporation of time as a fourth dimension, treating time as a variable that modifies the data and the semantic information. The aim of this emerging DT concept is to provide a physical infrastructure, data, information and procedures for the management of a complex system, in order to offer the public administration a platform for designing, testing, simulating and analysing the present and planned city, and to offer, through web sharing, a high-quality urban 3D model as open data to all operators in Open Source (OS) environments. Several applications exist for sharing a 3D model on the web, but the lack of a detailed and adequately sized development platform is a bottleneck in this context. This work builds on the Torino DT project: the source data for the 3D model were acquired from images and point cloud datasets from 2022. The first step after the acquisition phase was processing the source data in order to build a 3D model of the entire city. There have been rapid technological developments in photogrammetric and LiDAR techniques for producing detailed and accurate 3D models. 3D point clouds from state-of-the-art LiDAR and photogrammetry provide a powerful collection of the geometric elements of a scene, with their position, orientation and shape in 3D space. Numerous computer vision programs, such as 3D Zephyr, VisualSfM, Meshroom, WebODM, etc., offer OS solutions for 3D reconstruction from 2D images, with processing capabilities comparable to commercial ones in terms of geometric features and information.
These open-source tools are adequate for research purposes to explore the potential of 3D models from photogrammetry and LiDAR. The 3D model is based on a reality mesh model with a specific, well-known workflow. The next step is to share part of the 3D model in OS environments that can be accessed with JavaScript code, such as the Cesium platform. The 3D web platforms analysed in this work reflect the ever greater interest in OS software, interoperability and collaboration standards, in order to work within an open ecosystem. This research field opens the way to new opportunities, for instance the DT as Open Data. The free availability of an urban 3D model built up from reality could create a new order of opportunity for future model updates or uses, especially for real estate operators. Furthermore, particular attention has been paid to model updating with collaborative and crowdsourcing solutions, with the perspective of developing a community able to use and update the 3D model.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
17:15
17:15
15min
DigiAgriApp: the application for managing your agricultural fields
Luca Delucchi

DigiAgriApp is a client-server application for managing different kinds of data related to agricultural fields. It can store information about crops (species, training/cultivation systems...), any kind of sensor data (including sensors and hardware devices, weather, soils...), irrigation information (system type, openings...), field operations (pruning, mowing, treatments...), remote sensing data (acquired by different devices such as phones, drones and satellites) and production quantities.

The DigiAgriApp server consists of a PostgreSQL/PostGIS database and a REST API service to interface with it. The server is developed using Django and the Django REST framework extension, with other minor extensions used to build the REST API. This service is the key interface between the database and the client. For the API we chose a nested layout in which the main element is the farm; in this way a user only sees the farms related to them and, from there, can browse the other nested elements: first the farm's fields, then other elements such as sensor and remote sensing data, or further sub-fields and rows, down to the individual plants. The REST API uses JavaScript Object Notation (JSON) as its input and output format, to simplify and standardize communication.
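The nested farm-to-plant layout described above can be sketched as follows. The base URL, route names and response fields are hypothetical, for illustration only; the real DigiAgriApp API paths may differ:

```python
import json
from urllib.parse import urljoin

# Hypothetical base URL and nested route layout, for illustration only.
BASE = "https://digiagriapp.example.org/api/"

def nested_url(*segments: str) -> str:
    """Compose a nested REST resource path: farm -> field -> sensor data."""
    return urljoin(BASE, "/".join(segments) + "/")

url = nested_url("farms", "7", "fields", "3", "sensordata")

# The API exchanges JSON; a response body might be parsed like this:
payload = '{"field": 3, "sensor": "soil-moisture", "value": 21.4}'
record = json.loads(payload)
print(url, record["value"])
```

The nesting makes authorization natural: a client that may only list `farms/7/` can only ever reach resources under that prefix.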

To obtain sensor data, the server also includes a growing number of services for working with data providers, of which only some are currently implemented. The Message Queue Telemetry Transport (MQTT) provider is a daemon that continuously listens to a broker on several topics in order to receive data as soon as they are published; the second service already implemented concerns remote sensing data and uses the SpatioTemporal Asset Catalogs (STAC) specification to retrieve them. STAC is a common language for describing geospatial information, so that it can be more easily worked with, indexed and discovered.
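The "listen to a broker on several topics" part relies on MQTT's topic wildcards. A minimal sketch of the matching rule, per the MQTT specification's `+` and `#` wildcards; the topic layout is hypothetical and this is not the actual DigiAgriApp daemon (which would use an MQTT client library against a broker):

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Return True if an MQTT topic matches a subscription pattern.

    '+' matches exactly one level, '#' matches all remaining levels
    (per the MQTT specification's wildcard rules).
    """
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True          # '#' swallows the rest of the topic
        if i >= len(t_parts):
            return False         # pattern is longer than the topic
        if p != "+" and p != t_parts[i]:
            return False         # literal level mismatch
    return len(p_parts) == len(t_parts)

# Hypothetical topic layout: one branch per farm and sensor type.
print(topic_matches("farm/+/soil/#", "farm/7/soil/moisture/raw"))  # True
```

A single subscription such as `farm/+/soil/#` thus covers every soil sensor of every farm, which is why one daemon can serve all providers.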

The client side, in turn, is developed using Flutter, an open-source software development kit for graphical interfaces based on Dart, a programming language designed for client development. Flutter can build cross-platform applications and was chosen precisely for its ability to produce applications that run on all the major platforms.

All the code is released as free and open-source software under the GNU General Public License Version 3; it is available in the DigiAgriApp repository on GitLab, and the client application will also be published in the main mobile app stores.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
17:30
17:30
60min
GFOSS.it APS members' assembly
Sala Videoconferenza @ PoliBa
18:30
18:30
150min
Welcome party
Sala Videoconferenza @ PoliBa
08:30
08:30
60min
Participant registration
Sala Videoconferenza @ PoliBa
09:30
09:30
15min
Open multitemporal Earth Observation data for land surface albedo estimation in urban areas
Alessandra Capolupo, Carlo Barletta

Carlo Barletta, Alessandra Capolupo, Eufemia Tarantino

Nowadays, data in an open format, easily accessible and freely usable and shareable by anyone for any purpose, play an important role due to the social and economic impact they can produce, for instance by fostering the development of new services based on them, as well as transparency and democratic, participatory processes in public policies. In the field of geographic information and Earth Observation (EO), the satellite images collected by the Landsat and Sentinel missions are the most typical example of open data. The former, provided by the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), have a geometric resolution of 30 m and have been accessible for decades, whereas the latter, released by the European Union's Copernicus programme, offer a resolution of up to 10 m and have been available since 2015. According to the literature, both are useful for investigating and monitoring natural resources as well as environmental phenomena occurring on the Earth's surface, allowing numerous surface environmental variables to be assessed at local and regional scales. Among these, land surface albedo, which represents the capability of a surface to reflect incident solar radiation, is a useful parameter for climatic and hydrological studies, in both urban and rural contexts. Moreover, the growing attention to the effects of climate change and urbanization on the environment and territory, such as the Urban Heat Island (UHI) phenomenon, desertification and drought, makes it necessary for these sources of information to be freely and easily available to citizens, researchers and decision-makers.

The objective of this study is to estimate broadband land surface albedo and its spatial and temporal variability using accessible data from the Landsat 8 and Sentinel-2 satellites over two separate study areas: the city of Bari, in Southern Italy, and the city of Berlin, in North-eastern Germany. Because these two pilot sites have very different geomorphological features, they allow the research conclusions to be generalized independently of the environmental context. For this purpose, several Landsat 8 and Sentinel-2 satellite images, very close in acquisition time and date and collected in different seasons from 2018 to 2019, were used. Furthermore, the performance of the two implemented algorithms, namely the Silva et al. approach for Landsat 8 data and the Bonafoni et al. technique for Sentinel-2 data, was assessed and statistically compared. Urban Atlas 2018 land use/land cover (LU/LC) class vector data, provided in an open format by the Copernicus land monitoring service, were used to better explore the variability of the albedo within each case study. These data were processed in the Google Earth Engine (GEE) platform, which is free to use for research and non-commercial purposes and consists of an integrated data catalogue mainly composed of open raster and vector data, e.g. Landsat and Sentinel images. The catalogue, updated daily, is directly connected to an interactive programming environment in which satellite images can be processed by developing custom code in JavaScript or Python. Most of the available tools are open source. The statistical analysis, on the other hand, was carried out in the free and open-source R environment.
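Narrowband-to-broadband albedo conversions of the kind both algorithms perform reduce to a weighted sum of per-band surface reflectances. The sketch below uses purely illustrative band names and weights, not the actual Silva et al. or Bonafoni et al. coefficients, to show the shape of the computation:

```python
def broadband_albedo(reflectance: dict, weights: dict) -> float:
    """Weighted sum of narrowband reflectances -> broadband albedo.

    The weights must refer to the same bands as the reflectances;
    real coefficients depend on the sensor and the published method.
    """
    return sum(weights[b] * reflectance[b] for b in weights)

# Illustrative example with made-up weights for three bands.
rho = {"blue": 0.08, "red": 0.12, "nir": 0.30}
w = {"blue": 0.3, "red": 0.3, "nir": 0.4}
alpha = broadband_albedo(rho, w)
print(round(alpha, 3))  # 0.18
```

In GEE the same weighted sum would be expressed as a per-pixel band expression over an image collection rather than over a Python dictionary.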

For both case studies, the investigation revealed that the Landsat 8 approach produced somewhat higher mean albedo values than the Sentinel-2 methodology. The statistical comparison indicated that, for the Bari site, all of the returned Landsat 8 and Sentinel-2 albedo maps were strongly correlated, with a correlation coefficient (ρ) higher than 0.84; for Berlin, instead, a medium-high correlation was found (ρ > 0.78). Additionally, for both sites, the findings appear more correlated in spring and summer scenarios than in other seasons. Indeed, the correlation between Landsat 8 and Sentinel-2 images appears to follow the same seasonal pattern, though more satellite images from more years should be investigated for a more accurate interpretation. The reliability of the two approaches will be evaluated in the future through the collection of ground control points in field campaigns. These new data will enable the most accurate findings to be identified and the methods to be calibrated to increase their reliability.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
09:45
09:45
15min
Potree platform for infrastructure inspection: an open-source WebGL solution supporting the survey and defect analysis of bridges and viaducts
Federica Gaspari

The need for inspections aimed at documenting the condition of bridges and viaducts highlights the importance of identifying tools for effectively sharing and processing georeferenced 3D products through WebGL technologies that are flexible, customizable and accessible even to non-specialist users. In particular, Potree (Schütz, 2016), an open-source JavaScript library, enables the exploration of point clouds and meshes in support of decision-making procedures for the maintenance and monitoring of road infrastructure (Gaspari et al., 2022).

In this context, a collaboration with the provinces of Piacenza and Brescia led to the development of customized web platforms for exploring georeferenced 3D models of bridges surveyed by laser scanner and drone photogrammetry. The study focused in particular on identifying and implementing, within the Potree environment, functions useful both for documenting the bridge geometry and for compiling defect sheets, as required by the "Guidelines for risk classification and management, safety assessment and monitoring of existing bridges" of the Italian Ministry of Infrastructure and Transport (MIT, 2020).

In response to the needs of the managing authorities, a shareable standard structure for the web platform was therefore defined, the Potree platfOrm for iNfrasTructure Inspection (PONTI), comprising three essential customizable features: visualization of the point cloud of the surveyed structure, loading of oriented images onto the model, and placement of annotations on significant elements of the structure. The template and its instructions are freely available in a dedicated GitHub repository (https://github.com/labmgf-polimi/ponti).

Loading the point cloud into the Potree web viewer, both in RGB visualization and classified by structural element, makes it possible to use measurement tools for coordinates, lengths and surfaces, useful both for compiling Level 0 census sheets and for the attributes of the "Bridges and Viaducts" layer of the provincial Road Cadastre. Through appropriate advanced modifications of Potree's JavaScript code, it is also possible to add functions for filtering the visualization of the cloud by the structural element of interest.

The integration of high-resolution drone images, properly oriented with respect to the 3D model, provides an immediate and intuitive way to identify, both qualitatively and quantitatively, the detectable defects and their location on the structure, such as seepage or cracks (Ioli et al., 2022). This feature proves particularly effective for bridges with difficult site access, allowing an accurate visual inspection of the structure even after the survey.

Finally, the use of annotations and labels placed on structural elements of interest makes the identification of critical bridge components more immediate. A further customization of this feature also allows the integration of click-activated actions, adding to the Potree model changes of perspective or direct links to external archives, to facilitate the direct download of data collected in the field (e.g. original images, point cloud, etc.) and their association with census sheets as required by the Guidelines.

In conclusion, the template's basic features provide a user-friendly web environment for exploring 3D data and, above all, for its shared evaluation, without requiring the local download of dedicated software or advanced data manipulation skills.

Example of Potree implemented for the Province of Piacenza: https://labmgf.dica.polimi.it/piacenzacs/lugagnano/

References

Gaspari, F., Ioli, F., Barbieri, F., Belcore, E., and Pinto, L.; Integration of UAV-LIDAR and UAV-photogrammetry for infrastructure monitoring and bridge assessment, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 2022, XLIII-B2-2022, 995–1002, https://doi.org/10.5194/isprs-archives-XLIII-B2-2022-995-2022

MIT (2020). Linee guida per la classificazione e gestione del rischio, la valutazione della sicurezza ed il monitoraggio dei ponti esistenti

Ioli, F., Pinto, A., and Pinto, L.; UAV photogrammetry for metric evaluation of concrete bridge cracks, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 2022, XLIII-B2-2022, 1025–1032, https://doi.org/10.5194/isprs-archives-XLIII-B2-2022-1025-2022.

Schütz, M., 2016. Potree: Rendering Large Point Clouds in Web Browsers. Master's thesis, Technische Universität Wien

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
10:00
10:00
15min
Applications of UAVs and SfM techniques to assess hydro-geomorphological hazard in a river system
Marco La Salandra

Photogrammetry is one of the most reliable techniques for generating high-resolution topographic data, and it is fundamental for land mapping and for detecting changes in landforms, especially in areas at high hydro-geomorphological risk. In particular, Structure from Motion (SfM) is a photogrammetric surveying technique that addresses the problem of determining the 3D position of image descriptors to generate three-dimensional structures. Thanks to the potential of the SfM process and to the development of Unmanned Aerial Vehicles (UAVs), which allow on-demand acquisition of high-resolution aerial imagery, it is possible to survey large areas of the Earth's surface and to monitor active phenomena through multi-temporal surveys. These tools are therefore key for addressing the geomorphological and hydrological variables of a dynamic system such as a river, in order to accurately understand the level of hazard from instability events (flooding hazard). However, the development of new UAV mapping techniques (e.g. BVLOS flight missions), linked to the growing need to survey larger areas at high resolution for a better understanding of fluvial processes, can lead to the acquisition of very large datasets and constrain the photogrammetric process, owing to the need for high-performance computing resources.
One of the main aspects investigated in this work is the implementation of a photogrammetric workflow based on Free and Open Source Software (FOSS), able to return several high-resolution outputs and to handle large datasets in reasonable times by distributing the most demanding computational steps over computing clusters hosted by the ReCaS-Bari data centre. Results are provided in terms of performance assessments based on different cluster computing configurations and workflow step setups.
Moreover, the influence of the high-resolution outputs of the SfM process on hydro-geomorphological hazard analyses was investigated, providing an original probabilistic hazard assessment approach.
In conclusion, the study confirmed the high value of an integrated UAV-SfM-HPC system, and of the resulting outputs, for the effective management of alluvial environments and, specifically, for the detailed monitoring of the hydro-geomorphological variables that are fundamental for assessing future instability scenarios and for promptly planning emergency management activities in the event of catastrophic events, with significant savings of time and cost.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
10:15
10:15
15min
An open-source GNSS network of low-cost instruments for NRTK positioning: what future and what performance?
Paolo Dabove

Nowadays the use of GNSS devices is common practice for many activities, thanks to the spread of these instruments and the miniaturization of chipsets and antennas. With falling production costs, low-cost GNSS instruments are employed for multiple activities, not only in academia and research but also for professional purposes.
One of the key points is the need for increasingly reliable positioning solutions, very often in real time, over short observation intervals, while achieving a high level of precision and accuracy. One of the most common techniques, still under development, is Precise Point Positioning (PPP), but it requires a few minutes to reach a converged solution with centimetre-level precision. Another possible technique is Network Real-Time Kinematic (NRTK) positioning, developed over the last fifteen years. Based on exploiting networks of permanent GNSS stations, this methodology extends classical real-time kinematic (RTK) positioning to baselines even longer than 40 km. This has increased the use of low-cost GNSS devices for many practical activities, from slope monitoring to precise positioning, even with mobile devices such as smartphones and tablets.
However, these infrastructures are not available everywhere: the cost of building a "traditional" network (composed of geodetic-grade devices) is relatively high, and such networks may not be available, especially in developing areas. For this reason, an interesting alternative is to verify the feasibility of using low-cost GNSS devices as permanent stations, also analysing the positioning performance of low-cost rover receivers.
In this context the Centipede RTK project was investigated: a collaborative, open-source project that aims to create a network of open GNSS stations available to anyone within a given coverage area. The network consists of GNSS stations installed at public institutions or on the premises of private users. It comprises more than 280 permanent stations, most of them covering France, while some others are located in other European countries (e.g. Italy, Poland, Switzerland, Serbia, Slovenia, Germany) as well as on Réunion Island in the Indian Ocean. The project is financially supported by INRAE and, since its launch in 2019, has benefited from resources shared among research institutes, public bodies and private companies.
One of the main limitations of this project is that the network cannot provide differential corrections as network products (such as VRS corrections), but only single-station corrections. In this work, starting from these stations, the authors implemented a system that uses these data as input to network software, in order to achieve an open-source service based on low-cost GNSS devices.
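Single-station corrections of this kind are typically streamed over NTRIP, which rides on HTTP: a client requests a mountpoint from a caster and receives an RTCM stream. The sketch below only formats an NTRIP 1.0 request string; the caster host and mountpoint are hypothetical and no connection is opened:

```python
import base64

def ntrip_request(mountpoint: str, user: str, password: str,
                  host: str) -> str:
    """Format an NTRIP 1.0 request for an RTCM correction stream.

    A caster that accepts the request answers 'ICY 200 OK' and then
    streams RTCM messages for the requested mountpoint.
    """
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "User-Agent: NTRIP demo-client/0.1\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    )

# Hypothetical mountpoint on a hypothetical caster.
print(ntrip_request("STATION1", "user", "secret", "caster.example.org"))
```

In a real setup the formatted request would be written to a TCP socket and the returned RTCM stream fed to the rover or to the network software.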
Two measurement campaigns were then carried out to test the achievable performance in terms of precision and accuracy, considering both static and kinematic positioning, such as pedestrian and vehicular surveys. To make the analyses representative, both geodetic-grade devices and low-cost sensors were tested as rovers, in order to assess the results obtainable today.
The intention is to show the potential of infrastructures such as the one tested, encouraging users of the GFOSS.it APS association and of the Italian community to contribute to a project that can bring benefits at different levels, both scientific and professional.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
10:30
10:30
30min
Coffee break
Sala Videoconferenza @ PoliBa
11:00
11:00
15min
Practical advice for accessibility mapping with OpenStreetMap and open tools
Alessandro Sarretta, Elena De Toni

The accessibility of public spaces is recognized as a priority in city management. At the national level, Law 41/1986 introduced the Piano di Eliminazione delle Barriere Architettoniche (PEBA, Plan for the Elimination of Architectural Barriers) as a tool for accessibility planning and programming, subsequently regulated by some regions through regional laws and resolutions.
In 2019 the municipal PEBA was approved in Padua, the first case of a PEBA that directly used and integrated OpenStreetMap as the database for analysing the current state of accessibility in urban spaces. The same approach has recently been adopted in three other municipalities, where the analysis is still in progress.
This contribution summarizes and presents the data collection workflow and the open tools used for urban accessibility analysis starting from OpenStreetMap data. The main ones, in a non-exhaustive list, are: collection of street-level imagery with Mapillary, to obtain an openly licensed image archive focused on sidewalks and pedestrian paths; field surveys (also participatory) with smartphone applications (e.g. OsmAnd) to collect voice and photo notes, and with instruments (tape measure, level) for detailed surveys of accessibility-related elements (physical characteristics of sidewalks and crossings, obstacles, steps); enrichment of the OpenStreetMap database with the collected information, using editing tools on smartphones (e.g. Vespucci) or computers (e.g. JOSM, iD) and a consolidated tagging scheme; use of open-source GIS software (QGIS) for analysing and representing accessibility information through an automated procedure based on a processing model.
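On the tagging-scheme side of the workflow above, sidewalk and kerb data entered into OpenStreetMap can later be retrieved with an Overpass API query using the standard pedestrian-accessibility tags. The sketch below only assembles the query string; the bounding box values are hypothetical and no request is sent:

```python
# Bounding box (south, west, north, east): hypothetical values near Padua.
bbox = (45.39, 11.85, 45.42, 11.90)

def sidewalk_query(bbox: tuple) -> str:
    """Build an Overpass QL query for sidewalks, crossings and kerbs,
    using the standard OSM tags for pedestrian accessibility mapping."""
    s, w, n, e = bbox
    area = f"({s},{w},{n},{e})"
    return (
        "[out:json][timeout:25];\n"
        "(\n"
        f'  way["highway"="footway"]["footway"="sidewalk"]{area};\n'
        f'  way["highway"="footway"]["footway"="crossing"]{area};\n'
        f'  node["barrier"="kerb"]{area};\n'
        ");\n"
        "out geom;"
    )

print(sidewalk_query(bbox))
```

Such a query can be posted to an Overpass endpoint, and its JSON result loaded into QGIS as the input of the processing model mentioned above.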
The goal is to provide a practical vademecum supporting the drafting and updating of PEBAs that can be used and adapted/improved.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
11:15
11:15
15min
Wikipedia and OpenStreetMap for the dissemination and knowledge of cultural heritage: two case studies
Piergiovanna Grossi

This talk presents two practical cases concerning the use of Wikipedia and OpenStreetMap for the knowledge and dissemination of cultural heritage, proposing them as replicable models.
• The first case concerns the workshops held within the Digital Humanities course of the Department of Foreign Languages and Literatures of the University of Verona: "Wikipedia for the international dissemination of cultural heritage". These workshops, which started in 2021, focus on translating Wikipedia pages about ancient road networks (road sections, bridges, viaducts, milestones, road infrastructure and structures) from foreign languages into Italian. So far about 100 pages have been translated from the English, French, Spanish, German and Russian Wikipedias into the Italian-language Wikipedia. The pages are progressively catalogued and added to an online map of ancient road networks, which can be consulted, downloaded and reused under a free license. The talk will present the structure of the workshops, considered a reproducible model for fostering the transnational circulation of knowledge about cultural heritage, as well as the results achieved and future development plans.
• The second case concerns a Public Archaeology project carried out in 2017 in collaboration with the Soprintendenza Archeologia Belle Arti e Paesaggio for the provinces of Verona, Rovigo and Vicenza, which will be expanded in 2023 thanks to funding from Wikimedia Italia (2023 volunteers call) and the support of the Soprintendenza. The talk will illustrate the completed project, also considered a reproducible model, the results achieved, including visit statistics for the cultural heritage sites before and after the creation of the Wikipedia pages, and the ongoing project.
Reference links
• First case study:
◦ Project page (2022-2023): https://it.wikipedia.org/wiki/Progetto:Coordinamento/Università/UniVR/Laboratori_Wikimedia_2022-2023
• Second case study:
◦ Presentation of the 2017 experience (workshop "Digitalizzazione e riproduzione dei beni culturali: aggiornamenti normativi", Verona, 4 November 2022): https://github.com/piergiovanna/DigitalBeniCulturali/blob/main/Grossi-fruizione-pubblica-wikipedia.pdf
◦ Ongoing project: Wikipedia for the enhancement of archaeological heritage (funded by the 2023 volunteers call): https://wiki.wikimedia.it/wiki/Bando_2023_per_progetti_dei_volontari_/Wikipedia_per_la_valorizzazione_del_patrimonio_archeologico

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
11:30
11:30
15min
OpenStreetMap as a source for the production of government datasets: the case of the Italian Istituto Geografico Militare
Alessandro Sarretta

La raccolta, la cura e la pubblicazione di informazioni territoriali è stata per secoli prerogativa esclusiva delle organizzazioni del settore pubblico. Tuttavia, più recentemente sono emerse nuove fonti dati (ad esempio dal settore privato e generati dai cittadini) che mettono sempre più in discussione il ruolo del settore pubblico come figura predominante nella produzione cartografica. In risposta a ciò, gli enti cartografici governativi stanno progressivamente esplorando nuove modalità di gestione, creazione e aggiornamento dei loro set di dati territoriali.
Un numero via via crescente di iniziative del settore privato (su tutti Microsoft, Facebook, Amazon e la recente Overture Maps Foundation) producono dataset di grande rilevanza al fine di migliorare la copertura delle informazioni territoriali governative esistenti attraverso il rilascio di dati aperti generati e fortemente dipendenti da OpenStreetMap (OSM).
Recentemente, l'Istituto Geografico Militare (IGM, uno degli enti cartografici governativi in Italia) ha rilasciato un dataset multistrato, chiamato "Database di Sintesi Nazionale" (DBSN, https://www.igmi.org/en/dbsn-database-di-sintesi-nazionale), che ha lo scopo di includere informazioni territoriali rilevanti per l'analisi e la rappresentazione a livello nazionale per ricavare mappe alla scala 1:25.000 attraverso procedure automatiche. La creazione del DBSN si basa su diverse fonti informative, con i dati geotopografici regionali come fonte primaria di informazioni e i prodotti di altri enti pubblici nazionali (ad esempio le mappe catastali) come fonti aggiuntive. Tra le fonti esterne utilizzate in input per il lavoro di integrazione nel DBSN, OSM è stato esplicitamente considerato e utilizzato.
Currently, the DBSN includes data covering 12 of the 20 Italian regions (Abruzzo, Basilicata, Calabria, Campania, Lazio, Marche, Molise, Puglia, Sardegna, Sicilia, Toscana, Umbria). Data for the remaining regions will be released in the coming months.
One element of novelty, at least in the Italian context, is the release of the DBSN under the Open Database License (ODbL, https://opendatacommons.org/licenses/odbl), a consequence of the fact that the inclusion of OSM data requires derived products to be released under the same license.
The DBSN schema, which is a subset of the specifications defined in the "Catalogo dei dati territoriali - Specifiche di Contenuto per i DB Geotopografici" (Decreto 10 novembre 2011) and is composed of 10 layers, 29 themes and 91 classes, was compared with the OpenStreetMap specifications, selecting two main themes (buildings and roads), which were analyzed through a set of Python scripts available under an open license at https://github.com/napo/dbsnosmcompare.
First, the percentage of buildings and roads in the IGM database for which OSM was used as the primary source of information was analyzed. The percentage of buildings derived from OSM is minimal, with values below 2%; for roads, the differences between regions grow larger, ranging from nearly 0% to more than 90%. Second, the area covered by buildings and the length of roads were computed in both the IGM and OSM databases to assess the completeness of OSM against the official IGM dataset. Across the 12 regions, the building area in OSM is on average about 55% of the corresponding area in IGM, while the road-length percentage is about 78%, with large differences between regions. These first results show that the main information source of the DBSN (i.e. the official regional data) varies greatly across the 12 regions, which required IGM to find additional data sources to fill the gaps. OSM plays a secondary role in the integration of buildings into the database, while it shows high potential for contributing road information. The results also show that some features present in OSM are not yet included in the DBSN. This may be due to at least two reasons: (i) the current workflow for selecting OSM features (via tags) does not include some potentially relevant elements; (ii) the (ideally) daily updates of OSM can enrich the database with new information at update frequencies that IGM, and government mapping agencies in general, cannot match.
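The completeness metric used in the comparison above (OSM quantity as a percentage of the official IGM quantity, averaged across regions) can be sketched in a few lines of Python. The per-region figures below are invented placeholders, not the study's actual values:

```python
# Sketch of the OSM-vs-IGM completeness ratio described above.
# The per-region numbers are made-up placeholders for illustration only.

def completeness(osm_value: float, igm_value: float) -> float:
    """Percentage of the official IGM quantity that is present in OSM."""
    if igm_value == 0:
        raise ValueError("IGM reference value must be non-zero")
    return 100.0 * osm_value / igm_value

# building area in km^2 per region: {region: (osm, igm)} -- placeholder data
building_area = {"RegionA": (12.0, 20.0), "RegionB": (30.0, 60.0)}

ratios = {r: completeness(osm, igm) for r, (osm, igm) in building_area.items()}
average = sum(ratios.values()) / len(ratios)
print(ratios)   # per-region completeness (%)
print(average)  # mean completeness across regions
```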
Besides highlighting the importance OSM has achieved as a reference source of geospatial information even for government bodies, by providing evidence of its contribution to IGM's national database, this study also offers insights for improving the OSM database itself through the import of data from the DBSN, taking advantage of the database's release under the ODbL license.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
11:45
11:45
15min
Digging into OpenStreetMap data from the command line
Alessandro Palmas

The OpenStreetMap database, and the "extracts" of it that we will deal with here, keep growing in size and contain ever more data.

Over the years, the developer community has produced several command-line tools that, without requiring us to manage a database but simply by downloading the extracts available online, allow us to perform many useful operations.

We will see how to extract data by category, by bounding box or by arbitrary polygons, or how to extract individual objects given their ID.
For statistics, we can find out which tags an extract contains and how many.
We can also derive a snapshot of an extract at a past point in time, using the "full history" extracts.
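As a minimal illustration of the tag statistics mentioned above, tag keys in a small OSM XML extract can be counted with the Python standard library alone. This is a toy stand-in for what dedicated tools like osmium or osmfilter do on real .pbf extracts; the XML snippet is invented for the example:

```python
# Count which tag keys appear in an OSM XML extract and how often.
# Toy example: real workflows run osmium/osmfilter on full .pbf extracts;
# the XML snippet below is invented for illustration.
import xml.etree.ElementTree as ET
from collections import Counter

osm_xml = """<osm version="0.6">
  <node id="1" lat="41.1" lon="16.8">
    <tag k="amenity" v="cafe"/>
    <tag k="name" v="Bar Centrale"/>
  </node>
  <way id="2">
    <tag k="highway" v="residential"/>
    <tag k="name" v="Via Roma"/>
  </way>
</osm>"""

root = ET.fromstring(osm_xml)
# <tag> elements appear under both nodes and ways; iter() walks the whole tree
tag_counts = Counter(tag.get("k") for tag in root.iter("tag"))
print(tag_counts.most_common())  # e.g. [('name', 2), ('amenity', 1), ('highway', 1)]
```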

Besides these fairly well-known tools (osmosis, osmconvert, osmfilter, osmium), there is a collection of Perl programs that lets us compute the length of the road or hydrographic network of an extract.
It is also possible to obtain reports, in HTML and graphical form, on the changes in an area between two chosen dates.
By combining some of these commands, very large areas can be monitored in a relatively simple way.

In the case of land use/land cover, we can feed QGIS extracts containing only these features in order to run topology checks.

Many of the operations described here can also be performed with online tools, such as Overpass or attic queries, but those require a good knowledge of the language used to structure the queries. The tools described here have fairly simple syntax and do not require powerful computers.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:00
12:00
15min
YouthMappers@Uniba: mapping activities in Puglia
Rosa Colacicco

Founded in April 2021, "YouthMappers@Uniba" is a group of researchers and students of the Department of Earth and Geoenvironmental Sciences of the University of Bari (Italy) with a passion for cartography and open-source software. The group is part of the international YouthMappers network and is the second chapter founded in Italy, after the Polimappers chapter of the Politecnico di Milano.
During its two years of activity, the Apulian volunteer mappers have developed several projects, contributing to OpenStreetMap (OSM). One in particular concerned the mapping of the territory of the Parco dell'Alta Murgia, adding and editing geo-itineraries and sites of particular geological, naturalistic and historical interest. The project, winner of the Wikimedia Italia volunteers call, involved in-situ data acquisition, photo-interpretation of satellite imagery and UAV data acquisition.
The group has also taken part in projects with middle- and high-school students within the PCTO framework (work-based learning pathways): after an initial training phase on the Wikimedia projects (values, editing, copyright), and in particular on Wikivoyage, Wikimedia Commons and OpenStreetMap, the students collected data in the centre of Bari using Field Papers and entered them into the above projects, in English as well.
Beyond these projects, the mappers' work has focused on organizing several mapathons, in particular with university students of the Department of Earth and Geoenvironmental Sciences, while also taking part in courses on transversal skills and informed career orientation.

GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
12:15
12:15
15min
Conference closing
Sala Videoconferenza @ PoliBa
12:30
12:30
30min
Discussion on the next edition
Sala Videoconferenza @ PoliBa
14:00
14:00
120min
Round table "OpenStreetMap, Wikimedia and FOSS4G: synergies and joint projects"
Anisa Kuci


GFOSS.it Contributions
Sala Videoconferenza @ PoliBa
16:00
16:00
30min
Coffee break
Sala Videoconferenza @ PoliBa