Alessandra Capolupo
Sessions
Authors: Claudio Ladisa, Manuel A. Aguilar, Alessandra Capolupo, Eufemia Tarantino, Fernando J. Aguilar.
The use of renewable energy sources in power generation is increasing due to environmental awareness and technological advancements. Solar energy, with its extensive availability and minimal greenhouse gas emissions, is a promising source. However, large photovoltaic (PV) plants require constant monitoring to ensure efficiency and reliability. Remote sensing technology can provide accurate information on a plant's size, shape, and location, reducing costs and increasing monitoring efficiency. The detection of large PV plants can be carried out using various technologies, including satellite imagery, drone imagery, or observation from aircraft. Satellite imagery is particularly advantageous for the detection of large PV plants because it allows data to be acquired over a large area without visiting the site and the plant to be monitored over time without interfering with its activity. Open-source imagery from satellites such as Sentinel-2 (S2) and Landsat 9 has led to a significant increase in remote sensing research on the extraction of PV systems, since the free and public availability of high-quality images with extensive spatial coverage has eliminated the need to buy costly commercial satellite images. Additionally, the revisit frequency, with images acquired every few days, allows for quick and accurate monitoring of areas of interest.
Several studies have recently combined remote sensing with machine learning (ML) methods to develop automatic classification algorithms for PV systems. Most of these algorithms employ different spectral indices as input, such as the Normalized Difference Water Index (NDWI), the Normalized Difference Vegetation Index (NDVI), and the Normalized Difference Bare Index (NDBI). These spectral indices provide useful information on the presence of water, vegetation, and bare soil, respectively, which can be used to identify PV systems more accurately, thus improving classification accuracy. However, no spectral index has been tested specifically for the extraction of PV systems, partly because PV arrays may be installed on many kinds of surfaces, in various environmental and climatic conditions, and with different solar panel sizes and types.
In this regard, the goal of this work was to propose a Photovoltaic Systems Extraction Index (PVSEI) for the detection of PV installations from S2 images in two distinct study areas characterized by the persistent presence of large PV installations: the province of Viterbo (Italy) and the province of Seville (Spain). The PVSEI was developed by combining different S2 bands so as to maximise the spectral difference between the solar panels and their surroundings. For each study area, two S2 images, one taken in February and the other in August, were used to analyse the seasonal variation of the solar panels' spectral signature and to test the PVSEI's accuracy in each of the four scenarios. The image analysis was carried out using an Object-Based Image Analysis (OBIA) approach, since it allows for a more accurate identification of PV systems than the pixel-based method, which analyses individual pixels without taking their spatial arrangement and semantic significance into account. Multi-resolution segmentation was used to create segments of different dimensions based on scale, shape and compactness parameters.
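As an illustration of the band arithmetic behind spectral indices such as the NDVI, NDWI, and NDBI mentioned above, the sketch below computes them from a Sentinel-2 surface-reflectance scene with the Google Earth Engine Python API; the scene selection (dates, location, cloud sorting) is purely illustrative, and the band combination defining the PVSEI itself is intentionally not reproduced, since its formula is not given here.

    # Minimal sketch (GEE Python API): auxiliary spectral indices used as
    # classifier input. Scene selection is illustrative; the PVSEI formula
    # is not reproduced here.
    import ee

    ee.Initialize()

    s2 = ee.Image(ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
                  .filterBounds(ee.Geometry.Point([12.1, 42.4]))  # placeholder point (Viterbo area)
                  .filterDate('2021-08-01', '2021-08-31')
                  .sort('CLOUDY_PIXEL_PERCENTAGE')
                  .first())

    ndvi = s2.normalizedDifference(['B8', 'B4']).rename('NDVI')   # vegetation
    ndwi = s2.normalizedDifference(['B3', 'B8']).rename('NDWI')   # water
    ndbi = s2.normalizedDifference(['B11', 'B8']).rename('NDBI')  # bare/built-up surfaces

    indices = ndvi.addBands(ndwi).addBands(ndbi)
    print(indices.bandNames().getInfo())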
After the objects had been labelled as "PV" or "No-PV", a Decision Tree (DT) classifier was used to evaluate the effectiveness of the PVSEI and its importance in comparison with the other indices used in the literature, in both locations and for both periods. The results of the DT analysis demonstrated the effectiveness of the new index: in three out of four scenarios, the PVSEI was selected for the first split of the tree, and in the remaining scenario it was still the second most important index. The accuracy was assessed using an error matrix calculated both on the entire segmentation dataset (i.e. using all the objects) and on a Test and Training Area (TTA) mask with a 2 m pixel size. Accuracy was evaluated using the Overall Accuracy (OA), the Kappa Index of Agreement (KIA), and the Producer's (PA) and User's (UA) Accuracy for both classes. OA exceeded 98% in all scenarios, both for the segmentation dataset and for the TTA mask. KIA values for the TTA mask ranged from 0.81 to 0.86, while values for the segmentation objects ranged from 0.74 to 0.82. In conclusion, the new index demonstrated favourable outcomes in both study areas, with only a limited number of misclassifications involving bare soil objects whose spectral signature resembles that of certain photovoltaic systems.
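For reference, the accuracy metrics reported above (OA, KIA, PA, UA) can all be derived from a two-class error matrix; a minimal sketch of the standard formulas, with purely invented counts, is given below.

    # Minimal sketch: OA, KIA, PA and UA from a 2x2 error matrix
    # (rows = reference, columns = classification). Counts are invented.
    import numpy as np

    m = np.array([[950, 10],      # reference "PV"
                  [15, 9025]])    # reference "No-PV"

    n = m.sum()
    oa = np.trace(m) / n                                # Overall Accuracy
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement
    kia = (oa - pe) / (1 - pe)                          # Kappa Index of Agreement
    pa = np.diag(m) / m.sum(axis=1)                     # Producer's Accuracy per class
    ua = np.diag(m) / m.sum(axis=0)                     # User's Accuracy per class

    print(f"OA={oa:.3f}, KIA={kia:.3f}, PA={pa}, UA={ua}")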
Authors: A. Capolupo & E. Tarantino
Many studies involving the Earth's physical processes and the representation of environmental systems are computationally time-consuming and, as a result, have a substantial impact on the time needed to collect and manage the data. Over the years, numerous methods for describing surface morphology and enabling quick computational solutions have been developed. Nevertheless, since 1991 the Digital Elevation Model (DEM) has been recognized as the best alternative for attaining this goal because, in addition to its capacity to provide baseline morphological information quickly, it has the distinctive property of being a 2.5-D surface. The quality and trustworthiness of the results obtained from its use are determined by its resolution, elevation accuracy, and shape/topological correctness. Elevation accuracy is normally established by statistically analysing the differences between the DEM and reference datasets such as Ground Control Points (GCPs), whereas shape/topological correctness is typically assessed by verifying the DEM's conformity with some universal principles. The root mean square error (RMSE) is commonly used for the first purpose, whilst DEM derivatives are examined for the second. However, neither approach is without limitations, since their performance is influenced by the quality of the reference data and by the difficulty of measuring DEM realism.
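As a reminder, the first check reduces to computing the root mean square error of the elevation differences between the DEM and the reference points, as in the short sketch below (sample values are invented).

    # Minimal sketch: DEM vertical accuracy against Ground Control Points,
    # expressed as the RMSE of the elevation differences. Values are invented.
    import numpy as np

    dem_elev = np.array([102.3, 98.7, 110.1, 95.4])   # DEM elevations at GCP locations (m)
    gcp_elev = np.array([101.9, 99.2, 109.5, 95.0])   # surveyed GCP elevations (m)

    diff = dem_elev - gcp_elev
    rmse = np.sqrt(np.mean(diff**2))
    bias = diff.mean()                                # mean error (systematic offset)

    print(f"RMSE = {rmse:.2f} m, bias = {bias:.2f} m")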
This is much more difficult when the DEM under consideration covers the entire globe. Even though they are described as homogeneous products, the accuracy of Global DEMs, in terms of both elevation and realism, varies according to geographical location, morphology, land cover, and climate. Furthermore, as satellite stereoscopic technologies, as well as photogrammetric and SAR interferometric methods, have evolved, the number of available Global DEMs has substantially increased. Most of them were collected in different historical periods and, consequently, they may be useful free open-source data for conducting consistent global-scale change detection analyses.
In such a framework, this study aims to investigate the suitability of medium-resolution open-access Global DEMs for evaluating changes in urban contexts between 2000 and 2011. To accomplish this, the main freely accessible Global DEMs were statistically examined and, after selecting the best pair, a change detection analysis was carried out. To assess its accuracy, the findings were compared with the Copernicus Land Monitoring service's land use layers from the same historical periods (https://land.copernicus.eu/). Lastly, this study seeks to estimate and predict the bias caused by building density as a function of the urban fabric type.
The procedure was implemented by writing appropriate JavaScript code on the Google Earth Engine (GEE) web-based platform. Hence, the GEE catalogue was first consulted to determine the available Global DEMs corresponding to the historical period under investigation; once identified, they were imported into the application programming interface and validated using the "internal" technique. As a result, AW3D30 (version 3.2), released in early January 2021, and SRTM DEM V3 were deemed the optimal combination for the research purposes over an 11-year timeframe. They were therefore used as input data for calculating the corresponding DEM of Differences (DoD) and quantifying the alteration of urban environments. Owing to the error propagation law, the resulting DoD contained substantial internal incoherencies, which were subsequently removed statistically by applying Tukey's filter, widely acknowledged as an effective method for identifying and cleaning out internal noise without prior knowledge of it. Still, a significant number of Tukey outliers were identified and eliminated from the DoD, mostly in wooded and hilly zones, owing to the differing quality of the input data. Following that, to reduce misclassification and distinguish noise from real changes, the resulting DoD was further filtered using the Uniformly Distributed Error (UDE) strategy developed by Brasington et al. in 2003. However, the UDE technique, while exploiting a Gaussian distribution of the internal error, does not adapt the filtering threshold to local conditions, resulting in an over- or underestimation of the amount of information to remove. Urban variation was then assessed by combining the filtered DoD with Corine Land Cover (CLC) data. This integration also enabled the statistical investigation and modelling of the DoD error associated with the urban fabric type. When comparing the CLC information with both Tukey's outliers and the UDE noise in urban areas, it was found that the error increased linearly with building density. This implies that the quantification of urban changes could be further improved by correcting the building density bias. In future work, the introduced approach will be enhanced by taking building height into consideration.
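The core of the workflow described above, namely computing the DoD between the two Global DEMs and masking Tukey's outliers, can be sketched as follows with the Google Earth Engine Python API (the study itself was implemented in JavaScript; the region of interest is a placeholder and only the Tukey step is shown, not the UDE filtering).

    # Minimal sketch (GEE Python API): DoD between AW3D30 v3.2 and SRTM V3,
    # followed by Tukey's filter. The AOI is a placeholder; the original
    # workflow was written in JavaScript and also applied the UDE step.
    import ee

    ee.Initialize()

    region = ee.Geometry.Rectangle([16.7, 40.9, 17.1, 41.2])   # placeholder AOI

    srtm = ee.Image('USGS/SRTMGL1_003').select('elevation')    # SRTM DEM V3
    aw3d30 = ee.ImageCollection('JAXA/ALOS/AW3D30/V3_2').select('DSM').mosaic()

    dod = aw3d30.subtract(srtm).rename('DoD')                  # DEM of Differences

    # Tukey's filter: keep values within [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
    pct = dod.reduceRegion(reducer=ee.Reducer.percentile([25, 75]),
                           geometry=region, scale=30, maxPixels=1e9)
    q1 = ee.Number(pct.get('DoD_p25'))
    q3 = ee.Number(pct.get('DoD_p75'))
    iqr = q3.subtract(q1)
    low = ee.Image.constant(q1.subtract(iqr.multiply(1.5)))
    high = ee.Image.constant(q3.add(iqr.multiply(1.5)))

    dod_clean = dod.updateMask(dod.gte(low).And(dod.lte(high)))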
This course focuses on the extraction and production of valuable information for environmental monitoring and land management through the use of open data sources, such as those provided by the Copernicus programme, a European Union initiative aimed at ensuring the dissemination of accurate and reliable environmental data and services to support decision-making processes. This service is an essential tool for managing, monitoring and assessing the environment and its resources, and it can be used by a wide range of users, including policy-makers, researchers and companies, to analyse and tackle the most pressing environmental issues.
After an overview of the Copernicus initiative and its services, the potential of the Google Earth Engine (GEE) platform for processing geospatial big data will be introduced. GEE is a versatile, free cloud platform developed by Google in 2017 to handle geospatial big data, featuring an integrated, continuously updated database that stores the free and open-source geospatial data produced and distributed by the various space programmes. Within it, complex geospatial analyses can be performed and custom maps created, integrating a variety of data sources and tools through code written in the JavaScript or Python programming languages.
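As a preview of the hands-on part, the snippet below shows what a minimal GEE script looks like when written with the Python API: it queries the integrated catalogue for Sentinel-2 scenes over an arbitrary point and counts the matches (dates, coordinates, and the cloud-cover threshold are placeholders).

    # Minimal sketch (GEE Python API) of a catalogue query; all parameters
    # are placeholders chosen for illustration only.
    import ee

    ee.Initialize()

    aoi = ee.Geometry.Point([16.87, 41.12])   # placeholder point of interest

    collection = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
                  .filterBounds(aoi)
                  .filterDate('2023-06-01', '2023-06-30')
                  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))

    print('Matching Sentinel-2 scenes:', collection.size().getInfo())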
The hands-on nature of the workshop means that participants will gain basic experience in extracting relevant information from satellite imagery and other open geospatial data sources.
Some of the topics that will be covered in the workshop are:
• Presentation of the Copernicus programme and of the land monitoring service it provides
• Introduction to Google Earth Engine and its potential for processing and analysing geospatial big data
• Hands-on exercises aimed at analysing and processing Copernicus data and services in the Google Earth Engine environment.
Authors: Carlo Barletta, Alessandra Capolupo, Eufemia Tarantino
Nowadays, data in an open format, easily accessible and characterized by the fact that they can be freely used and shared by anyone and for any purpose, play an important role due to the social and economic impact they can produce, for instance the possibility of fostering the development of new services based on them, as well as transparency and democratic, participatory processes in public policies. In the field of geographic information and Earth Observation (EO), the satellite images collected by the Landsat and Sentinel missions are the most typical example of open data. The former, provided by the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), have a geometric resolution of 30 m and have been accessible for decades, whereas the latter, released by the European Union's Copernicus program, have a geometric resolution of up to 10 m and have been available since 2015. According to the literature, both are useful for investigating and monitoring natural resources as well as environmental phenomena occurring on the Earth's surface, allowing numerous surface environmental variables to be assessed at local and regional scales. Among these, the land surface albedo, which represents the capability of a surface to reflect the incident solar radiation, is a useful parameter for climatic and hydrological studies, both in urban and rural contexts. Moreover, the growing attention to the effects of climate change and urbanization on the environment and territory, such as the Urban Heat Island (UHI) phenomenon, desertification, and drought, makes it necessary for these sources of information to be freely and easily available to citizens, researchers and decision-makers.
The objective of this study is to estimate the broadband land surface albedo and its spatial and temporal variability using freely accessible data from the Landsat 8 and Sentinel-2 satellites over two separate study areas: the city of Bari, in Southern Italy, and the city of Berlin, in North-eastern Germany. Because these two pilot sites have very different geomorphological features, they allow the research conclusions to be generalized independently of the environmental context. For this purpose, several Landsat 8 and Sentinel-2 satellite images, very close in acquisition time and date and collected in different seasons from 2018 to 2019, were used. Furthermore, the performance of the two implemented algorithms, namely the Silva et al. approach for Landsat 8 data and the Bonafoni et al. technique for Sentinel-2 data, was assessed and statistically compared. Urban Atlas 2018 land use/land cover (LU/LC) vector data, provided in an open format by the Copernicus land monitoring service, were used to better explore the variability of the albedo within each case study. These data were processed in the Google Earth Engine (GEE) platform, which is free to use for research and non-commercial purposes and includes an integrated data catalogue mainly composed of open raster and vector data, e.g. Landsat and Sentinel images. This catalogue, updated daily, is directly connected to the interactive programming environment, in which satellite images can be processed by developing custom code in the JavaScript or Python languages. Most of its available tools are open source. The statistical analysis, on the other hand, was carried out in the free and open-source R environment.
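Albedo retrieval schemes of this kind typically obtain the broadband albedo as a weighted sum of the narrowband surface reflectances; the sketch below illustrates that general form for a Sentinel-2 scene in the GEE Python API, with purely illustrative weights that are not the published Bonafoni et al. coefficients, and with a placeholder scene selection.

    # Minimal sketch (GEE Python API): broadband albedo as a weighted sum of
    # Sentinel-2 surface reflectances. The weights are illustrative only and
    # are NOT the published Bonafoni et al. coefficients.
    import ee

    ee.Initialize()

    s2 = ee.Image(ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
                  .filterBounds(ee.Geometry.Point([16.87, 41.12]))  # placeholder (Bari)
                  .filterDate('2019-07-01', '2019-07-31')
                  .sort('CLOUDY_PIXEL_PERCENTAGE')
                  .first())

    # Surface-reflectance bands are stored as integers scaled by 10000.
    refl = s2.select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12']).divide(10000)

    albedo = refl.expression(
        '0.18*B2 + 0.14*B3 + 0.16*B4 + 0.30*B8 + 0.12*B11 + 0.10*B12',
        {'B2': refl.select('B2'), 'B3': refl.select('B3'),
         'B4': refl.select('B4'), 'B8': refl.select('B8'),
         'B11': refl.select('B11'), 'B12': refl.select('B12')}
    ).rename('albedo')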
For both case studies, the investigation revealed that the Landsat 8 approach produced somewhat higher mean albedo values than the Sentinel-2 methodology. Moreover, the statistical comparison indicated that, for the Bari site, all of the returned Landsat 8 and Sentinel-2 albedo maps were strongly correlated, with a correlation coefficient (ρ) higher than 0.84, whereas for Berlin a medium-high correlation was found (ρ > 0.78). Additionally, for both sites, the findings appear to be more strongly correlated in the spring and summer scenarios than in the other seasons. Indeed, the correlation between Landsat 8 and Sentinel-2 images appears to follow the same seasonal pattern, although more satellite images from more years should be investigated for a more accurate interpretation. The reliability of the two approaches will be evaluated in the future through the collection of ground control points in field campaigns. These new data will make it possible to identify the most accurate results and to calibrate the other method so as to increase its reliability.
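The pixel-wise comparison of two co-registered albedo maps reduces to computing a correlation coefficient over their valid common pixels; a minimal sketch in Python is shown below (the study performed its statistics in the R environment, and the arrays here are random placeholders).

    # Minimal sketch: correlation between two co-registered albedo rasters.
    # The arrays are random placeholders; the study used R for its statistics.
    import numpy as np

    albedo_l8 = np.random.rand(100, 100)   # placeholder Landsat 8 albedo map
    albedo_s2 = np.random.rand(100, 100)   # placeholder Sentinel-2 albedo map

    valid = np.isfinite(albedo_l8) & np.isfinite(albedo_s2)
    rho = np.corrcoef(albedo_l8[valid], albedo_s2[valid])[0, 1]
    print(f"correlation coefficient = {rho:.2f}")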