WS: Mobility Data Science & AI – Potentials for Public Transport
ABSTRACT. Background & Motivation:
Against the backdrop of the climate crisis and the resulting need for a sustainable mobility transition, making public transport (PT) more attractive is becoming increasingly important. Innovative data-driven solutions, particularly from the fields of Mobility Data Science and Artificial Intelligence (AI), offer considerable potential here. The workshop addresses this intersection and aims to foster exchange between experts from applied mobility research (in particular the Austrian urban mobility labs), the public transport sector, and AI research.
Current research projects such as AI4PT (Privacy-preserving AI for Sustainable Transport) demonstrate the potential of privacy-compliant AI applications in particular to ease capacity problems, improve comfort and punctuality, and thus make public transport more attractive.
The workshop therefore deliberately focuses on interdisciplinary exchange in order to jointly discuss and further develop opportunities, challenges, and strategies for the successful implementation of data-driven AI in public transport from both a scientific and a practical perspective.
Duration: 1 x 75 minutes
Agenda:
1. Welcome, workshop goals & impulses from current projects (Anita Graser)
2. Panel discussion: AI in mobility research – experts discuss current developments and challenges (planned panelists: Martin Loidl (Uni Salzburg), Joachim Pargfrieder (Siemens Mobility), TBC; moderation: Anita Graser (AIT))
3. Interactive session: Impact Assessment in Mobility Data Science – participants develop ideas in small groups for assessing the impact of AI applications in the mobility domain
4. Wrap-up & next steps
Expected outcome:
In addition to fostering networking between the research fields of AI, mobility, and public transport, the organizers' goal is in particular to produce a position paper that summarizes the results and recommendations of the panel discussion on current developments, potentials, and challenges of AI applications in the mobility sector. This paper is intended to provide concrete recommendations for decision-makers in research, transport planning, and policy, and to serve as a basis for future strategic decisions and research activities.
WS: Multi-temporal Vegetation Change Dynamics in Austria Based on the Analysis of All Sentinel-2 Observations – First Results from the GTIME Project
ABSTRACT. With this workshop, held as part of the FFG-funded project GTIME, we intend to present the first Austria-wide results on vegetation change dynamics and to gather statements and feedback from the attending stakeholders, project participants, and other interested persons. In addition to the presentation of the initial results, participants will also be informed about the integration of an Austria-wide layer, composable from different time slices, that represents vegetation change. This layer is computed, combined, and communicated from all Sentinel-2 data for Austria in a semantic EO data cube. It is thus possible to integrate this information into existing workflows and processes and to combine it, as a multi-temporal change map, with other, more static, data.
This layer unites the detection of vegetation across multiple time periods in an innovative representation and thus enables new insights into vegetation change in Austria; it can be combined with most existing themes on the Austrian GTIF-AT platform as well as with non-public data. The workshop is also intended to collect ideas for cross-domain use cases. In a hands-on session, potential users will gain first impressions of the layer's information content, its integration into existing workflows, and knowledge of the technical procedures for further analysis. Particular attention is paid to sharing our experience with analyses based primarily on semantically enriched EO data within the semantic EO data cube (Sen2Cube.at) and on dynamic analysis.
The workshop takes place in a 75-minute slot. An initial presentation on the GTIME project and on time-series-based EO analysis using semantic EO data cubes leads into a hands-on session in which the first Austria-wide layer on vegetation change dynamics is presented. Workshop participants will already have the opportunity to carry out introductory, exploratory analyses themselves on prepared data. This is followed by presentations of first experiences with the change layer by users participating in the project. The format also provides room for Q&A, discussion, and exchange of ideas.
About GTIME
GTIME aims to create a dynamic, integrated view of surface change from a dense time series of multi-year Earth observation images. Our approach uses large volumes of EO data in a semantic EO data cube and communicates the results via an Austria-wide layer composable from different time slices, in which color represents user-defined time periods and changes. This visualization encodes terabytes of multi-temporal information in a single, comprehensive, and easy-to-use interpretation layer. The layer-based representation is an approach for communicating multi-temporal analyses to users who are directly involved in the project through workshops and feedback loops. While this approach builds on established geovisualization techniques, we extend it to reveal temporal processes and dynamics hidden in large EO data. Our layer thus shows where changes have taken place, provides information on their intensity and impact, and can contextualize more specific themes. This contrasts with the static basemaps currently used in typical GIS, which often provide only mono-temporal information (e.g. static maps or aerial/satellite images with unclear acquisition dates).
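The idea of color-coding *when* change occurred can be illustrated with a minimal sketch. This is not the GTIME implementation; the three-period setup and the channel assignment are assumptions made purely for illustration:

```python
import numpy as np

# Hypothetical sketch: encode boolean change masks for three user-defined
# periods into one RGB interpretation layer, so a pixel's color indicates
# in which period vegetation change was detected.
def change_composite(masks):
    """masks: list of three boolean (H, W) arrays, one per time period."""
    h, w = masks[0].shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for channel, mask in enumerate(masks):
        rgb[..., channel] = np.where(mask, 255, 0)  # period -> color channel
    return rgb

early = np.array([[True, False], [False, False]])
mid_p = np.array([[False, True], [False, False]])
late  = np.array([[False, False], [True, False]])
layer = change_composite([early, mid_p, late])
# pixel (0,0) changed early -> red; (1,0) changed late -> blue
```

A real multi-year composite would use more periods and continuous change intensities, but the principle of folding a time dimension into color channels is the same.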
The products created in the context of the GTIME project are based on freely available Copernicus Sentinel-2 imagery and thus enable a unique view of the dynamic changes of vegetation in Austria. The results are updated at the temporal coverage rate of the Copernicus Sentinel-2 satellites. The GTIME project offers this comprehensive integration of all Sentinel-2 observations from 2018 onwards (availability of two Sentinel-2 satellites) in a form that can be combined with most existing themes in GTIF-AT and can serve as a temporal complement to other static GTIF-AT layers. Application areas include, among others: monitoring of green spaces, land use change, forest change, environmental/soil protection, and nature conservation. Relevant domains are: energy, mobility, climate-neutral cities, and agriculture and forestry.
ABSTRACT. There are many reasons to learn programming with R. R is one of the most popular analytical scripting languages and has a wide range of applications, for example in statistical analysis, geographic information analysis, and data visualization with charts and maps. R is free and open source, offers more comprehensive functionality than most proprietary solutions, is compatible with the common operating systems, and benefits from a large community.
In this workshop, you will get a first look at the freely available UNIGIS learning materials "Automated Data Processing with R". After completing this workshop, you will have basic R skills that enable you to tap into the broad R ecosystem for your GI projects using the provided UNIGIS learning materials.
Visualizing Urban Energy Efficiency: From Ideation to Geospatial Industry Leaders
ABSTRACT. The world wastes more energy than it uses every year. It is estimated that approximately two-thirds of all primary energy is wasted during the production, transportation, and consumption of fossil fuels. This inefficiency results in over $4.6 trillion USD wasted annually, nearly 5% of global GDP and 40% of global energy spending (RMI). As a geospatial scientist, I've spent the last 17 years investigating what this invisible wasted energy looks like, where it's located, and how much is being wasted. To answer these questions, this presentation will discuss (i) some of the innovative university research done in our Geovation group using high-resolution (25-50 cm) urban thermal remote sensing, (ii) how this research led to a university startup (MyHEAT) that has been in business since 2014, has 28 employees, and holds the largest repository of airborne urban thermal imagery on the planet, and (iii) introduce our new physics-informed machine learning metrics for mapping energy efficiency for millions of homes in Canada and the US.
Geographic Information Systems (GIS) in Disaster Management: Enhancing Efficiency and Coordination in Austria
ABSTRACT. Austria faces recurring natural hazards like floods, landslides, and soil erosion due to its mountainous terrain and variable climate. Managing these risks effectively requires modern technology, and Geographic Information Systems (GIS) have become an essential tool in disaster management. By integrating spatial data and providing up-to-date insights, GIS helps authorities respond more efficiently and make informed decisions.
One of the biggest challenges in Austria's disaster response is dealing with floods and landslides and their cascading effects. Heavy rainfall can quickly lead to increased ground infiltration and hence more unstable slopes, which pose a risk to infrastructure and communities. GIS allows for precise hazard mapping and predictive modeling, helping experts identify high-risk areas before disasters strike. Remote sensing tools like satellite imagery and drones further enhance monitoring, allowing for early warnings and better preparation. In addition to high-resolution imagery, drone technology now enables the creation of detailed 3D mesh models, improving terrain analysis and damage assessment.
Beyond crisis response, GIS plays a key role in all phases of the disaster management cycle: mitigation, preparedness, response, and recovery. In the preparedness phase, GIS-based simulations help identify vulnerable areas and guide emergency planning. Mitigation efforts benefit from GIS-driven risk assessments that support infrastructure reinforcement and improved land-use strategies. When disaster strikes, GIS provides critical situational awareness, enabling authorities to coordinate responses effectively. During the response phase, GIS facilitates tracking of incidents, resource allocation, and evacuation planning. Finally, in the recovery stage, GIS supports damage assessments, reconstruction efforts, and long-term resilience strategies by analyzing post-disaster data.
The evolution of GIS technology has also shifted from traditional desktop applications to cloud-based Software-as-a-Service (SaaS) solutions. This transition has made GIS tools more accessible, scalable, and collaborative, allowing multiple stakeholders to work with the same data in real time. In particular, WebGIS has transformed how spatial data is used in disaster management, offering dynamic and interactive mapping capabilities that go far beyond the static, localized nature of traditional desktop GIS.
Climate change is making extreme weather events more frequent and unpredictable, increasing the need for adaptive disaster management strategies. Combining GIS with artificial intelligence, big data analytics, and drone technology offers new possibilities for hazard monitoring and response. The integration of 3D mesh models from drones enhances spatial analysis, providing a more accurate understanding of affected areas and supporting reconstruction planning.
Despite its advantages, implementing GIS in disaster management comes with challenges. Data integration requires standardization across various systems, and ensuring accessibility for all stakeholders—government agencies, emergency services, and local communities—demands user-friendly interfaces and clear communication. Overcoming these obstacles will maximize GIS’s potential as a vital disaster management tool.
In summary, GIS has transformed how Austria handles natural hazards by enhancing risk assessment, emergency response, and recovery efforts. As environmental threats continue to evolve, leveraging GIS and emerging technologies will be crucial in protecting communities and infrastructure from future disasters. We look forward to sharing further insights into this dynamic and evolving field.
Enhancing Wildfire and Flood Risk Assessment with Random Forest: Insights into Machine Learning in Munich Re's Natural Catastrophe Model Development
ABSTRACT.
Dominik Kienmoser, M.Sc. Geography, M.Sc. Geoinformatics
Author
Dominik Kienmoser, who holds master's degrees in Geography and Geoinformatics from the University of Augsburg, is a consultant and wildfire expert at Munich Re's Natural Perils department at Corporate Underwriting, based in Munich. His key responsibility is developing internal and external wildfire models to assess potential insured losses from wildfires. The focus of wildfire model development is currently on North America, Australia, and select regions in Southern Europe, South America, and Africa. In addition to his existing responsibilities, Dominik Kienmoser is an active member of Munich Re's expert group focused on leveraging Artificial Intelligence (AI) for natural perils modelling. Within this capacity, he has designed machine learning models for assessing wildfire risk, which have also been adapted for application in flood risk assessment.
Extended Abstract
Munich Re's natural catastrophe (NatCat) models rely on accurate and detailed land cover data to assess wildfire and flood risks. However, available land use data often exhibit significant weaknesses, hindering the development of high-quality models. In the context of wildfire risk, there are three primary weaknesses in available land use data. Wildland-Urban-Intermix (WUI) areas, where hazardous fuel densities are high within urban communities, are often misclassified as either urban or vegetated areas, even in high-quality land use data. This oversight is critical, as properties within intermix areas are particularly vulnerable to wildfires. Conversely, properties located within large commercial or industrial areas with ample space between buildings and sealed surfaces are less prone to wildfire risk, as building-to-building ignition is less likely.
In addition to wildfire risk, flood risk assessment also poses significant challenges. Smaller rivers and streams are more prone to flooding due to direct pluvial rainfall, rather than flood regimes of larger rivers. As such, it is essential to detect these areas and develop specific model approaches to accurately assess flood risk.
To address these challenges, Munich Re has developed machine learning models, coded in Python and based on random forest algorithms. The primary objectives of this approach are to ensure consistency across all data manipulations, facilitate faster development, and enable coverage of larger regions. The quality requirements on the input land use data are deliberately minimal (vegetation inside and outside urban areas, built-up areas, streets, densely populated areas, and mean building size for wildfire; flooded areas, digital elevation models, and river catchments for flood), so the approach can easily be transferred to regions with varying data quality.
The reasons for using this approach are twofold. Firstly, it allows the method to be transferable over the whole globe, including regions with poorer data quality. Secondly, it enables the development of models that can be applied to a wide range of geographic areas, without requiring extensive manual adjustments.
Our training regions include areas where manual adjustments had already been made, such as Idaho and California for wildfire, and Italy for flood. These regions cover the full range of land use data required for accurate modeling. Input data is provided in raster TIFF format, with a resolution of 30 meters for wildfire and 10 meters for flood.
The first step in our approach involves analyzing neighborhoods by creating geometric shapes (rectangles, circles, and donuts) and applying statistical analysis (sum, count, mean, median, and mode) to discrete data. The resulting statistical parameters are then saved as new raster data, which is subsequently converted into arrays, each representing a predictor. Our model is trained using these arrays, and the results are refined through an iterative process.
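The neighborhood-statistics step can be sketched in a few lines. The window shape, statistic, and land use codes below are our assumptions for illustration, not Munich Re's code:

```python
import numpy as np

# Minimal sketch of a focal statistic: count cells of a given class in a
# square window around every raster cell, then flatten the result into a
# 1-D predictor array, one value per pixel, ready for model training.
def focal_count(raster, value, radius=1):
    h, w = raster.shape
    padded = np.pad(raster, radius, mode="constant", constant_values=-1)
    out = np.zeros((h, w), dtype=int)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += (padded[radius + dy : radius + dy + h,
                           radius + dx : radius + dx + w] == value)
    return out

landuse = np.array([[1, 1, 0],
                    [0, 1, 0],
                    [0, 0, 0]])          # 1 = urban, 0 = vegetation (assumed codes)
predictor = focal_count(landuse, value=1).ravel()  # one predictor value per pixel
```

The circle and donut neighborhoods mentioned above work the same way, just with a distance mask applied to the window before counting.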
To validate our model, we predict risk probabilities in a testing region and apply a GIS-based aftermath to transform the predicted single pixels into continuous classified areas. This is achieved using a density-based method for the predicted pixels.
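A hedged sketch of that density-based post-processing idea: keep only contiguous groups of predicted pixels that reach a minimum size. The connectivity rule and `min_size` threshold are placeholders, not the values used in the actual aftermath:

```python
import numpy as np
from collections import deque

# Turn isolated predicted-risk pixels into contiguous classified areas by
# keeping only 4-connected groups of at least `min_size` pixels.
def contiguous_areas(mask, min_size=3):
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    keep = np.zeros_like(mask, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:  # breadth-first flood fill of one component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:       # dense enough -> keep the area
                    for y, x in comp:
                        keep[y, x] = True
    return keep

pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],   # lone pixel at (1, 3) is discarded
                 [0, 0, 0, 0]], dtype=bool)
areas = contiguous_areas(pred, min_size=3)
```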
Our approach offers several possibilities for adaptation and application in other areas, including retraining a new model with adapted training data or in a different training area, enabling the model to learn from new data and improve its performance; manipulating input data to increase or decrease the predicted results, allowing for sensitivity analysis and scenario planning; and altering the aftermath process to adjust the classification of predicted areas, enabling the model to be tailored to specific regional characteristics.
By leveraging machine learning techniques, we aim to enhance the accuracy and reliability of our NatCat models, ultimately improving our ability to assess and manage wildfire and flood risks. This presentation or pitch will provide a detailed overview of our machine learning-based approach, highlighting the challenges, solutions, and results of our research, as well as the potential applications and implications of our findings.
Harnessing Object-Based Geospatial Classification for Disaster Risk Management and Hydrological Modeling: A GEE-Based Case Study of Cryosphere Mapping in the Hindu Kush Himalayas
ABSTRACT. Snow and glaciers are critical components of the hydrological system in the Hindu Kush Himalaya (HKH) region, significantly contributing to downstream water resources and food security. However, rapid deglaciation due to global warming and climate change poses serious challenges to water availability and ecosystem sustainability. This study focuses on the improved classification and mapping of snow and glacier-fed basins in the Himalayas of Pakistan, through advanced geospatial techniques and high-resolution satellite imagery to capture seasonal spatio-temporal variability. A classification framework was developed to differentiate snow-covered and glaciated areas into five distinct classes: snow, no snow, debris-covered glaciers, non-debris-covered glaciers, and other land features. Starting with traditional methods such as the Normalized Difference Snow Index (NDSI), the study integrates object-based classification techniques to address challenges in distinguishing snow from water bodies, shadows, and debris-covered surfaces. To enhance accuracy, the classification incorporates the Normalized Difference Water Index (NDWI) and applies a Shepherd segmentation algorithm, creating spatially homogeneous objects corresponding to ground cover features. The methodology was applied to high-resolution optical imagery over the Hunza basin, Gilgit-Baltistan, Pakistan, in Google Earth Engine's cloud computing environment. Classification outputs were validated using a combination of manually digitized maps and independent datasets, showing a marked improvement over pixel-based NDSI methods by reducing misclassification errors. Results reveal the proportional distribution of snow and glacier categories and demonstrate the efficacy of object-based classification frameworks, providing critical insights into the region's glacial dynamics.
This approach, implemented using efficient computational tools, offers significant potential for scaling to larger spatial extents, supporting better water resource management and climate change impact assessments in the HKH region.
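The index calculations named above follow the standard definitions, NDSI = (Green - SWIR) / (Green + SWIR) and NDWI = (Green - NIR) / (Green + NIR). A small band-math sketch (the reflectance values and threshold are made up for illustration, not Hunza data):

```python
import numpy as np

def normalized_difference(a, b):
    # epsilon avoids division by zero on flat, dark pixels
    return (a - b) / (a + b + 1e-10)

# made-up per-pixel reflectances for two pixels
green = np.array([0.8, 0.3])
swir  = np.array([0.1, 0.3])
nir   = np.array([0.2, 0.6])

ndsi = normalized_difference(green, swir)
ndwi = normalized_difference(green, nir)
snow = ndsi > 0.4   # a commonly used pixel-based threshold, refined here by OBIA
```

The object-based step then applies such indices per segment rather than per pixel, which is what reduces the water/shadow confusion the abstract describes.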
Comparative Analysis of Spatial Modeling Approaches for Landslide Susceptibility Mapping: A Study Along the Madan Ashrit Highway, Central Nepal
ABSTRACT. In this study, landslide susceptibility mapping was conducted along the Madan Ashrit Highway (MAH), a critical transportation corridor in Nepal, using two machine learning models, Logistic Regression (LR) and Random Forest (RF). A landslide inventory map comprising 483 landslides was developed using field surveys and satellite imagery, ensuring comprehensive data coverage. The inventory data were divided into training (70%) and validation (30%) datasets for model evaluation. A total of 13 landslide conditioning factors were analyzed, including elevation, slope, aspect, curvature, land use and land cover (LULC), proximity to roads, proximity to streams, topographic wetness index (TWI), stream power index (SPI), and geological characteristics. The susceptibility maps generated were classified into five categories: very low, low, moderate, high, and very high susceptibility zones. Model performance was validated using the area under the curve (AUC) of success and prediction rate curves. The results indicated that the RF model outperformed the LR model, with AUC values of 0.988 and 0.970, respectively. Approximately 24% and 28% of the study area were categorized as high and very high susceptibility zones by the LR and RF models, respectively. Furthermore, slope angle, proximity to roads, LULC, and TWI were identified as the most influential factors driving landslide susceptibility in the region.
The results underscore the importance of combining field data and machine learning techniques to create reliable landslide susceptibility maps. These maps provide valuable tools for engineers, policymakers, and land-use planners to guide sustainable infrastructure development and hazard mitigation strategies, particularly in vulnerable areas along the MAH.
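The AUC validation used above can be computed compactly with a rank-based formula (the probability that a randomly chosen landslide pixel scores higher than a randomly chosen non-landslide pixel). The labels and scores below are illustrative only, not the study's data:

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC (ignores ties, which the toy data below has none of)."""
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y      = [1, 1, 0, 0, 1, 0]              # 1 = observed landslide
scores = [0.9, 0.8, 0.3, 0.1, 0.7, 0.4]  # model susceptibility scores
print(auc(y, scores))  # 1.0: every landslide outranks every non-landslide
```

An AUC of 0.988 for the RF model thus means near-perfect ranking of landslide over non-landslide locations in the validation set.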
Development of a National Debris Flow Database and Susceptibility Mapping for the Kyrgyz Republic
ABSTRACT. The Kyrgyz Republic is a predominantly mountainous country with a high vulnerability to hazardous geological and hydrometeorological processes, including debris flows, mudflows, landslides and avalanches. The rate of increase in the exposure of the population, territory and infrastructure to debris flows is outpacing the reduction in their vulnerability, leading to the emergence of new hazard factors and an increase in disaster-related losses. This has significant socio-economic impacts, particularly at the local and community level.
The updated catalogue contains 729 debris-flow-prone rivers, indicating the main morphological characteristics: gradient, length of watercourse, as well as the possible genesis of debris flows: active snowmelt, intense rainfall and glacial origin. The locations of debris flows are also mapped as point files and matched to the debris-flow-prone rivers catalogue. More than 1000 recorded debris flows have been added to the catalogue, including 55 catastrophic outbursts from high mountain lakes. Information on debris flows from 1924 to 2024 is included in the catalogue. Debris flow recurrence maps and flow genesis maps were produced from the updated data. Mudflow susceptibility maps were produced for all Regions and 40 districts using Flow-R modelling. The catalogue of potential hazardous lakes was updated and now contains 368 lakes with information on settlements and facilities at risk. At the district level, susceptibility maps show areas at risk and are essential for decision-makers in the context of development. This information is essential for disaster risk reduction management and is the basis for building a comprehensive national debris flow database for the protection of people and territory.
Mapping Snow Avalanche Susceptibility and Assessing Risk for Hazard Management in the Operation of Highways
ABSTRACT. The Kyrgyz Republic, nestled in the heart of Central Asia, is a landlocked nation dominated by rugged mountainous terrain, particularly the Tien Shan mountain range. This topography not only adds to the scenic beauty of the country but also makes it highly susceptible to a range of natural hazards, including snow avalanches. Avalanches, in particular, pose a severe threat to the safety of both human life and infrastructure, especially in the mountainous regions where the country's roads and transportation networks are often located. The operation of highways in such areas requires a detailed understanding of avalanche susceptibility and robust risk management strategies to ensure the safety and sustainability of critical transportation routes.
Given the increasing pressure on Kyrgyz Republic’s transportation infrastructure and the growing challenges posed by climate change, understanding avalanche risks and adopting effective mitigation measures is more important than ever. The aim of this study is to explore the role of Geographic Information Systems (GIS) and remote sensing technologies in mapping avalanche susceptibility and assessing associated risks. Through these methods, it is possible to enhance the safety of highways and minimize the impact of avalanches on the country’s road systems, ultimately contributing to better disaster management and risk reduction strategies.
Snow avalanches are one of the most devastating natural hazards in the Kyrgyz Republic, causing significant damage to infrastructure, transportation routes, and livelihoods. The country's mountainous terrain, particularly in regions with steep slopes, deep snow accumulation, and unstable snowpacks, creates ideal conditions for avalanche formation. The frequency and intensity of these events vary by region, with some areas experiencing avalanches almost year-round. The Western Tien Shan region experiences avalanche activity for 3-4 months, while the Central Tien Shan sees avalanche risks persist for 11-12 months, depending on altitude and local climatic conditions.
Annually, between 800 and 1,500 avalanches occur across Kyrgyz Republic. These events frequently disrupt transportation routes, endanger human life, and damage critical infrastructure such as roads, power lines, and communication systems. Approximately 53% of Kyrgyz Republic's total land area, or roughly 105,000 km², is at risk of avalanches. Notably, there are more than 30,000 avalanche-prone zones in the country, with around 1,000 of these areas posing a direct threat to human life and activities.
VirtuGhan: An Open-Source Virtual Computation Cube for On-the-Fly Earth Observation Data Processing
ABSTRACT. VirtuGhan is a Python-based geospatial data pipeline designed to perform real-time computations on raster tiles by leveraging Cloud-Optimized GeoTIFFs (COGs) and SpatioTemporal Asset Catalog (STAC) endpoints. Unlike traditional data cubes that rely on extensive pre-computation and storage, VirtuGhan dynamically retrieves and processes only the necessary tiles on demand. This approach minimizes data transfers and infrastructure overhead, enabling efficient analysis across multiple zoom levels and time dimensions. The framework supports user-defined band mathematics, multi-temporal analyses, and partial reads from Cloud-Optimized Sentinel-2 data, incorporating a caching mechanism to optimize repeated requests. By focusing on computation over storage, VirtuGhan offers a scalable and cost-effective solution for Earth observation analytics, making large-scale satellite imagery processing more accessible to researchers, analysts, and developers.
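Serving computations per map tile means translating each tile request into a geographic window before reading from the COG. The abstract does not spell out VirtuGhan's tile scheme, so assuming the standard XYZ/slippy-map convention, the bounding box for a (z, x, y) request looks like:

```python
import math

# Convert an XYZ tile address to its WGS84 bounding box (Web Mercator
# tiling assumed): this is the window a partial COG read would cover.
def tile_bounds(z, x, y):
    n = 2 ** z
    lon_min = x / n * 360.0 - 180.0
    lon_max = (x + 1) / n * 360.0 - 180.0
    lat_max = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / n))))
    lat_min = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * (y + 1) / n))))
    return lon_min, lat_min, lon_max, lat_max

print(tile_bounds(0, 0, 0))  # the whole Web Mercator world, lon -180..180
```

Because COGs are internally tiled and HTTP range requests fetch only the overlapping blocks, this per-request window is what keeps data transfer proportional to the viewed area rather than the full scene.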
Efficient Management of Spatio-Temporal Raster Data in Sovereign Cloud Environments for Machine Learning Applications
ABSTRACT. The use of sovereign cloud environments for processing sensitive data is an emerging field, driven by the need for enhanced data privacy and compliance with regional regulations. Volatile costs associated with public cloud services can exceed budgetary limits, while uncertainties in political relationships increase interest in reducing reliance on Big Tech companies. As part of the research project "The temporal change of geospatial data", driven by the Gauss Centre of the Federal Agency for Cartography and Geodesy (BKG), we develop a spatio-temporal raster data management system. This system is designed to support machine learning applications that operate in sovereign cloud environments, such as on-premises Kubernetes.
While current research predominantly emphasizes data retrieval and accessibility, the create, update, and delete operations that are essential for machine learning applications receive less attention. Furthermore, cloud environments are typically used for distributed processing, and REST-based microservices are a key consideration for supporting scaling mechanisms. In a concurrent environment, retrieving and accessing spatio-temporal raster datasets primarily involves read-only operations, which preserve data integrity and are generally scalable. In contrast, modification operations pose more significant challenges. Therefore, this contribution discusses preliminary results of a qualitative and quantitative comparison of four well-established web-service-based approaches to CRUD operations in containerized environments.
This contribution provides an overview of the key concepts, capabilities, and performance of four open-source data management systems: PostGIS raster accessed via PostgREST, Zarr Datacube using MinIO (S3), Cloud Optimized GeoTIFF through MinIO (S3), and Rasdaman configured as a Transactional Web Coverage Service.
PostGIS is an extension that enables the management of spatial and spatio-temporal data within the PostgreSQL database management system. It is widely used due to its robustness and high interoperability. While PostgreSQL itself provides a native database interface, PostgREST adds a RESTful web service layer, allowing users to interact with spatio-temporal data over the web.
Zarr enables efficient storage and processing of chunked, compressed multi-dimensional arrays. When combined with MinIO, leveraging the HTTP-based S3 protocol, Zarr provides scalable, cloud-native operations that support interactive data analysis and robust CRUD capabilities. Its design makes it particularly effective for large-scale, parallel data processing tasks inherent in machine learning applications.
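The efficiency argument for chunked arrays is that a reader fetches only the chunks a slice touches, whether from local disk or S3. A toy sketch of that mapping (the 1-D layout is our simplification for illustration, not Zarr's exact key format):

```python
# Which chunks does a slice [start, stop) of a 1-D array touch,
# given a fixed chunk length? Each returned index corresponds to one
# independently stored (and compressed) object in the chunk store.
def chunks_for_slice(start, stop, chunk_len):
    first, last = start // chunk_len, (stop - 1) // chunk_len
    return list(range(first, last + 1))

# reading elements 950..1049 of an array chunked in blocks of 100
print(chunks_for_slice(950, 1050, 100))  # -> [9, 10]: only 2 of N chunks fetched
```

In n dimensions the same computation runs per axis and the touched chunks are the Cartesian product, which is also what makes writes parallelizable: concurrent writers touching disjoint chunks never contend.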
Cloud Optimized GeoTIFF (COG) offers efficient raster data management on cloud platforms. With MinIO, users can take advantage of high-performance, HTTP-based, S3-compatible object storage, facilitating rapid data access and manipulation. COGs are especially conducive for applications involving large datasets, providing direct in-cloud operation capabilities crucial for machine learning.
Rasdaman stands out with its support for complex queries and transactional operations over large multi-dimensional raster datasets. As a web coverage service, Rasdaman allows for dynamic data handling, making it highly suitable for use cases requiring frequent data updates and transformations, such as those in machine learning environments.
The listed systems are assessed within a rootless container environment operated by the open-source container management engine Podman. Our evaluation focuses on a classification workflow utilizing multi-temporal datasets through distributed, concurrent processing services. These services are interconnected via MQTT, a resource-efficient network protocol, to facilitate synchronization.
Preliminary results from the investigated approaches reveal significant differences in capabilities and performance. Consequently, efficient strategies for managing spatio-temporal raster datasets in sovereign cloud environments depend heavily on specific application requirements and the need for interoperability. In particular, machine learning applications that apply spatio-temporal models can greatly benefit from systems capable of managing both irregular and aligned temporal data, as well as data with various spatial resolutions.
Vineyard3D: Grapevine Localization in 3D Point Clouds
ABSTRACT. With the growing use of modern sensor technology and precision viticulture in the winemaking process, the ability to extract relevant information from large data sets becomes increasingly important. The precise monitoring of plant health can require the geolocation of thousands of individual grapevines. We propose a simple method to automate the localization of grapevine positions in georeferenced LiDAR-recorded 3D point clouds. The method consists of ground removal, grapevine row detection in projected 2D depth maps, and plant detection using 2D occurrence maps. This approach is implemented as part of a GUI-based application, Vineyard3D, to enable practical use. We also provide two registered point cloud datasets of the same vineyard, recorded during winter and summer, as well as manually annotated grapevine positions as ground truth. The proposed method can localize grapevine positions with a mean average error of up to 7 cm.
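The occurrence-map step of such a pipeline can be sketched in a few lines: after ground removal, the remaining points are binned into a 2D grid, and cells with enough hits become candidate plant locations. This is a hedged illustration with made-up toy data, not the Vineyard3D implementation:

```python
# Minimal sketch (hypothetical data and thresholds): bin non-ground points
# into a 2D occurrence map; cells whose point count exceeds a threshold are
# candidate grapevine locations.
from collections import Counter

def occurrence_map(points, cell_size=0.1):
    """Count non-ground points per (x, y) grid cell."""
    counts = Counter()
    for x, y, z in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

def candidate_cells(points, cell_size=0.1, min_points=3):
    counts = occurrence_map(points, cell_size)
    return sorted(cell for cell, n in counts.items() if n >= min_points)

# Toy cloud: a dense cluster near (0.05, 0.05) and one stray point.
cloud = [(0.04, 0.05, 1.2), (0.06, 0.03, 1.5), (0.02, 0.08, 0.9), (2.0, 2.0, 1.1)]
print(candidate_cells(cloud))  # [(0, 0)]
```

In practice the cell size and count threshold would be tuned to the scan density and row spacing of the vineyard.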
3D-Storytelling in Urban Planning - a case study in the context of the re-implementation of a land use plan
ABSTRACT. With interactive 3D city models, abstract map representations from land use plans can be communicated in a clear and easy-to-understand manner as an informal tool for citizens and political committees. This case study shows the real-world use of 3D storytelling as part of the re-implementation of the land use plan ("Flächennutzungsplan") of a Bavarian municipality.
GIS-based Analysis of Walloon Building Inventory for Construction and Demolition Waste
ABSTRACT. This study presents a novel framework designed to create a building inventory for estimating construction and demolition waste at a regional scale. Addressing the challenges posed by insufficient demolition statistics and incomplete high-resolution building data, the approach integrates various Geographical Information System (GIS) data sources. In addition, we propose conducting a buffer and jointness analysis and a sampling survey to verify the building types and discuss possible reasons for the gaps in the building data. Based on the spatial join analysis, the study demonstrates the potential of utilizing the built inventory to detect demolished buildings.
Show me Where! Geographic-Aware Augmented Reality (GeoAR) Applications
ABSTRACT. 1. Introduction
Due to the rapid developments in digitalization, the amount of collected information is constantly growing. Thus, people are confronted with overwhelming amounts of information in their daily lives. To compensate for this, it becomes increasingly important to provide relevant, situated and context-based information. What is considered relevant varies heavily from person to person, depending on the context, even if the geographic extent of interest remains the same. For instance, while one person might be interested in the gas price shown on a gas station board, another might be more interested in information concerning the rest area of that station. Furthermore, the required level of detail of the provided information might differ strongly. For example, in a wayfinding scenario, users in unfamiliar environments might need extra guidance at a specific intersection compared to familiar ones.
2. Capabilities of Geographic-Aware Augmented Reality
Augmented Reality (AR) is a technique that alters reality with virtual content to create a coherent whole (Doerner et al., 2022). Virtual information can be dynamically adjusted to the user's needs and their surrounding environment. However, state-of-the-art AR devices cannot sense the physical world beyond the personal space (using SLAM methods), making it impossible to provide augmented information at further distances. Galvão et al. (2024) presented a novel solution to this issue: the introduced GeoAR framework enables geographic-aware AR, allowing users to visualize georeferenced objects at their precise geographic location. The placement is independent of the position and movement of the user. Moreover, the authors report a high accuracy, with a positional degradation ranging from 0.2 cm/m to 2.6 cm/m, which is achieved using a GNSS antenna and ground correction. After an initial calibration, such GeoAR systems can localize themselves in a geographic reference frame even when satellite reception is no longer possible.
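The reported per-metre degradation can be translated into absolute placement error at a given augmentation distance with simple arithmetic; the distances below are illustrative, only the 0.2-2.6 cm/m range comes from the cited work:

```python
# Back-of-envelope check of the reported GeoAR accuracy: a degradation of
# 0.2-2.6 cm per metre of distance gives the following absolute placement
# error for an object augmented at a given range.

def placement_error_cm(distance_m, degradation_cm_per_m):
    return distance_m * degradation_cm_per_m

for d in (10, 50, 100):
    lo = placement_error_cm(d, 0.2)
    hi = placement_error_cm(d, 2.6)
    print(f"{d:>3} m: {lo:.0f}-{hi:.0f} cm")
# At 100 m the worst case (2.6 cm/m) already amounts to 260 cm, which is
# why calibration and ground correction matter for far-range augmentation.
```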
3. Potential Applications of Geographic-Aware Augmented Reality
GeoAR makes it possible to visualize georeferenced information on, under, and above the ground. In addition to these visualization possibilities, such systems can be used to reverse the visualization process by enabling simple surveying tasks. For example, in an industrial setup, this technology could guide a specific worker in the field to successfully complete their task while eliminating ambiguities, e.g., by highlighting the wall to be painted. Likewise, workers in the field could use this technology to mark areas that need to be inspected (simple surveying).
GeoAR can also be used to enable human-environment interaction, allowing users to interact with surrounding structures and buildings through transparent object overlays. For example, tourists could use this technology to obtain information about structures and buildings of interest, e.g., a gesture in the direction of a specific building could trigger a historical façade retrospective.
The presented technology and its applications have the potential to transform research and industry in many ways. The ability to control, manipulate and steer the information flow can make it possible to eliminate ambiguities, make work processes more effective and efficient, and even reduce the cognitive load of users.
References
Doerner, R., Broll, W., Jung, B., Grimm, P., Göbel, M., & Kruse, R. (2022). Introduction to virtual and augmented reality. In Virtual and Augmented Reality (VR/AR) Foundations and Methods of Extended Realities (XR) (pp. 1–37). Springer.
Galvão, M. L., Fogliaroni, P., Giannopoulos, I., Navratil, G., Kattenbeck, M., & Alinaghi, N. (2024). GeoAR: a calibration method for Geographic-Aware Augmented Reality. International Journal of Geographical Information Science, 38(9), 1800-1826.
Understanding User Engagement and Revisitation for Spatial Decision Support Systems in Pollinator Ecology
ABSTRACT. Spatial decision support systems (SDSS) have potential utility in a wide variety of geospatial domains, including ecological informatics, site selection, urban planning, and hazard mitigation. User-centered design approaches are often used to characterize user needs, to develop prototypes, and to evaluate implementations of SDSS. However, we do not have a wealth of empirical results to draw upon when it comes to understanding why users want to use an SDSS or what qualities of an SDSS are likely to encourage their repeated usage. In this paper we highlight and contextualize results from recent work to design and evaluate an SDSS focused on pollinator health in support of beekeepers. We focus specifically on what factors seem to drive users' desire to use a system more than once. Our findings help pave the way for future work to address this crucial challenge in SDSS design to ensure that our systems become seen as more than single-use platforms.
Deep Learning Insights into Glacier Dynamics in the Austrian Alps (2015–2024): Potential and Challenges
ABSTRACT. Glacier retreat is a critical indicator of climate change, with significant implications for hydrology, biodiversity, and local economies. This study employs an Attention U-Net deep learning model to analyse glacier retreat in selected regions of the Austrian Alps from 2015 to 2024. By integrating Sentinel-2 imagery and a high-resolution Digital Elevation Model (DEM), we quantify ice loss across four major glacier zones: Großglockner, Großvenediger, Sonnblick, and Wildspitze. The model achieved an Intersection over Union (IoU) of 80.56% and a recall of 89.72%, demonstrating its effectiveness in automated glacier mapping. Our results indicate a total glacier area reduction from 154.90 km² in 2015 to 130.56 km² in 2024, equating to a 15.72% loss. These findings align with global glacier retreat trends. Despite data inconsistencies, particularly in 2017 due to temporal mismatches in satellite imagery, the study underscores the potential of deep learning for accurate and scalable glacier monitoring. This research provides a robust framework for future climate impact assessments and adaptation strategies in alpine environments.
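The two reported evaluation metrics can be made concrete with a small sketch on toy binary glacier masks; the mask values below are illustrative, not the study's data:

```python
# Sketch of the two reported segmentation metrics, computed on toy binary
# masks (1 = glacier pixel). Values are illustrative only.

def iou(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

def recall(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp / (tp + fn)

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 1, 1, 0, 0, 0]
print(iou(pred, truth), recall(pred, truth))  # 0.5 0.666...

# The relative area loss follows directly from the rounded figures quoted
# in the abstract (the reported 15.72% was presumably computed from
# unrounded areas):
print(round((154.90 - 130.56) / 154.90 * 100, 1))  # 15.7
```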
Requirements for EO-derived information about glacier retreat impacts on alpine hiking infrastructure
ABSTRACT. Mountaineering and the alpine hiking infrastructure in the Central Alps of Austria face challenges because of glacier retreat and related geomorphological and periglacial processes. These processes increase the frequency of natural hazards and cause rapid changes to the environment, and thereby have a strong impact on efforts for infrastructure conservation and the safety of mountaineers. Alpine infrastructure management needs comprehensive and up-to-date information about glacier retreat, related processes in its proximity, the diverse impacts on alpine hiking infrastructure, and the potential risks for mountaineering. Earth observation (EO) satellite data can facilitate mapping and monitoring the evolution of phenomena related to glacier retreat. To investigate this opportunity, we systematically collected user requirements for assessing the impacts of glacier retreat on alpine hiking infrastructure. We conducted a workshop with local experts (trail keepers, hut guardians, national park administrators, alpine guides, and alpine police) to locate hot spots in our three study areas along the main ridge of the Austrian Alps. For the hot spots, the participants described processes and consequences for infrastructure management and hazards for mountaineering to create a prioritization of urgency. Furthermore, the collected hot spots were characterized by their cause, in order to understand which features must be mapped in detail to address the problems. A total of 42 hot spots were identified on the map, comprising 31 current issues and 11 past issues. The participants assigned the current hot spots to management tasks, i.e. ‘hut maintenance’, ‘trail maintenance’, ‘trail relocation’, ‘trail abandonment’ and ‘no effect on trails’, and rated the frequency of occurrence of problems at each location and the resulting efforts.
Furthermore, the hot spots were grouped into the categories ‘hut-specific topics’, ‘changes in the occurrence of natural hazard events’, ‘phenomena in non-glaciated areas’, ‘phenomena in the glacier forefield areas becoming ice-free’, ‘phenomena on the glacier surface’, and ‘phenomena in summit areas becoming ice-free’. The categories and their descriptions are the basis for defining the content, type and accuracy required for mapping the phenomena relevant to an assessment of glacier retreat impact on alpine hiking infrastructure, and subsequently investigating the performance of EO-based information products.
Assessment of flood susceptibility using AHP and remote sensing-derived products, a case study in Punjab, Pakistan
ABSTRACT. Flood-prone mapping is a valuable approach for disaster risk management, especially in flood-risk regions such as Punjab, Pakistan. This study applies a multi-criteria decision analysis, using the Analytical Hierarchy Process (AHP) and remote sensing-derived products in a Geographic Information Systems (GIS) environment, to assess flood-prone areas across the region. We selected seven flood indicators based on a literature review and weighted them by consulting expert knowledge. Two flood-susceptibility maps were calculated: (1) a 30 m resolution raster and (2) an aggregated map at administrative-boundary level. The model was validated against historical flood events, derived via Sentinel-1 Synthetic Aperture Radar (SAR), to assess the model's reliability. The findings revealed that 43.23% of the study area falls into the high or very high flood-susceptibility classes, especially low-lying areas characterized by proximity to riverbanks and flat slopes covered with cropland. The results can support policymakers and disaster management authorities in enhancing flood mitigation.
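The AHP weighting step can be sketched compactly: indicator weights are derived from a pairwise comparison matrix, here via the common geometric-mean approximation of the principal eigenvector. The 3x3 matrix below is a made-up Saaty-scale example, not the study's seven-indicator matrix:

```python
# Hedged AHP sketch: derive indicator weights from a pairwise comparison
# matrix using the geometric-mean approximation. Matrix values are
# illustrative (indicator A judged 3x as important as B and 5x as C;
# B judged 2x as important as C).
import math

def ahp_weights(matrix):
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

pairwise = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])  # weights sum to 1, ordered A > B > C
```

The resulting weights would then multiply the normalized indicator rasters in a GIS overlay to produce the susceptibility map.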
WS: Visualising and Analysing Copernicus satellite imagery through Copernicus Data Space Ecosystem APIs
ABSTRACT. The widespread availability of satellite imagery from the Copernicus programme has recently been complemented by a state-of-the-art open data infrastructure that hosts dedicated codebases and processing capacity: the Copernicus Data Space Ecosystem. Not merely a data provider, CDSE is an open platform that enables individual users to visualise and analyse satellite imagery using the cloud as its engine, free of charge within a generous quota.
In this short course, we will introduce participants to accessing the data via APIs in the Copernicus Data Space Ecosystem. In an interactive session using a Jupyter Notebook, participants will be guided through how to search, discover, analyse and visualise or download their analysis in the dedicated Copernicus Data Space Ecosystem online JupyterLab instance.
In the first session, participants will discover how to filter their data search by their area of interest, desired time range and even the cloud cover percentage of the satellite imagery. They will then be shown how to visualise the data, calculate spectral indices and even derive statistics such as spectral signature plots and time series analyses without downloading a single pixel.
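One of the spectral indices typically computed in such a workflow is the NDVI; the arithmetic below is shown in plain Python with synthetic reflectance values as an illustration, whereas in CDSE the same computation runs server-side in an evalscript, so no pixels need to be downloaded:

```python
# Illustration of a spectral index of the kind computed server-side in CDSE:
# NDVI from red (Sentinel-2 band B04) and near-infrared (B08) reflectances.
# The input values below are synthetic examples.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Typical reflectances: dense vegetation vs. sparsely vegetated ground.
print(round(ndvi(0.45, 0.05), 2))  # 0.8
print(round(ndvi(0.30, 0.20), 2))  # 0.2
```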
In the second session, participants will be guided through a more advanced use case combining the Sentinel Hub APIs with python data science libraries to produce detailed insights in a workflow relevant to an Austria specific use case.
After this session, participants will be confident enough to start developing their own workflows in the cloud, leveling up their processing and analytic capabilities with the power and scalability of the cloud!
The sessions only require participants to have:
A laptop with access to the internet on which they can access the dedicated CDSE Jupyter Lab instance.
A Copernicus Data Space Ecosystem user account - which they can sign up to before the course for free.
This short course should ideally be organised as two 75 minute sessions: the first session focusing on beginners and the second session focusing on more advanced users who already have experience analysing earth observation datasets. This will enable participants to get maximum value out of the workshop with sufficient time for self-guided exploration and the chance to interact with the instructors.
WS: Introduction to the ArcGIS Platform: Cloud, Desktop, and Mobile Applications
ABSTRACT. The Esri ArcGIS platform is a leading geographic information system (GIS) developed by Esri. It offers comprehensive functions for creating, managing, analyzing, mapping, and sharing data in a geographic context. ArcGIS is used in numerous industries, including urban planning, environmental management, disaster response, transportation, and economic analysis.
In this 75-minute workshop, participants will receive a comprehensive overview of the various applications of the ArcGIS platform, from ArcGIS Online, ArcGIS Enterprise, and ArcGIS Pro to mobile apps and web GIS solutions. The goal of the workshop is to demonstrate the seamless transition between the individual platforms and the integration of the various Esri products. It will show how ArcGIS Pro and ArcGIS Online, mobile applications such as Field Maps and Survey123, and tools like Experience Builder and Dashboards can be linked to create efficient workflows and seamlessly edit and visualize data.
The workshop will focus on a brief introduction to all components of the ArcGIS platform, practical application examples, and an interactive exercise where participants can collect data themselves.
ArcGIS Pro: ArcGIS Pro is Esri's powerful desktop GIS application, offering comprehensive tools for creating, managing, analyzing, and visualizing geographic data. A particular focus of the workshop is the integration of ArcGIS Pro with ArcGIS Online to seamlessly exchange data between desktop and cloud environments. Participants will learn how to import, edit, and synchronize an online layer in ArcGIS Pro to ensure consistent data display.
ArcGIS Online: ArcGIS Online is Esri's cloud-based GIS platform that allows users to create and share maps and applications. It will be shown how to create web maps and applications that can be updated in real-time. An interactive element of the workshop is the use of ArcGIS Survey123, where participants can collect data that is immediately visualized in ArcGIS Online.
ArcGIS Enterprise: A scalable server solution for companies that want to manage large amounts of data and host their own GIS services. This workshop will show the role ArcGIS Enterprise plays in the ArcGIS platform.
WebGIS: WebGIS is an integral part of the ArcGIS platform, enabling GIS data and functions to be provided over the internet. Participants will learn how to use WebGIS to create interactive maps and applications that are accessible from any device. It will be shown how to develop and implement WebGIS solutions to support a wide range of GIS applications.
Esri Mobile Apps: Esri offers a variety of mobile applications that allow users to collect and use GIS data on the go. In this workshop, participants will be introduced to the use of apps like Survey123. They will learn how to collect, synchronize, and analyze data in the field with these apps. It will be shown how mobile applications can be integrated into the ArcGIS platform to create efficient workflows.
This workshop thus offers a comprehensive introduction to the various components of the ArcGIS platform and shows how they can be linked together. Participants will also gain practical insights and can participate in an interactive exercise.
WS: Spatial Simulation in Practical Application
ABSTRACT. Will traffic flow more smoothly in the age of autonomous cars? Can the oak processionary moth plague be brought under control with nesting boxes for tits? Will climate change turn Austria into a dengue risk area? Seemingly simple questions like these are referred to as "wicked problems". Such wicked problems are not easy to answer, because they require an understanding of the underlying processes. For us geoinformatics practitioners this means: it is not enough to model spatial data; we must also simulate spatial processes in order to get to the bottom of these very practical problems and develop solutions. The goal of this workshop is to get to know the method of agent-based simulation modelling through several examples from practice. In the workshop we will work with open-source software. Prior experience with spatial simulation models is not necessary, but general basic programming skills are an advantage. After completing the workshop, you can deepen your skills with free UNIGIS materials, or take the follow-up GIS_Update course on "Spatial Simulation" in order to design and apply models for scenario-based problem solving on your own.
Info: Please be sure to bring your own laptop (+ pre-installed software)
WS: GIP Forum 2025 – an Update on the Graph Integration Platform GIP
ABSTRACT. The Graph Integration Platform GIP ("Graphenintegrations-Plattform") is the public sector's digital reference system for the transport network throughout Austria. Since 2016, the operation, further development and maintenance of the GIP have been handled in productive operation by the association ÖVDAT (Österreichisches Institut für Verkehrsdateninfrastruktur / Austrian Institute for Transport Data Infrastructure); ITS Vienna Region, the ITS competence centre of the federal states of Vienna, Lower Austria and Burgenland, is in charge of the operational Austria-wide GIP service. A good overview of the GIP is provided on the website www.GIP.gv.at.
The GIP is continuously being developed further, applied in new ways, and adapted for special purposes. The GIP Forum at agit_25 offers, on the one hand, an update on the dynamic development of the GIP through 4-5 presentations and, on the other hand, in its form as an informal workshop, ample room for individual interests, explanations and detailed insights. Possible topics include (the final list of presentations is still being coordinated internally):
GIP 2.0; the future and scalability of the GIP, among other things in the context of CCAM (Connected Cooperative and Automated Mobility); usage-strip-accurate (lane-level) routing; the GIP for walking and cycling; the modelling of complex public transport stops; and how the GIP has proven itself in crisis situations.
The GIP Forum 2025 is organised by ITS Vienna Region and can fill an entire 75-minute slot.
Quantification of wall-to-wall timber volume for the federal state of Brandenburg based on LiDAR data
ABSTRACT. As an essential and adaptable natural resource, wood is highly valued in the building industry, which appreciates it for both structural and aesthetic reasons. Because of its durability, strength, and flexibility, wood is used in construction for a variety of purposes, ranging from large-scale commercial buildings to domestic homes. As a renewable resource, wood has many advantages over other building materials such as steel and concrete, including carbon sequestration, decreased energy usage during manufacture, and a lower overall carbon footprint.
Gaining information about the actual timber volume available within a certain area can be very time-consuming, tedious and costly, especially if done with traditional field-based forest mensuration methods. Consequently, there is a growing need for a more efficient way to monitor and measure larger forest areas and to obtain the corresponding structural information. For more than two decades, forestry has been influenced by the adoption of remote sensing technologies, such as Light Detection and Ranging (LiDAR), which provide precise and reliable information about the composition and structure of the forest. A crucial part of forest inventory and management is the estimation of individual tree metrics, such as tree height, diameter at breast height (DBH), and tree crown features.
Wide-scale forest inventories have made rapid progress in recent years, increasingly using Airborne Laser Scanning (ALS) technologies (White et al. 2016). While these technologies have recently proven to be effective, especially given how quickly technologies are developing overall (Hansen et al. 2015), the accuracy of the estimated and derived metrics with respect to the required individual tree measurements remains a major challenge for these methods (Hudak et al. 2014).
The present state-level ALS and biomass estimation study implemented a semi-automated workflow to process and analyse extensive LiDAR point-cloud datasets, enabling the detection of individual trees and the estimation of crucial forest metrics, including above-ground biomass and timber volume, throughout the Brandenburg forests. The results demonstrated that pine trees are the dominant species in the area, which is consistent with the known composition of Brandenburg's forests. The analysis yielded significant insights into the resources and structure of the forest, as well as valuable information for efforts related to monitoring, conservation, and forest management. The study also identified several significant methodological concerns. Over-segmentation of the data set led to artificially elevated estimates of tree numbers, biomass, and timber volume, particularly for deciduous tree species such as oaks. Furthermore, the precision of the results was confounded by the edge effect, which occurred when trees on the borders of LiDAR plots were split into several segments.
These difficulties demonstrate the need to improve the segmentation algorithm, with the objective of reducing over-segmentation, particularly for deciduous trees. To achieve more accurate tree measurements, it is essential to address the edge effect, especially for trees situated close to plot boundaries. To enhance the precision of the estimates, the study emphasises the importance of integrating current LiDAR data with ground-based validation techniques, such as Terrestrial Laser Scanning (TLS).
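One simple mitigation for the edge effect is to flag detected trees that fall within a buffer of the plot boundary so their possibly split segments can be excluded or re-merged. The sketch below uses made-up plot extents and tree positions, not the study's data or its actual processing chain:

```python
# Hedged sketch of an edge-effect filter: trees whose detected stem position
# lies within a buffer of the rectangular LiDAR plot boundary are flagged.
# Plot extent, buffer width, and tree positions are illustrative values.

def near_plot_edge(x, y, plot, buffer_m=5.0):
    xmin, ymin, xmax, ymax = plot
    return (x - xmin < buffer_m or xmax - x < buffer_m or
            y - ymin < buffer_m or ymax - y < buffer_m)

plot = (0.0, 0.0, 100.0, 100.0)
trees = [(50.0, 50.0), (2.0, 40.0), (98.0, 99.0)]
flags = [near_plot_edge(x, y, plot) for x, y in trees]
print(flags)  # [False, True, True]
```

Flagged trees could then either be merged with segments from the neighbouring plot or excluded from the volume statistics.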
The findings of the study indicate that, despite the identified limitations, LiDAR technology has a significant potential for use in the field of forestry. It provides a comprehensive foundation for future assessments, indicating that LiDAR has the potential to be an effective instrument for extensive forest inventory and monitoring with further methodological advancements.
In conclusion, this research effectively calculated the volume of timber and above-ground biomass found in Brandenburg's forests, thereby providing a strong basis for future forestry evaluations that will be more accurate and productive.
ABSTRACT. Alveolar echinococcosis (alveococcosis) is a potentially life-threatening disease caused by infection with a zoonotic parasite (transmitted from animal hosts), increasingly impacting human populations in Central and East Asia. In the case of Kyrgyzstan, local incidences are recorded and collected in a national register, which currently lacks a geographical view. In order to track and monitor the regionalized impact on public health, the prototypical development of an interactive online dashboard is being pursued. This requires database and UX design that supports medical scientists and health officials in prioritizing measures to counteract further spread and in targeting interventions. Research and development towards this important tool is based on a cooperation of information science, geoinformatics and medical scientists.
The PDmon project, with support from the Eurasia-Pacific Uninet, aims at designing, implementing, and validating a prototype architecture for an online reporting, checking and presentation system supporting the regionalized monitoring of parasitic diseases. This interactive platform is intended to support the development and implementation of public health policies as well as specific interventions. Additional online media are aiming at public awareness and communication of a wider context of disease causes and management.
A cloud based geospatial platform (ArcGIS Online) serves as the content management environment facilitating the distributed collection and validation of data, maintenance of regionalized time series, and access to dynamic visualisations as charts and maps. All steps are supported through dedicated web apps based on a common open platform architecture. User interfaces include simple web apps, storymaps and dashboards.
Dashboards present views of statistical information based on spatiotemporal data that allow users to monitor regionalized events, make decisions, inform others, and see trends. Dashboards are designed to display multiple visualizations that are linked together on a single screen. They offer a comprehensive view of data and provide key insights for at-a-glance decision-making. Dashboards support and encourage exploration through interaction with all elements, including lists, charts, maps, indicators and selectors or filters.
These web apps support and interact with the different groups of actors involved in the entire workflow. Data acquisition from medical offices and health centers is being done through simple web forms (although in the early stages of the initiative also based on data conversion from legacy tables). National level monitoring is facilitated via an interactive dashboard, with regional views enabled for use on a sub-national level. For more contextualized communication to audiences like policy-makers or a concerned public, selected views are embedded into geomedia like storymaps or single task apps, including versions for personal mobile devices.
Specific stated objectives of the PDmon initiative include:
- Defining the specific needs of stakeholders regarding information access and interfacing
- Checking existing data structures and designing a data model supporting the required analyses and presentation / interaction media while conforming to the flow of case documentation data
- Establishing a web services schema for flexible access to case statistics per region
- Designing and implementing a monitoring dashboard allowing flexible queries, filtering and selection. This dashboard, or these dashboards, address the needs of general monitoring for policy development, as well as the targeting and prioritization of specific local interventions
- Developing a sample storymap addressing stakeholders from a policy / decision making group
- Based on the above, working with a focus group to identify required changes and to sketch a way forward for a definitive PD monitoring platform
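The "case statistics per region" service mentioned in the objectives boils down to aggregating individual case records into per-region, per-year counts of the shape a dashboard would consume. The sketch below is illustrative only; all field names and records are hypothetical, not the PDmon schema:

```python
# Minimal sketch of regionalized case aggregation for a monitoring
# dashboard. Field names ("region", "year", "disease") and the sample
# records are made-up illustrations, not the actual PDmon data model.
from collections import defaultdict

def cases_per_region(records):
    """Aggregate case records into (region, year) -> count statistics."""
    stats = defaultdict(int)
    for rec in records:
        stats[(rec["region"], rec["year"])] += 1
    return dict(stats)

records = [
    {"region": "Osh", "year": 2023, "disease": "AE"},
    {"region": "Osh", "year": 2023, "disease": "CE"},
    {"region": "Naryn", "year": 2024, "disease": "AE"},
]
print(cases_per_region(records))
# {('Osh', 2023): 2, ('Naryn', 2024): 1}
```

A web service exposing such aggregates per region would let the dashboard query time series without ever transferring individual case records.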
While the project aims at a prototypical implementation, it will ultimately reach beyond a proof-of-concept by supporting urgently needed insights from a science as well as a management perspective. By putting the created tools and interfaces into a real-world application context, a smooth progression into an operational work environment is intended. This will need support from subsequent funding and a broader platform of actors and stakeholders.
The introduction of a GIS program for monitoring echinococcosis and alveococcosis in the Kyrgyz Republic will be a key step in ensuring public health. The data obtained will contribute to improving the quality of medical services and increasing the level of preventative measures against these diseases. PDmon has already successfully started work towards these objectives:
- Creation of a unified national regionalized database containing up-to-date information on cases of echinococcosis and alveococcosis.
- Raising the level of awareness and qualification of district parasitologists in the issues of data registration and processing.
- Monitoring the number of reported cases and improving response to preventative measures and interventions.
- Providing high-quality analytical information for the development and adjustment of state programs for the control of zoonotic diseases.
According to the WHO, at least 270 million people (58% of the total population) are at risk of cystic echinococcosis (CE) in Central Asia and neighbouring countries, with the highest prevalence reaching 10% (ranging from 0.8 to 11.9%) in some communities. Although echinococcosis control programs have been initiated in some countries in Central Asia, control efforts are generally considered fragmented and uncoordinated, and prevalence is largely associated with social factors including limited community awareness. The PDmon initiative aims at countering some of these issues.
Projected Expansion of Aedes albopictus Habitat in Austria: A Climate- and Land Cover-Based Habitat Suitability Assessment
ABSTRACT. Since its initial detection in 2012, the Asian tiger mosquito (Aedes albopictus) has been spreading across Austria (Seidl et al., 2012). This alien species has now been reported from all federal states, and established populations are known from the cities of Vienna, Graz and Linz (Bakran-Lebl and Reichl, 2025). The Asian tiger mosquito is a public health concern because it is a vector for several exotic pathogens that are not transmitted by native mosquito species. Furthermore, as the species becomes established in new regions, there is an increased risk of autochthonous cases of diseases such as dengue, chikungunya, and Zika virus.
Given that the tiger mosquito is still in the early stages of colonization, understanding its current and future habitat suitability in Austria is critical for targeted mitigation and preparedness planning. We present the results of a habitat suitability model based on a review of empirical studies on the species’ climatic and environmental preferences. Results from the literature review were used to parameterise a model based on climate and land cover indicators. Climate (1991–2020) was represented using the SPARTACUS dataset for present conditions, while future projections were based on ÖKS15 climate scenarios for the late 21st century under RCP4.5 and RCP8.5. Land cover indicators — including imperviousness density, woody vegetation, and sealed areas — were sourced from the Copernicus Land Monitoring Service and incorporated to account for factors that facilitate overwintering (e.g. provision of shelter) and breeding (e.g. shade and water-holding containers).
A hierarchical approach was applied to weight climatic and environmental indicators, first computing indices for overwintering, environmental, and climatic suitability before integrating these into a habitat suitability index. Citizen science data (2020–2024) were used to classify the index based on the 75th and 99th percentiles of ranked index values at known presence locations. Climate scenario uncertainty was assessed using 16 ensemble members. The resulting bivariate maps jointly visualize projected habitat suitability and the corresponding uncertainty due to variability across ensemble members.
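The percentile-based classification can be sketched as follows. This is a minimal illustration with synthetic values, assuming the "highly suitable" and "suitable" thresholds are chosen so that 75% and 99% of presence records, respectively, lie at or above them (i.e. the 25th and 1st percentiles of index values at presence locations); the variable names are not from the study.

```python
import numpy as np

def classify_suitability(index_raster, presence_values):
    """Classify a habitat suitability index into three classes using
    thresholds derived from the index values at known presence points.

    'Highly suitable': 75% of presence records lie at or above the
    threshold (25th percentile of presence values); 'suitable': 99%
    do (1st percentile).
    """
    t_high = np.percentile(presence_values, 25)
    t_suit = np.percentile(presence_values, 1)
    classes = np.zeros_like(index_raster, dtype=int)  # 0 = unsuitable
    classes[index_raster >= t_suit] = 1               # suitable
    classes[index_raster >= t_high] = 2               # highly suitable
    return classes

rng = np.random.default_rng(0)
index = rng.uniform(0, 1, size=(100, 100))      # synthetic index raster
presences = rng.uniform(0.6, 1.0, size=500)     # synthetic presence values
cls = classify_suitability(index, presences)
```

With real data, `index_raster` would be the computed habitat suitability index and `presence_values` the index sampled at the citizen-science observation locations.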
While colonization by the Asian tiger mosquito is incomplete in Austria, results suggest substantial increases in suitable habitat by the end of the century. Based on verified observations over the past four years, <1% of Austria is currently classified as highly suitable for Ae. albopictus. This class is equivalent to habitat where 75% of past observations (N = 2,301) were recorded. Based on the 99th percentile of observations, 5.3% (4,400 km²) of Austrian territory is suitable, primarily in urban and semi-urban areas with mild winters. By the end of the century, suitability is projected to increase to 20.8% (17,500 km²) under RCP4.5 and up to 44.8% (37,600 km²) under RCP8.5. However, highly suitable areas remain limited to 2.2% (1,800 km²) and 3.1% (2,600 km²), respectively. These represent core areas where the Asian tiger mosquito is expected to thrive. The ongoing colonization and expanding habitat under a warming climate pose increasing public health risks, as vector-borne diseases may become more frequently transmitted in Austria.
Towards in-situ near real-time 3D environmental monitoring and geospatial point cloud analysis with open-source software
ABSTRACT. Objective:
Rapid surface changes, particularly in densely vegetated or complex terrains such as landslide-prone sites, require monitoring techniques that deliver timely, actionable insights. Permanent laser scanning (PLS) systems generate high-frequency, multitemporal point clouds that capture rich environmental information. These datasets include temporary variations (e.g., wind-induced tree movement, precipitation) that can obscure critical signals such as rockfalls. Filtering temporary changes from relevant persistent changes (e.g., rockfall) relies on labeled training data of point cloud time series, which are typically not available. We propose the concept of virtual laser scanning (VLS) of dynamic 3D scenes to generate comprehensive reference data for robust calibration and training of change detection methods. We achieve this by using simulated, fully annotated synthetic laser scanning data from a virtual replica of the study site and the PLS system combined with hierarchical change analysis and machine learning.
Methods and Results:
We present an integrated workflow for in-situ near real-time 3D environmental monitoring using open-source point cloud analysis tools. Our workflow consists of three main steps. 1) Scene construction and simulation: A 3D scene is constructed based on a study site, and dynamics (e.g. rockfalls or trees in the wind) are introduced using Blender software. A virtual PLS, configured to mirror the real survey settings, generates a fully annotated point cloud time series using the LiDAR simulator HELIOS++. This approach allows the generation of a far greater variety of scenarios than those represented in existing real-world acquisitions or open datasets, capturing the full spectrum of potential geographic processes. 2) Voxel-based change detection: Voxel-based change detection is calibrated on the synthetic data to rapidly filter temporary, non-relevant changes using VAPC. In this step, parameter calibration on the virtual dataset significantly reduces the area of interest while preserving the changes of interest with a low miss rate. 3) Point-based change analysis: Regions identified as relevant undergo detailed point-based change analysis to accurately classify and quantify detected changes. With this three-step approach we effectively filter out wind-induced noise and areas of no change, reduce overall computation time by up to 97%, and maintain a miss rate of less than 6%. Thus, we significantly enhance 3D data processing efficiency and maintain high accuracy of change detection when monitoring dynamic landscapes. The resulting open-source Python framework also allows users to incorporate their own or alternative methods at any stage of the process.
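The voxel-based pre-filtering of step 2 can be illustrated with a minimal sketch (not the actual VAPC implementation): two epochs are binned into a common voxel grid, and only voxels whose point counts differ beyond a threshold (which, in the real workflow, would be calibrated on the simulated, fully annotated data) are passed on to detailed point-based analysis.

```python
import numpy as np

def changed_voxels(pts_t0, pts_t1, voxel_size=1.0, min_diff=5):
    """Flag voxels whose occupancy changed between two point cloud epochs.

    Points (N x 3 arrays) are binned into a shared voxel grid; voxels
    whose point-count difference exceeds `min_diff` are returned as
    candidates for detailed point-based change analysis.
    """
    origin = np.minimum(pts_t0.min(axis=0), pts_t1.min(axis=0))

    def counts(pts):
        idx = np.floor((pts - origin) / voxel_size).astype(int)
        keys, n = np.unique(idx, axis=0, return_counts=True)
        return {tuple(k): c for k, c in zip(keys, n)}

    c0, c1 = counts(pts_t0), counts(pts_t1)
    return {k for k in set(c0) | set(c1)
            if abs(c0.get(k, 0) - c1.get(k, 0)) > min_diff}

rng = np.random.default_rng(1)
epoch0 = rng.uniform(0, 10, size=(2000, 3))                    # stable scene
epoch1 = np.vstack([epoch0, rng.uniform(0, 1, size=(50, 3))])  # material added
cand = changed_voxels(epoch0, epoch1, voxel_size=2.0, min_diff=10)
```

Here only the voxel containing the 50 added points is flagged; all unchanged voxels are filtered out before any expensive point-based computation, mirroring how the pre-filter reduces the area of interest.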
Conclusion:
The integration of simulated, fully annotated data into the calibration process addresses the challenges posed by the unavailability of labeled reference data in real-world in-situ 3D monitoring. This approach enables calibration and training of methods for rapid detection of hazard-relevant changes while significantly reducing computational requirements.
Enhancing Class-Wise Accuracy in Machine Learning-based Multiclass Land Use and Land Cover Classification: A Comparative Analysis between Different Machine Learning Algorithms
ABSTRACT. Urbanization is rapidly transforming land use and land cover (LULC), impacting the urban environment on a global scale. Hence, continuous and accurate monitoring of LULC changes is essential to address the related challenges. High-resolution satellite sensors and the integration of machine learning (ML) algorithms in remote sensing-based LULC classification have contributed to improved accuracy and efficiency of LULC mapping. Additionally, spectral band-ratio-based indices have proven effective in distinguishing diverse land cover features, further enhancing classification performance as additional source layers.
Despite these advancements, a common challenge in LULC classification is the inconsistency of model performance across different land cover classes. While ML models often achieve high overall accuracy, not all algorithms perform equally well across the individual LULC classes. Hence, relying on a single algorithm for classification compromises the accuracy of specific classes.
This research addresses this issue by evaluating the tree-based Random Forest (RF), the discriminative Support Vector Machine (SVM), and the perceptron-based Artificial Neural Network (ANN) algorithms using different satellite bands for the tropical city of Phnom Penh in 2023. The results showed high weighted F1-scores (a well-accepted performance metric based on producer's and user's accuracy) of 88%, 85%, and 88% for the SVM, RF, and ANN models, respectively. However, the algorithms differ in accuracy across individual classes. Additionally, it was observed that, although adding spectral indices as an additional source layer does not significantly improve the overall performance, it improves the class-wise performance of specific models.
Based on our analysis and results, we propose a framework that collects, for each class, the results from the best-performing algorithm for that class, creating a composite final classification based on F1-score and accuracy. This leads to an improved F1-score of 90%, higher than any individual model could achieve. This approach also enhances the classification accuracy of underperforming classes, including trees, roads, bare soil, and water.
This research enhances the methodological approach of machine learning-based LULC classification, ensuring the best possible class accuracy and improving the overall classification accuracy in multiclass LULC classification. While this study focused on three algorithms, the methodology can be extended to other ML/DL models for LULC mapping and change detection.
GIS-Based Data Management for Long-Term Stability Monitoring in Salt Dome Mines with 3D Visualization
ABSTRACT. The goal of this research was the development and implementation of an advanced data management system for a salt dome mine, designed to monitor and assess the stability of underground conditions over a 70-year period. The system is built on the integration of Geographic Information System (GIS) technology, consolidating and managing a wide range of data, including measurements of underground and surface displacement, convergence, and deformation, to assess mine stability and operational safety.
A core objective of this system is to create a centralized database capable of storing and organizing extensive volumes of data accumulated over several decades. This data comes from geotechnical surveys, monitoring instruments, and historical studies. By systematically collecting and archiving this information, the system allows continuous analysis of the evolving conditions of the mine, supporting both retrospective comparisons and real-time stability evaluations. The database architecture is optimized for long-term data management, ensuring compatibility with future data inputs and seamless integration of new measurements.
A key feature of this system is its ability to process both temporal data (such as time series displacement measurements) and spatial data (such as 3D geospatial mappings of underground structures). The integration of these data sets within the GIS framework allows spatial-temporal analysis, providing a detailed understanding of the structural dynamics of the mine over time. This 3D spatial analysis enables visualization of the evolving conditions and deformation processes of the mine, helping operators better interpret the stability of the underground environment. Visualizing the data in three dimensions helps identify areas of concern, such as subsidence zones, convergence, and underground voids, providing a clearer understanding of potential hazards.
The system database structure is specifically designed to accommodate future data, ensuring that new monitoring results can be seamlessly incorporated. This adaptability allows the system to evolve as new measurement techniques or instruments become available, without disruption. The development process emphasizes data integrity and consistency, ensuring that historical and current datasets are comparable across time periods. Standardized protocols for data collection, storage, and analysis guarantee the reliability of these comparisons, while metadata inclusion ensures the transparency and verifiability of data sources.
In addition, the system integrates critical data on factors that affect the stability of the mine, such as underground cavities, mining holes, and water leakage. Incorporating 3D spatial analysis allows for better understanding of the location and extent of these factors, strengthening the predictive capabilities of the system. The integration of water leakage data is particularly valuable in identifying areas at risk of flooding, further enhancing the system’s ability to proactively manage safety and mitigate risks.
In conclusion, the development of this data management system represents a significant advancement in the monitoring and management of salt dome mines. Using GIS technology and integrating historical and real-time data with advanced 3D spatial analysis, the system provides a comprehensive framework to evaluate the stability of the mine. This approach supports informed decision making, enhances risk management capabilities, and ensures the long-term safety and operational integrity of mining operations.
Automated Segmentation of Historical Cadastral Maps: A Machine Learning Approach to the Franziszeischer Kataster
ABSTRACT. The Franziszeischer Kataster, created between 1817 and 1861, represents one of Europe's most significant historical cadastral mapping projects, covering over 670,000 square kilometers across the Habsburg Monarchy. Despite its significant potential for historical research, ecology, geography, and spatial planning, the utilization of this valuable resource has been limited by accessibility challenges and the complex nature of the source material. This paper presents an automated approach to segment key structural elements from these historical cadastral maps using machine learning techniques. We developed a U-Net-based model for identifying and extracting four primary features: roads, buildings, water bodies, and forests. For the evaluation, we selected the Styrian district of Murtal and achieved an overall Intersection over Union (IoU) score of 96.28% and maintained F1 scores consistently above 95% across all feature classes, despite significant class imbalance in the training data. The results demonstrate the potential for automating the analysis of historical cadastral maps, enabling their integration into modern Geographic Information Systems (GIS) and facilitating comparative studies of land use changes over the past two centuries. This approach represents a significant step forward in making historical spatial data more accessible and open for analysis across various research disciplines.
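For reference, class-wise IoU and F1 (Dice) can be computed directly from predicted and reference label rasters; a minimal sketch with a toy example (not the paper's evaluation code):

```python
import numpy as np

def iou_and_f1(pred, truth, label):
    """Per-class Intersection over Union and F1 (Dice) from label rasters."""
    p, t = pred == label, truth == label
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    iou = inter / union if union else 1.0
    denom = p.sum() + t.sum()                 # F1 = 2|P∩T| / (|P| + |T|)
    f1 = 2 * inter / denom if denom else 1.0
    return iou, f1

# Toy 2 x 3 rasters: one pixel of class 1 is over-predicted.
pred  = np.array([[1, 1, 0], [0, 2, 2]])
truth = np.array([[1, 0, 0], [0, 2, 2]])
iou1, f1_1 = iou_and_f1(pred, truth, 1)       # -> 0.5, ~0.667
```

Averaging these per-class scores (optionally weighted by class frequency) yields the aggregate figures of the kind reported above.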
A Historical GIS approach to analyze the impact of WWII bombings in the City of Treviso (NE Italy)
ABSTRACT. The study of aerial bombings during World War II is a critical aspect of urban planning in Italian cities, particularly in the densely populated regions of Northern Italy. Assessing the risks posed by unexploded ordnance (UXO) requires a precise identification of areas requiring caution. In this context, a GIS-based approach, supported by historical sources, can be a key tool for analyzing and predicting the distribution of war-related remnants.
Despite Italian laws and the ongoing risk of UXO contamination, Italy currently lacks a standardized national methodology for mapping and recognizing war-related damage. This research aims to develop methodologies for creating a GIS database documenting bomb craters resulting from World War II aerial bombardments, using the city of Treviso (NE Italy) as a case study, and to perform an initial investigation of the possible impact of those bombardments from a diachronic perspective.
AustriaDownloader: A Processing Pipeline for Merging Austrian Orthophotos and Cadastral Data for Deep Learning Applications
ABSTRACT. Training deep learning (DL) models for remote sensing applications requires high-quality datasets with precise ground truth (GT) annotations. However, available datasets often lack the required spatial resolution, semantic label accuracy, or open-access rights. To address these challenges, we present AustriaDownloader, a processing pipeline that merges high-resolution Austrian orthophotos with cadastral data from Austria's Digital Cadastral Map to generate DL-ready training datasets. Our pipeline ensures spatial and temporal alignment of imagery and GT data. Moreover, it facilitates flexible dataset generation by customizing parameters such as image and pixel size, spectral bands, and cadastral class filters. The provided datasets cover all of Austria and are published under an open-access license. Our approach streamlines dataset assembly for DL applications, improving accessibility to high-quality training data for geospatial and remote sensing research.
Minimizing trampling effects in sensitive areas with the help of terrain route-planning
ABSTRACT. Terrain route-planning aims to model the navigation process with the help of GIS tools in areas with sparse road networks. Movement in such areas requires off-road travel for both professional and recreational activities, thus the chosen route affects soil and vegetation. This trace may disappear quickly, but in sensitive areas, trampling can cause permanent environmental damage. This study examines off-trail movement in the Vértes Hills, Hungary, using GIS-based modelling and real-world GPS data from the 2024 European Orienteering Championships (EOC). A cost raster model was developed based on terrain characteristics, specially protected areas, and soil depth. Least-cost paths (LCPs) were calculated for selected routes, considering different route-planning priorities—fastest route versus minimizing environmental impact. These modelled routes were compared to real competitor movements recorded via GPS. By incorporating environmental sensitivity factors into the model, trampling impact in sensitive areas can be reduced. Identifying these areas can help to protect them through appropriate management strategies, such as signage or restricted access.
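A least-cost path over such a cost raster can be sketched with a plain Dijkstra search. This is a minimal illustration with a synthetic 4-neighbourhood grid, not the study's actual model: a block of high-cost cells (standing in for a specially protected area) forces the route around it.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a cost raster (4-neighbourhood).

    `cost` holds the per-cell traversal cost (in the study this would
    combine terrain, protection status and soil depth); entering a
    cell adds its cost. Returns the list of (row, col) cells.
    """
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist[cell]:
            continue
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# A sensitive strip (high cost) in the middle forces a detour around it.
cost = np.ones((5, 5))
cost[1:4, 2] = 100.0
path = least_cost_path(cost, (2, 0), (2, 4))
```

Raising the cost assigned to sensitive cells is exactly the lever that shifts the optimum from the fastest route to a route minimizing environmental impact.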
ABSTRACT. Meet & Match 2025 – Connecting Companies and Talent
The event is expected to be held in English; depending on participants, a bilingual format (German/English) can be offered if needed.
Meet & Match brings together companies and early-career professionals in a compact 75-minute format. In a relaxed setting, both sides introduce themselves in short, personal pitches: companies present their profiles and job opportunities – while graduates and students highlight who they are, what they offer, and what they are looking for. Afterwards, topic-based discussion hubs provide space for deeper conversation and individual exchange. An ideal opportunity to connect and explore new career perspectives.
ABSTRACT. ArcMap was Esri's desktop GIS solution for many years. In January 2015, ArcGIS Pro was added as another, more computing-capable desktop solution. As ArcMap will be deprecated in 2026, the migration from ArcMap to the more efficient, computing-capable and user-friendly ArcGIS Pro represents a significant step. Geoprocessing tools and workflows provided by Esri make the migration an easy and smooth process. This workshop will show how to migrate to ArcGIS Pro and focus on the capabilities of ArcGIS Pro.
The workshop consists of three parts, followed by a 15-minute Q&A session:
1. Migration from ArcMap to ArcGIS Pro (15 min):
In this demo it is shown how easy it is to migrate from ArcMap to ArcGIS Pro. Thanks to the possibility of importing ArcMap projects directly into ArcGIS Pro, maps, layouts and all other elements of an ArcMap project are transferred to ArcGIS Pro.
2. ArcMap workflows in ArcGIS Pro (15 min):
In this part of the workshop, classic ArcMap workflows will be shown in ArcGIS Pro. The section will start with an introduction to ArcGIS Pro and an overview of the program structure. The easy switch to ArcGIS Pro is then demonstrated by showing that ArcMap workflows and scripts can be readily transferred using geoprocessing tools.
3. Advantages of ArcGIS Pro and Q&A (30 min + 15 min):
Compared to ArcMap, ArcGIS Pro offers several advantages, which are emphasised in this part of the workshop. Firstly, the focus will be set on 2D and 3D integration, which simplifies the analysis and visualisation of complex two- and three-dimensional elements in ArcGIS Pro. Furthermore, the advanced analysis functions in ArcGIS Pro are discussed, including machine learning, big data analyses and AI-supported analyses, showing the full potential of ArcGIS Pro. Finally, the seamless integration of ArcGIS Pro into the ArcGIS Online Suite will be demonstrated. ArcGIS Pro is tightly integrated with the ArcGIS Online platform, facilitating the sharing and publication of maps and data, which promotes collaboration and data sharing within organisations. To conclude, there will be 15 minutes for questions and possible in-depth discussions.
In conclusion, this workshop shows that migrating from ArcMap to ArcGIS Pro is a worthwhile investment for all GIS users. The advanced features and improved usability of ArcGIS Pro offer numerous advantages that significantly increase the efficiency and quality of GIS work.
WS: Exploring Interdisciplinary Research Questions by Spatially Linking Survey Data and Geodata
ABSTRACT. Is there a connection between individual health and traffic exposure at the place of residence? Do people with lower incomes tend to live in areas with high building density? Does educational attainment affect the quality of life of the residential environment? Anyone who wants to investigate such research questions works interdisciplinarily between the social and spatial sciences. The analysis requires suitable survey datasets and geodatasets to be spatially linked and evaluated, which entails several hurdles: the data are held in separate research data centers and are in part not interoperable. The linking relies on personal and protected data of the survey respondents (e.g. the coordinates of their place of residence), which must not be misused during processing. Furthermore, the linking requires knowledge of geoinformatics tools. To lower these hurdles and simplify research on such questions, the geolinking service SoRa is being developed.
The geolinking service SoRa (https://sora-service.org/) is a technical infrastructure that realizes data integration via several backend interfaces and is implemented with open-source technologies. It is accessed via an R package and can thus be embedded in users' own analysis scripts. Available are both social science research data (the SOEPcore of the German Institute for Economic Research) and spatial science geodata (around 95 Germany-wide indicators in several time slices from the Monitor of Settlement and Open Space Development / IÖR-Monitor). Via the home addresses of the survey respondents, the SOEP data provide a highly accurate spatial reference for linking with geodata. It is also possible to use one's own data. The service offers various, versatile linking methods in order to provide high flexibility in the research questions that can be investigated (including aspects of accessibility or density).
The 75-minute workshop is structured as follows: after a thematic introduction and a round of introductions, a compact overview of the structure and usage options of the new service will be given. The service will be demonstrated live using examples to show its functionalities and to convey an impression of the variety of linking methods and datasets. A hands-on part follows, in which the functionality of the linking service can be tested together and participants' wishes can be addressed; for instance, the user-defined import functions can be tested jointly using participants' own point data (see below). The workshop closes with a discussion round addressing relevant questions (including usability, the offered linking methods, and further functionalities) and a final feedback round.
The workshop is aimed at geoinformatics and spatial science researchers who are interested in interdisciplinary research questions and the use of social science data. A discussion about integrating further spatial linking methods as well as further relevant geodatasets could be particularly appealing. Also relevant is the question of how the geolinking service SoRa can support the spatial linking of two geodatasets through user-defined data imports, in order to simplify everyday workflows there as well. To take part in the digital participation formats (e.g. Mentimeter), participants are advised to bring a mobile device (smartphone or laptop). Participants may bring their own coordinate sets (points in GeoJSON format in CRS EPSG:4326, max. 1,000 points) to link with.
The workshop is intended to generate diverse practical experience and an exchange of ideas, and to bring the newly developed geolinking service SoRa into active use.