Learning Labs: situation, potential and challenges when using digital geomedia
ABSTRACT. There is an increasing need for knowledge and competencies throughout society, including digital and geomedia skills. To meet this demand, non-formal education is crucial, as it complements formal education and is often based on innovative learning approaches and settings. Learning labs are an opportunity for non-formal education to build geomedia skills, which are still little addressed in formal education but play an important role in everyday life. In this respect, several questions on the use of digital geomedia in learning labs are of interest: What role do digital geomedia play here? What challenges and potential exist? To answer these questions, learning labs in German-speaking countries, the so-called DACH region, and the iDEAS:lab, the learning lab of the Department of Geoinformatics, Salzburg University, were examined more closely using an online questionnaire and a focus group discussion. The results reveal that digital geomedia are used in learning labs to different extents and for different purposes, such as supporting teaching in various school subjects and promoting geomedia skills. Further, learning labs offer training opportunities to motivate and enable educators in the use of geomedia, and they offer various opportunities for research and community building.
Young people's participation in urban green planning
ABSTRACT. This presentation, as part of a master’s thesis, focuses on the success factors of youth participation: young people's underlying motivations, the framework conditions, and the methods needed to address young people appropriately in initiatives for urban planning processes in the context of urban green spaces.
The relevance of urban green for young people arises, on the one hand, from the availability of non-commercial places for social interaction and numerous activities. On the other hand, cities have been severely affected by climate change, so urban green plays a major role through its many positive ecological effects. Regarding the Sustainable Development Goals, the Convention on the Rights of the Child, and UNICEF's call for child- and youth-friendly cities, it is not only important to make supposed improvements for young people, but also to actively involve them and listen to their needs and wishes. This social participation of young people is an essential cornerstone of democracy and, if carried out appropriately, also benefits public and political awareness and contributes to social skills.
Therefore, there has been a growing interest in the integration of young people into urban planning processes in recent years. However, in addition to numerous advantages, such as the change of perspective and the generation of new ideas, the participation of young people in urban planning processes also faces challenges, which include the motivation, communication, and cooperation of young people.
Due to these challenges, participatory initiatives often fall short of expectations. Yet there is a lack of research into what specific opinions and needs young people have regarding participation in urban planning processes, what aspects of participation they consider important, and how these can be better implemented. This also raises the question of which methods (e.g., workshops, surveys, GIS) and framework conditions (e.g., online, offline, accessibility) participatory initiatives need in order to address and involve young people appropriately.
Building on a literature review of youth participation in urban planning, young people's views were examined using a mixed-methods approach. Key findings are based on the Q-method, a focus group of young people, and expert interviews with urban planners who have previously worked with young people. Contrary to the original assumption, the initial results from the Q-method and the focus group show that young people often participate in urban planning processes for urban green spaces out of intrinsic motivation (e.g., seeing their ideas implemented, a personal connection, the feeling that their opinion counts, a city worth living in), while extrinsic factors such as rewards or confirmation of participation play a subordinate role. The results of this study can therefore help improve the success of urban planning initiatives and the involvement of young people in them.
GIS for Children: Exploring the World with Maps (Ages 6-12)
ABSTRACT. Children from age 2 to 3 begin to notice routes and landmarks when walking, riding a bike, or traveling by car, even though they cannot yet verbalize it. Research has shown that children aged 3 to 4 can engage in path integration, a skill that involves using self-motion information from one’s own body movements to keep track of spatial position. Despite some studies on these topics, research on the development of spatial thinking in children at this age remains limited, and many experts agree that spatial skills interventions should ideally begin in early childhood. Furthermore, using maps increases children’s graphicacy, an essential life skill that students need to develop from the primary grades.
Although authors disagree about the causes of gender differences in spatial abilities, with interpretations suggesting biological, evolutionary, and strategic reasons for the disparity, these differences should be kept in mind when working with children of this age. One significant factor in the context of children may be the different amounts of time boys and girls spend engaging in spatial activities. From an early age, boys often participate more in activities with high spatial components, such as team sports, LEGO construction, and video games. This exposure leads to a greater "spatial experience" for males compared to females.
For years, instead of focusing on why things are where they are, geography education (or some form of environmental subject in the lower grades) was limited to memorizing sterile facts and definitions. Even though the widespread use of GIS has changed this, there is still much room for improvement. By working with GIS, children may be encouraged to ask more questions, because the process of making a map takes time and gives them space to think about it.
When designing workshops for younger children, it is essential to avoid overwhelming them with excessive information. The use of large-scale and, when possible, giant maps of spaces well known to the children is recommended. The size of the map is important, as it encourages children to use their whole bodies in activities such as measuring, gesturing, and moving, which help to strengthen their spatial skills.
Since some spatial cognitive skills are necessary to operate Geographic Information Systems (GIS) technology, it is generally easier to introduce GIS to older children. Even so, students often find the technical side of GIS less challenging than expected, or not challenging at all, whereas teachers tend to have greater concerns about its implementation. However, GIS alone is insufficient for enhancing geographical knowledge. It requires the involvement of a teacher or another skilled mentor who can facilitate learning by providing prompts and guiding children’s thinking processes.
To prevent frustration and establish a strong foundation for advancing GIS and cartographic knowledge in the future, it is essential to design user-friendly, hands-on materials for young children, along with "real" GIS tools for older ones. Well-structured workshops and the involvement of engaged mentors from an early age are also key. These elements should be integral components of modern curricula and non-formal educational programs for the popularization of geographical science.
Arguments for the Implementation of Remote Sensing and Multispectral Images in Geography Education
ABSTRACT. Remote Sensing (RS) is a relatively new way of using spatial data in geography education. RS integrates information and skills necessary for 21st-century learning, promoting natural literacy and digital literacy. Images from satellites in the visible spectrum are already found in both analog and digital forms in schools. However, freely accessible images utilizing electromagnetic radiation in the non-visible spectrum are not common in the educational environment. These images offer the opportunity to view the Earth in different colors, providing a new perspective on the planet and enabling the study of environmental issues in their complexity and entirety. RS is a multidisciplinary method that connects several subjects taught in primary and secondary schools. This systematic review presents a literature analysis describing specific approaches, methods, and projects involving true- and false-color images in the educational settings of primary, secondary, and high schools. The study's output includes an overview of study materials and implementation methods, as well as perspectives on integrating RS data and methods into education. The SCOPUS and Web of Science databases were used, and articles were searched in English without any specific time frame.
The Development of Switzerland's Federal Spatial Data Infrastructure (BGDI): From Its Beginnings to the New Geoplatform
ABSTRACT. Switzerland's Federal Spatial Data Infrastructure (Bundes Geodaten-Infrastruktur, BGDI) has evolved since its beginnings in the late 1990s into a central element of the national geoinformation strategies (federal and cantonal). Its goal was and remains the efficient provision, harmonisation, and use of spatial information for administration, business, and society. This talk gives a comprehensive overview of the key milestones in the development of the BGDI and highlights in particular the role of standards, norms, and legal foundations in this process. Work is currently underway on a new geoplatform intended to further simplify the use of geodata, make open data more accessible, and foster innovative usage concepts. Particular emphasis is placed on a flexible architecture so that future technological developments and requirements can be integrated optimally.
Open Cadastral Data? Or: How the Clash between the EU's HVD Regulation and German Federalism Creates the Need for OpenData++
ABSTRACT. Since 9 June 2024, "Commission Implementing Regulation (EU) 2023/138 of 21 December 2022 laying down a list of specific high-value datasets and the arrangements for their publication and re-use", the so-called HVD (High-Value Datasets) Regulation, has been in force. It is also related to INSPIRE and, for the data categories geospatial, Earth observation and environment, meteorology, statistics, companies and company ownership, and mobility, it specifies datasets that must be made available for re-use in their latest version under the terms of the Creative Commons BY 4.0 licence or an equivalent or less restrictive open licence, in a publicly documented, Union-wide or internationally recognised open, machine-readable format, via application programming interfaces (APIs) and bulk download.
These datasets include, in particular, buildings, cadastral parcels, reference parcels, and agricultural parcels, which in Germany are captured by the surveying administrations of the federal states as core contents of the Official Real Estate Cadastre Information System (ALKIS). The entry into force of the above-mentioned Implementing Regulation on 9 June 2024 has led 15 of the 16 federal states to publish as open data either the complete ALKIS data according to GeoInfoDok (though, for data protection reasons, without owner information) or the schema variant "Vereinfachtes Datenaustauschschema" (simplified data exchange schema). Only Bavaria opts out, with a questionable justification.
The pitch summarises the current state of ALKIS open data availability in the German federal states and compares it with the situation in Austria and France, particularly regarding licences, APIs, bulk download, and formats, as well as the resulting usage possibilities. Furthermore, it presents ways in which these data can be provided and used as "Open Data++", better than currently implemented by the German federal states.
Deriving LOD2 Building Models and 3D Tree Models from Airborne Laser Scanning Data: The Examples of the City of Salzburg and the Canton of Basel-Landschaft
ABSTRACT. In the context of 3D building and tree model derivation, the company LASERDATA GmbH from Innsbruck develops innovative methods in its software LIS Pro 3D and applies them at scale in service projects for customers. Both 3D model types were produced area-wide for the Canton of Basel-Landschaft (518 square kilometres) and for the City of Salzburg (65 square kilometres).
The building-model part of the short presentation focuses on the creation of building models from airborne laser scanning data up to Level of Detail 2, following a data-driven approach that stays as close to reality as possible and provides information on building volume, various building heights, and the orientation and slope of individual roof faces. The fully automated approach, applied to the examples of Salzburg and the Canton of Basel-Landschaft, differs from existing model-driven approaches based on template roofs both procedurally and in the amount of manual interaction required. In preprocessing steps, the building points are extracted from the laser scanning data and filtered. The roof faces are derived via plane segmentation of the point cloud and generated through vectorisation, polygon simplification, rectification, and a specially adapted gap-closing procedure. The footprint fidelity of the watertight building models, which result from connecting the roofs to the ground, is ensured by a footprint layer and terrain model provided by the client. The results are produced as GIS-ready 3D shapefiles, in the standard city-model formats CityGML and CityJSON, and in OBJ, making them well suited for exchange between users in city and cantonal administration, planning, architecture, and tourism visualisation.
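The plane-segmentation step at the heart of such roof-face extraction can be sketched with a minimal RANSAC plane fit. This is an illustrative stand-in, not the LIS Pro 3D implementation, and the point cloud is synthetic:

```python
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.05, seed=None):
    """Fit the dominant plane in a 3D point cloud with RANSAC.
    Returns ((unit normal n, offset d), inlier count) with n . p = d."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        # Sample three distinct points and build a candidate plane
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ p0
        # Count points within the distance threshold of the plane
        inliers = np.abs(points @ n - d) < dist_thresh
        if inliers.sum() > best_inliers:
            best_inliers, best_model = inliers.sum(), (n, d)
    return best_model, best_inliers

# Synthetic "roof face": a tilted plane plus scattered non-roof points
rng = np.random.default_rng(42)
xy = rng.uniform(0, 10, size=(400, 2))
roof = np.column_stack([xy, 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + 5.0])
outliers = rng.uniform(0, 10, size=(40, 3))
cloud = np.vstack([roof, outliers])

(normal, d), n_in = ransac_plane(cloud, seed=1)
print(normal, n_in)
```

In a real pipeline this fit-and-remove step is repeated per building segment until all major roof faces are separated, before vectorisation and simplification.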
The 3D tree derivation presented in the short presentation is likewise fully automatic and applicable over large areas. The goal is for every captured tree to be represented three-dimensionally, correctly positioned, scaled to size, and textured. The generation of the 3D tree models is based on a multi-stage procedure using the vegetation classes of the laser scanning data and, after filtering steps, begins with computing the height above ground of each vegetation point. Thanks to the high LiDAR point densities, a bottom-up segmentation of the vegetation points into individual trees could be applied for both Salzburg and the Canton of Basel-Landschaft. This also allows the derivation of smaller trees overtopped by larger ones. For the City of Salzburg, tree positions from the municipal tree cadastre were additionally taken into account in the segmentation process. Depending on the position, height, and diameter of the individual tree segments, 3D tree models were positioned and scaled. The tree models comprise the trunk and the amorphous, textured foliage. The latter is converted via a polygon approach into storage-efficient so-called "Bigleafs": using a voxel grid, the observed distribution of vegetation points is represented by a representative surface (the Bigleaf) that approximates the total leaf area and the estimated leaf orientation. The foliage is coloured using the laser scanning intensity values of each individual tree. Semantic information on position, height, crown area, and crown diameter is aggregated directly from the derived models. As with the building models, the results of the tree reconstructions are exported to the target formats CityGML, CityJSON, and OBJ.
ESG Rating: Geodata as Predictive Factors for Real Estate Value Development
ABSTRACT. The goal of the ESG rating was to define a comprehensible rating system for assessing future short-, medium-, and long-term financial risks to real estate value. As a company that develops software for automated real estate valuation, we aim to include such a rating in our system. On the one hand, it has to comply with the EBA guidelines; on the other hand, it should be customisable by users according to their own needs.
The rating includes not only current physical risk zones and (future) climate risk assessments (the "E" in ESG) but also mobility and infrastructure indicators.
Digitisation and Further Processing of Data on Roadworks in Bavaria
ABSTRACT. The talk begins by focusing on the origin and content of the Bavarian roadworks data. The reporting organisations cover the entire spectrum of road construction administration in Bavaria, from the branches of the federal Autobahn GmbH (Germany) down to the municipalities. However, differing organisational structures, and the Free State's varying capabilities for collecting and digitising roadworks data in the central roadworks integration system "ArbIS", currently still lead to heterogeneous coverage of the non-classified road network, and in part also of the lower-level classified road network. Various measures are being taken to close these gaps.
The talk then discusses how these data are processed. The Free State has two central systems at its disposal, whose functions are explained: "ArbIS" and the "Traffic Information Center – TIC" (by GEWI). Each system is to be regarded as independent, and they use different network bases for locating roadworks information. Nevertheless, they ultimately form a system network in which data transfer, quality assurance, and closely coordinated communication (must) interlock to achieve the goal of comprehensive and up-to-date traffic information.
In addition, the data hand-over processes between the two systems mentioned above, the state's own traffic information system "Bayerninfo", and external partners and data consumers are also part of this talk.
Suitability Framework for Sidewalk Robots: A GIS-Based Assessment Approach as a Planning Basis for the Urban Integration of New Logistics Technologies
ABSTRACT. (Presentation possible in EN or DE)
The technological development of robots for goods transport is advancing rapidly worldwide and increasingly confronts European cities and municipalities with new challenges. Beyond the economic potential of this logistics innovation, its effects on public space and everyday urban life are coming to the fore. The central question is whether, and to what extent, delivery robots can be integrated into existing urban infrastructure without impairing the usability and quality of public spaces.
So far, this question has mostly been considered from a technical perspective, focusing on the robots' navigation capabilities, while the dimension of social and spatial compatibility often remains unaddressed. Urban areas may, for example, appear fundamentally suitable for robot navigation because of their built structure or surface quality, yet conflicts can still arise from a planning perspective, for instance in highly frequented pedestrian zones or in areas with sensitive uses such as schools, kindergartens, or care facilities. The research approach developed within the HORIZON project TRACE aims to capture both perspectives, technical drivability and urban compatibility, systematically and to merge them analytically. The result is not a purely mathematically computed index but a matrix that brings together different assessment dimensions.
The Suitability Framework is a GIS-based assessment approach that provides a structured, data-driven basis for analysing and planning robotic delivery services in urban space. Two central GIS sub-models, the Compatibility Model and the Drivability Model, form the methodological basis; their results are computed and evaluated on a scaled basis at the level of individual 100-metre street segments. In addition, a Freight Demand Model is integrated, which provides information on the expected robot volume based on freight transport estimates.
The Compatibility Model focuses on the interaction between robots, existing infrastructure, and human activity. Data on points of interest (POIs), public transport access, and population distribution feed into a model of pedestrian traffic volumes. These pedestrian volumes are intersected with information on sidewalk width to assess the level of service from the pedestrian perspective. This reveals limited data availability, particularly regarding sidewalk dimensions and quality, which in many cities are neither systematically recorded nor available in publicly accessible datasets.
The Drivability Model assesses the technical drivability of urban spaces from the robots' perspective. It takes into account road categories, topographic conditions, and static and dynamic obstacles, interpreted from the perspective of the robot type deployed. In the TRACE project, this is done using the technology of the Spanish manufacturer Robotnik as an example; its specific infrastructure requirements were systematically surveyed and integrated into the GIS models.
The assessment in both models uses rating scales whose values are merged into an integrated suitability scale. This aggregated assessment serves not only to identify suitable areas but also provides a basis for discussing the trade-offs between technical feasibility and urban planning objectives.
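How such a two-dimensional rating matrix can combine the two sub-models may be sketched as follows; the class labels, the matrix entries, and the example segments are hypothetical placeholders, not the actual TRACE scales:

```python
# Hypothetical 3-level classes; the actual TRACE scales are more detailed.
DRIVABILITY = {"low": 0, "medium": 1, "high": 2}
COMPATIBILITY = {"low": 0, "medium": 1, "high": 2}

# Suitability matrix: rows = drivability, cols = compatibility.
# A segment that is hard to drive OR sensitive in use scores low,
# regardless of the other dimension (no compensation between axes).
SUITABILITY = [
    ["unsuitable", "unsuitable", "unsuitable"],    # low drivability
    ["unsuitable", "conditional", "conditional"],  # medium drivability
    ["unsuitable", "conditional", "suitable"],     # high drivability
]

def rate_segment(drivability: str, compatibility: str) -> str:
    """Look up the combined suitability class of a 100 m street segment."""
    return SUITABILITY[DRIVABILITY[drivability]][COMPATIBILITY[compatibility]]

segments = {
    "seg_001": ("high", "high"),  # quiet side street, wide sidewalk
    "seg_002": ("high", "low"),   # drivable, but busy pedestrian zone
    "seg_003": ("low", "high"),   # stairs/obstacles block the robot
}
for seg_id, (drv, cmp_) in segments.items():
    print(seg_id, rate_segment(drv, cmp_))
```

The non-compensatory matrix design reflects the point made in the abstract: the result is a deliberate planning judgement per cell, not a weighted sum that would let high drivability offset a sensitive pedestrian environment.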
An important contextual factor here is the growing need, in the course of the mobility transition, to allocate more public space to walking, cycling, and other forms of active mobility. Conflicts can arise when new technological uses such as delivery robots compete for limited street space. The Suitability Framework makes such potential conflicts of use visible, enables fact-based discussion, and supports forward-looking planning.
Beyond that, the output of the suitability assessment can also be used for routing delivery robots. Instead of computing only the shortest or fastest route, a routing-based decision system can use the suitability values to prefer routes rated as particularly compatible and low-conflict. This is one example of a concrete decision that can be made on the basis of the suitability assessment; it shows that the framework serves not only to analyse existing conditions but can also contribute to active problem-solving, for instance by deriving spatially compatible and socially acceptable routes.
Land Cover Analysis of European Wind Park Development
ABSTRACT. This study examines the multifaceted land use changes associated with the development of wind energy infrastructure across Europe during the period from 2015 to 2023. Utilizing high-resolution multispectral data from the Copernicus Sentinel-2 satellite in combination with a sophisticated Multivariate Alteration Detection (MAD) algorithm, our research quantifies both the direct physical impacts and the indirect ecological and socio-economic consequences of wind farm construction. The comprehensive analysis encompasses over 15,000 turbines spread across 1829 wind parks in 18 European countries, capturing diverse regional patterns of land transformation. Key indicators include the conversion of agricultural and forest lands for turbine foundations, access roads, and associated electrical infrastructure, as well as secondary effects such as habitat fragmentation, biodiversity loss, and shifts in land ownership.
The results reveal that while the direct spatial footprint of wind energy installations is modest compared to fossil fuel infrastructure, the cumulative indirect impacts on land use can be substantial, particularly in regions with intensive renewable energy development. Notably, the Mediterranean and Boreal regions experienced higher rates of land use change relative to the continental and Atlantic areas, highlighting significant regional disparities in environmental pressures. These findings underscore the importance of integrating advanced geospatial techniques in environmental impact assessments and call for balanced, strategic planning that reconciles renewable energy expansion with sustainable land management practices. Overall, this study contributes critical insights into the environmental trade-offs inherent in the global transition towards renewable energy sources.
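The core of the MAD algorithm referenced above is canonical correlation analysis between the two acquisition dates; a minimal NumPy sketch on synthetic data (not the study's Sentinel-2 pipeline) looks like this:

```python
import numpy as np

def mad(x, y):
    """Multivariate Alteration Detection between two co-registered
    multiband images. x, y: (bands, pixels). Returns the MAD variates."""
    x = x - x.mean(axis=1, keepdims=True)
    y = y - y.mean(axis=1, keepdims=True)
    n = x.shape[1]
    sxx, syy, sxy = x @ x.T / n, y @ y.T / n, x @ y.T / n
    # Canonical correlations: eigenproblem of Sxx^-1 Sxy Syy^-1 Syx
    m = np.linalg.solve(sxx, sxy @ np.linalg.solve(syy, sxy.T))
    rho2, a = np.linalg.eig(m)
    a = a.real[:, np.argsort(rho2.real)]  # MAD convention: weakest first
    variates = []
    for i in range(a.shape[1]):
        ai = a[:, i]
        ai = ai / np.sqrt(ai @ sxx @ ai)          # unit variance in x
        bi = np.linalg.solve(syy, sxy.T @ ai)
        bi = bi / np.sqrt(bi @ syy @ bi)          # unit variance in y
        if ai @ sxy @ bi < 0:                     # enforce positive correlation
            bi = -bi
        variates.append(ai @ x - bi @ y)          # difference of canonical variates
    return np.array(variates)

# Synthetic scene: 3 bands, 1000 pixels; "change" only in the last 100
rng = np.random.default_rng(0)
before = rng.normal(size=(3, 1000))
after = before.copy()
after[:, 900:] += rng.normal(scale=3.0, size=(3, 100))

variates = mad(before, after)
chi2 = (variates ** 2).sum(axis=0)  # per-pixel change score
```

Pixels with large chi-square scores are flagged as changed; in the study this kind of score is what separates turbine pads, access roads, and cleared land from the unchanged background.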
An open-source tool to assess shadow flicker of wind turbines: The WIMBY SF_tool
ABSTRACT. Wind power is critical for the energy transition, yet its expansion faces local resistance due to concerns over, e.g., visual impact, noise, and shadow flicker (SF). SF, caused by rotating turbine blades, is a concern due to its potential social and environmental impacts. Existing SF assessment tools are either simplistic or proprietary, limiting accessibility. This paper presents the WIMBY SF_tool, an open-source Python-based solution for SF simulation integrating the GDAL tool, GIS Python packages, advanced shadow geometry transformation over complex terrain, and approximated turbine operation. It refines visibility constraints through viewshed analysis and accurately accounts for topographic influences. SF_tool was validated against WindPRO using hypothetical turbine locations in Spain, including complex terrain areas. The SF-affected areas above 30 hours/year show strong agreement with WindPRO, with a Pearson's correlation coefficient of 0.907 (p-value < 0.01), demonstrating SF_tool's reliability compared to the industrial standard. The SF_tool, being open-source, enhances transparency and accessibility in SF assessments, supporting policymakers, developers, and the public in decision-making on wind power deployment.
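The geometric core of any SF assessment relates sun elevation to the ground reach of the rotor-tip shadow. The sketch below uses a standard textbook solar-position approximation and assumed turbine dimensions; it illustrates the principle only and is not the SF_tool API:

```python
import numpy as np

def solar_elevation(lat_deg, day_of_year, hour):
    """Approximate solar elevation angle (degrees) for a latitude,
    day of year, and local solar hour (simplified, no refraction)."""
    decl = np.radians(23.44) * np.sin(2 * np.pi * (284 + day_of_year) / 365)
    hour_angle = np.radians(15 * (hour - 12))
    lat = np.radians(lat_deg)
    sin_el = (np.sin(lat) * np.sin(decl)
              + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    return np.degrees(np.arcsin(sin_el))

def shadow_reach(hub_height, rotor_radius, elevation_deg):
    """Maximum ground distance reached by the rotor-tip shadow on flat
    terrain; flicker can only occur closer to the tower than this."""
    if elevation_deg <= 0:
        return np.inf  # sun below horizon: no flicker geometry at all
    tip = hub_height + rotor_radius
    return tip / np.tan(np.radians(elevation_deg))

# Assumed turbine: 100 m hub, 60 m rotor radius, mid-latitude site
noon = solar_elevation(47.0, day_of_year=172, hour=12)     # solstice noon
evening = solar_elevation(47.0, day_of_year=172, hour=18)  # low sun
print(shadow_reach(100, 60, noon), shadow_reach(100, 60, evening))
```

Low sun angles stretch the shadow by hundreds of metres, which is why SF tools must additionally check sun azimuth, rotor orientation, turbine operation, and terrain-based visibility (the viewshed step) before counting a receptor as affected.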
Mapping Wind Potential Areas in Bavaria, Germany: The Impact of Forest Use and Distance to Residential Buildings
ABSTRACT. Identifying feasible and socially accepted sites for wind turbines is challenging, especially in Bavaria, which has restrictive local regulations in this regard.
However, the federal Onshore Wind Energy Act requires Bavaria to designate 1.8% of its area for the expansion of wind energy by the end of 2032.
We perform geospatial eligibility analyses with varying distances to residential buildings and degrees of forest use to find solutions that comply with the Onshore Wind Energy Act on the one hand and local regulations on forest protection on the other.
We analyse the results to find out what maximum distance to residential buildings can be achieved in the current legal framework and to what extent this distance can be increased through elevated forest use.
Our results indicate that a maximum distance of slightly below 1200 metres to residential buildings can be realised to reach the 2032 requirements of the Onshore Wind Energy Act and comply with current regulations on forest protection.
If currently protected forest areas were also designated, maximum distances of about 1400 metres to residential buildings would become possible.
If no forested areas were designated at all, the maximum distance to residential buildings would decrease to less than 800 metres.
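The setback/area trade-off driving these results can be illustrated on a toy raster; the grid size, settlement layout, and setback values below are invented for demonstration and far coarser than the actual analysis:

```python
import numpy as np

def eligible_fraction(buildings, setback_cells):
    """Fraction of grid cells at least `setback_cells` away (Euclidean)
    from any residential cell. Brute force; fine for a toy grid."""
    rows, cols = np.indices(buildings.shape)
    cells = np.column_stack([rows.ravel(), cols.ravel()])
    b = np.argwhere(buildings)
    # Distance of every cell to its nearest residential cell
    d = np.sqrt(((cells[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min(1)
    return (d >= setback_cells).mean()

# Toy landscape: 60x60 grid (1 cell = 100 m) with residential clusters
grid = np.zeros((60, 60), dtype=bool)
grid[10:13, 10:13] = True
grid[40:44, 30:33] = True
grid[20:22, 50:52] = True

# Larger setbacks monotonically shrink the designatable area
fractions = [eligible_fraction(grid, s) for s in (4, 8, 12, 16)]  # 400-1600 m
print(fractions)
```

The monotonic shrinkage is the crux of the study: every extra metre of setback must be bought elsewhere, e.g. by opening forest areas to designation.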
Simulating the Impact of Public Policies on Heat Transition Scenarios Using Agent-Based Modeling
ABSTRACT. This paper introduces a spatial agent-based modeling approach to simulate regional heating and energy transition scenarios and to analyze the impact of public policies on homeowner decision-making regarding building renovations. The model integrates spatial and technical data on buildings and energy infrastructure with socio-demographic factors and household-level renovation behavior derived from an empirical survey. This approach enables a comprehensive assessment of key energy indicators, including heating energy demand and CO2 emissions, over the simulation period. By providing a decision-support tool for policymakers, the model facilitates the design of effective decarbonization strategies for the built environment.
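A heavily simplified sketch of the policy mechanism in such an agent-based model is shown below; the renovation probabilities and the additive subsidy effect are hypothetical illustrations, not values from the paper's empirical survey:

```python
import random

def simulate(n_households, years, base_prob, subsidy_boost, seed=1):
    """Count renovated households after a simulation horizon. Each year,
    every unrenovated household renovates with a probability that a
    subsidy policy increases additively (all parameters hypothetical)."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    renovated = [False] * n_households
    p = min(1.0, base_prob + subsidy_boost)
    for _ in range(years):
        for i in range(n_households):
            if not renovated[i] and rng.random() < p:
                renovated[i] = True
    return sum(renovated)

no_policy = simulate(1000, years=15, base_prob=0.02, subsidy_boost=0.0)
with_subsidy = simulate(1000, years=15, base_prob=0.02, subsidy_boost=0.02)
print(no_policy, with_subsidy)
```

The full model described in the abstract replaces the single probability with spatially explicit, survey-derived household behavior and tracks heating demand and CO2 emissions per building, but the policy lever enters the simulation in this same way: by shifting agents' decision probabilities.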
3D Visualization of Operational Weather Forecasts Using High-Resolution Topographic Data and Weather APIs
ABSTRACT. MetGIS GmbH, a spin-off of the University of Vienna, Austria, is a specialist in ultra-high-resolution weather forecasts and historical weather data reconstruction. The company operates its own powerful, automated modeling system for meteorological forecasting and data visualization. In contrast to the usual standard, extra high-resolution terrain data is included in the weather simulation. This leads to superior-quality computations that benefit a wide range of applications.
The technology was developed as part of various interdisciplinary, international R&D collaborations. They brought together research institutes, universities and weather services from a number of countries (USA, Switzerland, Japan, Peru, Chile) as well as experts from a variety of fields. Currently, MetGIS continues to place great emphasis on research at an international level, including the coupling of meteorological with hydrological models, and is always open to partners for joint developments.
One of the most prominent developments of MetGIS is the dashboard MetGIS Pro+, a brand-new, turn-key graphical user interface. It includes 3D visualization and provides access to interactive weather forecast and snow cover maps which can be zoomed in to great detail (ultra-high horizontal resolutions of less than 100 m). This is made possible by innovative downscaling algorithms based on the sophisticated combination of meteorological models, satellite data and detailed terrain models. The forecasts are available worldwide in several languages. Any location within the maps can be clicked to reveal 10-day forecast histograms. The maps relate to meteorological parameters such as temperature, rain, snowfall, wind, sunshine duration and snow cover.
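One standard building block of terrain-based downscaling is a lapse-rate correction against a high-resolution DEM. The sketch below illustrates that generic idea only, with assumed elevations, and is not MetGIS's proprietary algorithm:

```python
import numpy as np

def downscale_temperature(t_coarse, z_coarse, z_fine, lapse_rate=-0.0065):
    """Refine a coarse-model temperature to high-resolution DEM cells
    using a constant lapse rate (deg C per metre). Purely illustrative:
    operational downscaling also blends model physics and satellite data."""
    # Correct each fine cell by its elevation difference to the
    # coarse model's terrain representation.
    return t_coarse + lapse_rate * (z_fine - z_coarse)

# Coarse model cell: 1000 m mean elevation, 5.0 deg C (assumed values)
t_model, z_model = 5.0, 1000.0
# High-resolution DEM cells inside that coarse cell (assumed values)
z_dem = np.array([800.0, 1000.0, 1500.0, 2200.0])
t_fine = downscale_temperature(t_model, z_model, z_dem)
print(t_fine)  # warmer in the valley, colder towards the summit
```

The same elevation-difference logic is why resolving terrain at well under 100 m matters: a single coarse cell can span a valley floor and a summit whose real temperatures differ by several degrees.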
MetGIS Pro+ is mainly fed with data from the MetGIS Weather APIs. These are also available as independent products. Popular APIs relate to point forecasts (including long-range predictions), weather maps, historical weather data, climate data, snow related topics, precipitation radar and weather warnings.
Customers served by MetGIS Pro+ come from sectors such as tourism, energy, mobility and agriculture, including ski resorts, avalanche control centers and the mining industry.
More information:
- MetGIS Website: https://www.metgis.com
- Interactive demo of MetGIS Pro+: https://demo-tiles.metgis.com/Alps/
- MetGIS APIs: https://metgis.com/de/wetter-apis/
Generating thermal 3D models of buildings in vulnerable neighbourhoods from uncrewed aerial vehicle (UAV)-based thermal images
ABSTRACT. Buildings contribute significantly to global energy consumption and subsequent carbon dioxide emissions. A key first step in reducing energy consumption is measuring and sensing the thermal performance of buildings. Heat loss in buildings occurs due to the design geometry, different material thermal conductivities, and air leaks in the envelope. A key contributor to such losses is thermal bridges, which are localised areas of increased heat transfer from insulation gaps, material junctions, or structural connections. In particular, buildings in vulnerable neighbourhoods characterised by aging infrastructure and limited maintenance are prone to increased heat loss. Detecting thermal bridges helps prioritise energy efficiency interventions. Thermography is currently the state-of-the-art method for detecting thermal bridges.
Thermal infrared (TIR) imagery is widely used to identify thermal inefficiencies. Detailed 3D models of building envelopes facilitate the identification of heat loss and the planning of building improvements. 3D models derived from uncrewed aerial vehicles (UAVs) can help communicate the status of the building stock and its energy efficiency to policymakers and local communities. Despite the widespread use of thermography, research on UAV-based 3D thermal modeling of residential buildings remains scarce, particularly for vulnerable neighbourhoods. This study aims to generate a 3D thermal building envelope model of buildings in a vulnerable social-housing neighbourhood on Texel, an island in the Netherlands, using UAV-based photogrammetry. Involving residents in UAV-based thermal imaging presents both social and technical challenges. While some recognise the value of thermal assessments, others may be unwilling to participate or unresponsive. Additionally, the studied houses include both corner and terraced units, which pose flight constraints in capturing facades due to obstructions. Overcoming these challenges highlights the novelty of UAV-based 3D thermal modeling in complex urban settings.
Here, we employed a DJI Mavic 3 Thermal drone to capture simultaneous RGB and TIR imagery of ten houses in the neighbourhood. Data acquisition was performed on 12 and 13 February 2025 through automated nadir and oblique (45°) flights at 25 m altitude with an 85% side overlap; the higher side overlap improves image alignment. Depending on the obstructions, we captured close-range images in a systematic manner using manual flights at a distance of approximately 2 m from the building. The data obtained from the DJI Mavic 3T are saved as radiometric JPEGs, a binary format used to display the coloured TIR images. Using the DJI Thermal SDK, the data were processed and saved as standard raw files with absolute temperature values for each pixel. Next, using the structure-from-motion (SfM) software Agisoft Metashape, we performed a guided image alignment on both RGB and TIR images, followed by point cloud generation and 3D reconstruction. The TIR images were used for texturing. Lastly, we identified the regions of heat loss in the generated 3D models of the different houses and compared them.
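Once the SDK step has produced per-pixel temperature rasters, screening a facade for localised warm spots is conceptually simple; a minimal sketch, where the raster, the 3 K margin and the helper function are invented for illustration and are not part of the study's pipeline:

```python
# Sketch: flag candidate heat-loss pixels in a per-pixel temperature
# raster (nested list of deg C values standing in for the raw file).
# Pixels warmer than the facade median by a chosen margin are flagged;
# the 3 K margin is illustrative, not a value from the study.

def flag_hotspots(raster, margin=3.0):
    values = sorted(t for row in raster for t in row)
    n = len(values)
    median = (values[n // 2] if n % 2 else
              (values[n // 2 - 1] + values[n // 2]) / 2)
    return [[t - median >= margin for t in row] for row in raster]

facade = [
    [4.0, 4.5, 4.2],
    [4.1, 9.0, 4.3],   # 9.0 deg C: warm spot, e.g. an insulation gap
    [4.4, 4.2, 4.0],
]
mask = flag_hotspots(facade)   # mask[1][1] is True, rest False
```

In practice such a mask would be projected onto the textured 3D model rather than inspected per image.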
Preliminary results reveal significant heat loss at the roof level. However, the extent of roof heat loss may vary based on heating patterns, with homes heating only the ground floor showing less apparent heat loss compared to those with heated upper floors. Additionally, heat loss was observed near doors and windows, indicating potential air leakage. Comparing thermal images of houses within the neighbourhood reveals variations in building envelope efficiency, identifying homes in critical need of retrofits. The findings highlight the importance of integrating UAV-based thermal 3D models into urban energy assessments, providing a valuable tool for policymakers in the region. Local residents and stakeholders supported this work, recognising the value of thermal imaging in identifying heat loss locations. Future research will investigate the use of the thermal 3D model for recalculating thermal transmittance (U-values) to ensure accurate building energy demand estimation.
Maximum land surface temperature in different forest types in the Barnim district, Brandenburg, Germany
ABSTRACT. Land surface temperature (LST) is a significant indicator for understanding local to global climate and land surface interactions. However, the variability of LST across different forest types remains to be fully elucidated. According to Ermida et al. (2020), LST is regarded as the radiative skin temperature of the land, derived from solar radiation. Significant factors influencing LST include surface albedo, emissivity, vegetation cover and soil moisture.
This research analyses differences in the maximum LST (LSTmax) over a 35-year period between eight different forest types in the district of Barnim in the federal state of Brandenburg, Germany. Using Landsat 4, 5, 7 and 8 thermal infrared data in a time series from 1982 to 2023, the LST was determined with the SMW (Statistical Mono-Window) algorithm in Google Earth Engine. Six anomalous years were excluded, so that a 30-year data set was finally available for analysis.
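The reduction of the thermal time series to annual maxima with anomalous years excluded can be sketched as follows; the years, values and helper function are invented for illustration, while the actual LST retrieval uses the SMW algorithm in Google Earth Engine:

```python
# Sketch: reduce per-year stacks of LST observations (deg C) to annual
# maxima, dropping anomalous years before analysis. Years and values
# are invented placeholders.

def annual_lst_max(lst_by_year, exclude=()):
    return {year: max(vals)
            for year, vals in lst_by_year.items()
            if year not in exclude}

stack = {1990: [24.1, 31.5, 28.0],
         1991: [22.7, 29.9, 30.2],
         1992: [25.0, 33.3, 27.8]}
lst_max = annual_lst_max(stack, exclude={1992})  # 1992 treated as anomalous
```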
The aim is to gain insights into how LSTmax varies in different temperate forests and which silvicultural factors could influence it. Richter et al. (2021) found that different tree species exhibited significant variations in canopy temperature in an urban floodplain forest in Leipzig, Germany, influenced by factors like leaf size, transpiration rates and canopy structure. Furthermore, some studies showed that broadleaved trees exhibit a lower LST during the growing season compared to needle-leaved trees (McGloin et al., 2019; Schwaab et al., 2020; Yi et al., 2020).
Therefore, we investigated 32 representative plots of eight forest types typically found in the Barnim district, selected based on silvicultural characteristics such as dominant tree species, stratification, height and age. The main tree species under consideration, in an area of approximately 675 km², were Pinus sylvestris, Fagus sylvatica and Quercus spec. in both pure and mixed forest stands.
Results showed that maximum LST (LSTmax) tended to be higher in pure pine stands compared to stands with a higher proportion of deciduous trees. This was observed in years with higher LSTmax values as well as in years with lower LSTmax values. Differences in species composition, crown closure and stratification appeared to moderate LSTmax, possibly due to higher transpiration rates, canopy shading and albedo in broadleaf-dominated stands. These findings underscore the importance of forest conversion from monocultural pine stands in Brandenburg into near-natural mixed forests in order to mitigate extreme temperatures and provide climate change resilience at a regional level. Future research needs to identify the specific reasons for the cooling properties of deciduous forests and their actual impact in order to better understand how species-specific characteristics and management practices influence the regulation of forest microclimates.
PRIMA - Planning foundations for spatial-type-specific, integrated mobility services in demand-responsive transport
ABSTRACT. The project develops foundations for the demand-oriented planning of integrated mobility services, focusing on the interplay between public transport and demand-responsive transport (micro public transport). GIS-based models analyse demand potentials and the public transport supply situation, enabling evidence-based planning at various levels. In addition, spatial types for demand-responsive transport that are transferable across Austria are developed. This creates a sound data and knowledge base on the framework conditions, effects and successes of integrated mobility services with demand-responsive transport. Practical feasibility is tested in a pilot region together with the Salzburger Verkehrsverbund (Salzburg transport association). The goal is to develop integrated strategies that contribute to sustainable mobility development.
Spatio-temporal variability of the distribution of air passengers across destination regions: the example of Salzburg Airport
ABSTRACT. This short paper describes a solution approach from the ongoing research project "Flughafen 4.0", which examines Salzburg Airport (W. A. Mozart) as an important mobility hub with regard to optimizing travel chains and creating forward-looking transport services. Airports usually have detailed data on passenger numbers and their airports of origin, while the onward travel routes and destinations remain largely unknown. In contrast, tourism statistics contain arrival and overnight-stay figures as well as the guests' origins, but the mode of transport chosen for the journey is unknown. A direct linkage of these spatial data sources is not possible, even retrospectively, yet it would be helpful for optimizing mobility services; this paper therefore establishes the link statistically. The underlying core assumption is that origin substantially influences destination, and that this information can be used for model building. The result is a model that estimates the regional mobility demand generated by air passengers, thereby supporting the demand-oriented planning of seamless travel chains and mobility services based on sustainable modes.
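The core idea, allocating arriving passengers to destination regions via origin-specific shares derived from tourism statistics, can be sketched as follows; all numbers, region names and the helper function are illustrative, not project data:

```python
# Sketch: distribute arriving passengers across destination regions
# proportionally to origin-specific shares from tourism statistics.
# Origins, regions and shares are invented placeholders.

def allocate_passengers(arrivals_by_origin, dest_share_by_origin):
    demand = {}
    for origin, n in arrivals_by_origin.items():
        for region, share in dest_share_by_origin[origin].items():
            demand[region] = demand.get(region, 0.0) + n * share
    return demand

arrivals = {"UK": 1000, "DE": 500}                 # passengers per origin
shares = {"UK": {"Pongau": 0.7, "Pinzgau": 0.3},   # from tourism statistics
          "DE": {"Pongau": 0.2, "Pinzgau": 0.8}}
demand = allocate_passengers(arrivals, shares)
```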
Integrating an energy system optimization framework into geodata-based strategic energy planning
ABSTRACT. Achieving climate protection targets requires a precise estimate of future energy demand and its coverage by renewable energies so that districts and municipalities can make technology and investment decisions. Spatial analyses such as geodata-based strategic energy planning, together with scenarios for the future energy system as generated by energy system optimization frameworks, can support this decision-making. This paper therefore proposes an approach that makes it possible to integrate scenarios generated with the energy system optimization framework REMix into a geodata analysis application for strategic energy planning. The proof of concept is based on a business intelligence tool and demonstrates, using the district of Wesermarsch as an example, that the proposed integrated approach enables data-based, well-founded and intuitive decisions. The interactive presentation of analysis results and of scenarios for the future energy system offers regional actors a decision-support tool for adopting sustainable and long-term effective climate protection measures.
ENERGIEatlas: from raw data to actionable information
ABSTRACT. The energy transition requires not only data but also actionable information that enables well-founded decisions. The ENERGIEatlas provides a GIS-based solution for integrating and analysing heterogeneous geodata in order to give municipalities, planners and decision-makers an evidence-based foundation for spatial energy planning. The development and integration of comprehensive geodata, including potential studies and energy demand models, creates an extensive basis for decision-making. The results are made accessible in a targeted way, e.g. via state GIS services, WebGIS applications and interactive dashboards, enabling simple, role-specific access to relevant information. Beyond municipal energy planning, the ENERGIEatlas also supports research projects on the further development of sustainable energy systems.
Heizungs-Check in the EnergieKompass Salzburg: a guide to the optimal heating system
ABSTRACT. The energy transition requires innovative digital tools to significantly reduce CO₂ emissions and ensure a sustainable energy supply. In the heat transition in particular, choosing the right heating technology is a central challenge. Many factors play a role here, such as geological conditions, the existing energy infrastructure and the availability of subsidies, which make the decision complex and time-consuming. Despite the abundance of available data, user-friendly, integrated solutions that offer effective analysis and decision support are often lacking.
The EnergieKompass Salzburg is a digital platform that supports citizens in deciding on sustainable energy supply solutions. A central module of this platform is the Heizungs-Check, which uses an automated, geodata-based analysis to help select the optimal heating technology. By entering their address, users receive precise, location-specific recommendations. These are based on the data in the EnergieAtlas of the Salzburg geodata information system (SAGIS), which emerged from the research project GEL-S/E/P and provide a scientific basis for modelling heat demand, renewable heat sources and heat supply options.
Via real-time interfaces, the Heizungs-Check accesses these data, ensuring that all users are provided with up-to-date and consistent information. These data are complemented by municipal heat planning information, such as district heating supply areas and potential areas for heat pumps, which also feed into the analysis and thus enable more precise and targeted planning.
A particular feature of the Heizungs-Check is its building-level analysis, which provides a first rough assessment of the optimal heating systems. The heating technologies are rated with a traffic-light system that gives users quick guidance on which systems are likely to be best suited under the site-specific conditions, including available subsidies and regional implementation partners. The clear presentation of these results considerably reduces research effort, so that users can make informed decisions without needing in-depth technical expertise.
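A traffic-light rating of this kind can be pictured as a simple rule set; the criteria, technology keys and outcomes below are invented for illustration and do not reflect the actual Heizungs-Check logic, which evaluates SAGIS geodata:

```python
# Sketch: a traffic-light (green/yellow/red) rating of heating options
# from simple site attributes. All attribute names, technologies and
# rules are hypothetical illustrations.

def rate_heating_options(site):
    ratings = {}
    # District heating only makes sense inside a supply area.
    ratings["district_heating"] = "green" if site["in_dh_area"] else "red"
    # Heat pumps: green where geodata indicate potential, else caution.
    ratings["heat_pump"] = "green" if site["hp_potential"] else "yellow"
    return ratings

site = {"in_dh_area": False, "hp_potential": True}
result = rate_heating_options(site)
```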
In view of the rising demand for energy consulting, the EnergieKompass Salzburg offers an efficient and user-friendly solution that relieves consulting resources and provides citizens with low-threshold guidance. Another feature of the Heizungs-Check is its close integration with administrative processes, linking data from different areas such as spatial planning, energy planning, subsidy management and energy consulting. This networking enables the coordinated provision of relevant information and considerably reduces administrative effort.
The platform is being continuously developed to provide additional functions and further improve the user experience. A major planned extension is the introduction of citizen-card authentication, which will enable the secure processing of personal data and thus further increase the accuracy of the recommendations. Users will also be able to securely store and update their project data.
In addition to the already available Heizungs-Check, further modules are planned, including a PV-Check and a Sanierungs-Check (renovation check). These extensions will make the EnergieKompass a comprehensive support tool for a sustainable energy supply. Thanks to its scalability, the EnergieKompass can be transferred to other federal states with minimal adaptation, provided the corresponding georeferenced data are available. This gives the tool the potential to serve as a model for digital decision support in sustainable energy supply far beyond Salzburg.
In summary, the EnergieKompass Salzburg is an innovative, user-friendly solution that facilitates access to relevant energy information and enables well-founded decision-making for a sustainable energy supply. By combining GIS-based data analysis, scientific methodology and the integration of various administrative processes, the system makes an important contribution to promoting sustainable heating technologies and to achieving Salzburg's climate and energy targets.
WS: KomMonitor - municipal monitoring of spatial development
ABSTRACT. Municipalities face the daily challenge of developing complex urban development strategies with a multitude of influencing factors. Data-based tools are essential to come close to meeting all needs. This workshop shows how the software KomMonitor can support municipal planning processes by linking social, environmental and infrastructure data.
KomMonitor is an open-source platform designed to support municipal decision-making through spatial monitoring and data-driven analysis via an easy-to-use web application. Unlike many urban digital twin applications, which focus primarily on infrastructure and environmental aspects, KomMonitor places a strong emphasis on social factors and aims to close the gap between technology-driven urban models and real municipal needs. Numerous municipalities in Germany already use KomMonitor successfully in production, applying the software to analyse urban dynamics, assess the accessibility of key services and develop targeted planning measures. The software's flexibility and modularity allow it to be adapted to different urban contexts, making it a valuable tool for city administrations seeking a holistic approach to digital urban management.
The planned 75-minute workshop consists of an interactive live demonstration in which realistic use cases from practical urban planning are walked through in the KomMonitor web interface. Participants can also follow the live demonstration on their own devices to get a feel for the application's usability. All that is required is a standard browser and internet access; a device at least the size of a tablet is recommended to ensure a sufficiently large screen.
A central use case in this workshop is climate-resilient and age-appropriate urban development. Social and environmental data are used to analyse which parts of the city are particularly affected by heat stress and where vulnerable groups, especially older people, are concentrated. An accessibility analysis then identifies existing supply gaps with regard to drinking fountains, green spaces and other compensation areas. Participants thus gain insight into using KomMonitor for indicator-based analysis of urban structures, including the assessment of supply and stress situations and the simulation of planned measures. The scenario is complemented by an interactive element, the simulation of a new drinking-water point with KomMonitor, in which participants themselves define criteria for an optimal site choice. This site choice is then tested live and compared with potential building areas. In addition, the workshop discusses which further measures cities can take to minimize heat stress for older people and what role digital tools such as KomMonitor can play.
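The interactive site-choice step can be pictured as a small coverage calculation; the coordinates, resident counts and walking radius below are invented, and a real accessibility analysis such as KomMonitor's would be network-based rather than Euclidean:

```python
# Sketch: among candidate sites for a new drinking-water point, pick
# the one covering the most residents within a walking radius.
# Projected x/y coordinates in metres; all values are illustrative.
import math

def best_site(candidates, residents, radius=300.0):
    def covered(site):
        return sum(n for (x, y), n in residents
                   if math.dist(site, (x, y)) <= radius)
    return max(candidates, key=covered)

residents = [((100, 100), 50), ((400, 120), 80), ((900, 900), 30)]
candidates = [(250, 110), (880, 890)]
site = best_site(candidates, residents)   # covers 130 vs. 30 residents
```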
The concluding discussion round serves to identify potential use cases for KomMonitor in the participants' municipalities, to determine additional data or functionality needs, and to analyse existing challenges in the practical use of such tools. This interactive workshop design enables practice-oriented knowledge transfer that responds directly to the needs of municipal users.
This workshop is aimed primarily at municipal representatives who want to use GIS-based analyses for sustainable urban development, as well as at interested researchers. It offers a valuable opportunity to get to know KomMonitor's capabilities, discuss one's own use cases and network with other experts in digital urban planning.
WS: Multimodal GeoAI-Powered Big Data Analysis for Disaster Management
ABSTRACT. Disaster management typically requires situational information in near real time to identify the most affected areas before, during or after a disaster event. While EO-based approaches to satellite image analysis are well established, they exhibit a number of shortcomings, including limited spatial and temporal resolution and limited real-time data and information availability. Conversely, data from geo-social media have proven to be a reliable foundation for assessing a disaster. This workshop discusses new approaches to disaster management that leverage the multimodal nature of large-scale datasets. Topics range from combining satellite- and drone-based image analysis with geo-social media analytics to precise location extraction from text and images, as well as social media-based satellite tasking.
Modeling energy consumption of (fuel-cell) battery-electric buses with geoinformatics
ABSTRACT. To meet national and international climate goals, a transition from conventional fuels like diesel to battery-electric (BE) and hydrogen fuel cell electric (FCE) mobility is essential. To this end, the ZEMoS (Zero Emission Mobility Salzburg) project was launched, focusing on bus routes in the alpine regions of Salzburg. With the help of geoinformatics, a representative data source is generated to calculate energy consumption in cold and steep mountain areas. Two modeling approaches, a physical model and a regression model, are applied and compared with real-world measurements from a test operation. The physical model demonstrated higher accuracy in predicting drivetrain energy consumption. The validated data are subsequently applied to the study area's bus routes, serving as a foundation for fleet management and cost analysis.
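A physical model of this kind can be sketched as a standard longitudinal-dynamics power balance (rolling resistance, aerodynamic drag, grade resistance); all vehicle parameters below are generic placeholders, not ZEMoS values:

```python
# Sketch: drivetrain power of a bus from a simple force balance.
# mass_kg, c_rr, c_d, area_m2, rho and eta are illustrative defaults,
# not measured project parameters.
import math

def drivetrain_power_kw(v_ms, grade, mass_kg=18000,
                        c_rr=0.008, c_d=0.6, area_m2=8.0,
                        rho=1.25, eta=0.85):
    g = 9.81
    slope = math.atan(grade)                         # grade as rise/run
    f_roll = c_rr * mass_kg * g * math.cos(slope)    # rolling resistance
    f_aero = 0.5 * rho * c_d * area_m2 * v_ms ** 2   # aerodynamic drag
    f_grade = mass_kg * g * math.sin(slope)          # grade resistance
    return (f_roll + f_aero + f_grade) * v_ms / eta / 1000.0

flat = drivetrain_power_kw(v_ms=10.0, grade=0.0)
climb = drivetrain_power_kw(v_ms=10.0, grade=0.08)   # 8 % alpine grade
```

Integrating such power values over a route profile yields the per-route energy consumption that is then compared against test-operation measurements.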
Exploring the Effect of Port Disruptions on Global Supply Chains: Detecting Outliers in Maritime Shipments
ABSTRACT. Global supply chains have been increasingly disrupted by environmental and geopolitical events. While prior research has analyzed the economic impact of port disruptions, their direct effects on the material flow between buyers and suppliers remain largely unexplored. This study investigates the impact of port disruptions on global supply chains by analyzing shipment data from a retailer that imports goods primarily from Asia to Europe and the US. Using vessel trajectory data, we develop a novel similarity measure that captures spatio-temporal effects of disruptions at both the point level (port nodes) and segment level (shipping routes). Our method integrates point-based and segment-based similarity measures to detect changes in vessel behavior, such as port-skipping or increased waiting times due to congestion. By applying clustering techniques, we identify outlier trajectories that indicate disrupted shipments across different trade routes. The proposed trajectory similarity measure advances the understanding of disruption effects in maritime logistics by incorporating both spatial and temporal characteristics. Our findings provide actionable insights for supply chain managers seeking to enhance resilience in global sourcing strategies.
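The idea of combining point-level and segment-level similarity can be illustrated with a deliberately simplified sketch, here Jaccard similarity over port calls and over consecutive port pairs; this is an illustration, not the measure proposed in the study:

```python
# Sketch: combine a point-level similarity (shared port calls) with a
# segment-level similarity (shared route legs) into one score.
# Port codes are UN/LOCODE-style illustrations.

def port_similarity(ports_a, ports_b):
    a, b = set(ports_a), set(ports_b)
    return len(a & b) / len(a | b)            # Jaccard over port nodes

def segment_similarity(ports_a, ports_b):
    segs = lambda p: set(zip(p, p[1:]))       # consecutive port pairs
    a, b = segs(ports_a), segs(ports_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def trajectory_similarity(pa, pb, w=0.5):
    return w * port_similarity(pa, pb) + (1 - w) * segment_similarity(pa, pb)

normal = ["SGSIN", "EGSUZ", "NLRTM"]
skipped = ["SGSIN", "NLRTM"]                  # port-skipping outlier
score = trajectory_similarity(normal, skipped)
```

Low scores relative to a cluster of normal voyages would flag a trajectory as a disruption candidate.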
Integrating and Representing Information and Knowledge of Climate Mobility Semantically Through Geospatial Knowledge Graph
ABSTRACT. According to IPCC assessment reports, climate change is increasingly exposing populations to risks and affecting patterns of human mobility. Environmental and climate mobility has drawn significant attention from policymakers and researchers, as evidenced by a growing number of studies and reports. However, as the field evolves, it has become fragmented and ambiguous, since different studies examine various research areas across scales, environmental events, and socio-demographic groups, using multi-perspective data with diverse schemata and formats. Moreover, the nexus itself is inherently complex, with underlying intermediate factors including agriculture, economy, society, demography and policy interacting in multifaceted ways. Diverse research methods further complicate the picture, making it challenging to grasp the overall dynamics. Such complexity hinders scholars and stakeholders in cross-scale knowledge management, analysis, decision making for responses, and transdisciplinary collaboration.
Multiple works have attempted to bridge this gap, particularly through literature reviews and expert consultations. However, such insights are rarely machine-readable, which makes them hard to use as foundations for further analysis, and they often suffer from ambiguity, opacity and redundancy. Furthermore, integrating them with other information remains a significant challenge. In this study, I rethink these traditional synthesis methods and introduce a novel approach using domain-specific, spatially explicit knowledge graphs. With this approach, it is possible to semantically organize, connect and represent distributed, implicit information and knowledge on climate mobility in an interlinked and reusable way, and to highlight the semantic relationships and interactions among diverse entities.
In the first phase, this study uses climate mobility literature as its primary input. In collaboration with domain experts, we develop an ontology and taxonomy to form a shared conceptual framework that organizes key concepts, relationships, and attributes. This framework facilitates the representation of information as entities and relationships, simplifies data retrieval and integration, ensures interoperability, and supports complex, multi-hop queries and inference in later stages. We adopt the newly published KnowWhereGraph ontology, which covers natural hazards, place identifiers, climate variables, demographic factors, etc., since it aligns closely with our needs. Leveraging this framework, NLP techniques including named entity recognition, relation extraction and event extraction will be employed in the next phase to systematically discover, transform, link and disambiguate knowledge from the literature, ultimately unifying scattered insights into a coherent graph. Furthermore, I am considering enriching the knowledge graph with RDFized quantitative data that also exist in the climate mobility discipline, and employing knowledge fusion for synthesis. Faceted queries such as "Which regions with socio-demographic vulnerabilities are exposed to environmental risks?" will also be enabled using GeoSPARQL, to facilitate human-computer interaction and knowledge exchange. Finally, as suggested by knowledge graph specialists, a reasoner will also be considered as an important part of the knowledge graph. Methods such as knowledge graph embedding will be implemented to find patterns that remain implicit despite this explicit knowledge representation, supporting people in different roles as they observe, collaborate, and plan for the future.
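The kind of multi-hop query the graph enables can be illustrated with a toy triple store in plain Python; the entities are invented, and the real system would use RDF with GeoSPARQL rather than Python sets:

```python
# Sketch: a toy triple store answering "which regions with
# socio-demographic vulnerabilities are exposed to environmental
# risks?" via two hops. All entities are hypothetical.

triples = {
    ("RegionA", "hasVulnerability", "aging_population"),
    ("RegionA", "exposedTo", "drought"),
    ("RegionB", "exposedTo", "flood"),
    ("drought", "type", "EnvironmentalRisk"),
    ("flood", "type", "EnvironmentalRisk"),
}

def vulnerable_and_exposed(kb):
    vulnerable = {s for s, p, o in kb if p == "hasVulnerability"}
    risks = {s for s, p, o in kb if (p, o) == ("type", "EnvironmentalRisk")}
    return {s for s, p, o in kb
            if p == "exposedTo" and o in risks and s in vulnerable}

regions = vulnerable_and_exposed(triples)   # RegionB lacks a vulnerability
```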
Unlike the black-box models frequently found in generative AI, this symbolic AI approach ensures explainability and transparency with explicit and structured semantic relationships. Flexibility, accessibility, reusability and interoperability are also maintained. It enables scientists to gain a much faster overview of new developments in the climate mobility field, and identify relevant research problems. It also enables scientists to make their work more accessible to colleagues as well as partners in industry, policy, and society at large.
The cyclist’s perspective: car-to-bicycle overtaking maneuvers analyzed using a research bicycle
ABSTRACT. Sustainable transportation, such as cycling, is vital for meeting Europe's climate goals. Therefore, cycling should be promoted as an alternative to individual motorized transportation. According to the European Declaration on Cycling, encouraging cycling depends on safe cycling infrastructure (European Commission, 2023). In the literature on the perceived safety of cycling infrastructure, one repeatedly mentioned issue is car-to-bicycle overtaking maneuvers, which are investigated in this work (e.g. Llorca et al., 2017; Rasch et al., 2022).
Analyzing car-to-bicycle overtaking maneuvers relies on sensors to gather data. To this end, the Holoscene research bicycle can be used to collect precise data on car-to-bicycle overtaking maneuvers. It is provided by BB Boreal Bikes GmbH and is equipped with a comprehensive set of sensors comparable to an automated car. A precise GNSS receiver localizes the bicycle during its ride in order to later localize the analyzed overtaking maneuvers. LiDAR sensors mounted on the bicycle provide a 3D representation of the surroundings which can be used to automatically detect vehicles. Based on these detections, the trajectories and the dimensions of overtaking and oncoming vehicles may be extracted. By analyzing the resulting spatiotemporal configuration of the bicycle and the overtaking vehicle as well as the position of the overtake on the road section, the following questions can be addressed:
- Where are bicyclists overtaken by vehicles?
- Is there any oncoming traffic related to the overtake?
- What are the distances between the vehicle and the bicyclist during an overtake, especially while passing laterally?
- What is the overtaking speed of the vehicle?
- How long does the vehicle follow the bicycle before overtaking it?
In summary, it is possible to measure distances and velocities during overtaking maneuvers precisely and from the bicycle's point of view.
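One such measure, the minimal clearance during a pass, can be sketched from synchronized trajectories; the sample points are invented, and centre-to-centre distance is used as a simplification of the edge-to-edge lateral clearance a real analysis would compute from vehicle dimensions:

```python
# Sketch: minimal clearance during an overtake from time-synchronized
# bicycle and vehicle trajectories (projected x/y in metres).
# Centre-to-centre distance; real analyses subtract vehicle widths.
import math

def min_clearance(bike_track, car_track):
    return min(math.dist(b, c) for b, c in zip(bike_track, car_track))

bike = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
car = [(0.0, 3.0), (1.0, 1.2), (2.0, 2.5)]   # closest while passing laterally
clearance = min_clearance(bike, car)
```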
To better communicate these measurements, the trajectories of the road users can be visualized to investigate individual overtaking maneuvers in detail. To analyze patterns across different overtaking maneuvers at the same location, aggregated measurements can be visualized. These results support traffic planners in making evidence-based recommendations to improve cycling infrastructure.
Such recommendations were developed in the RADBEST project based on investigating overtaking maneuvers on urban road sections, focusing on sections too narrow to allow segregated cycling facilities (Leitinger et al., 2024). In the upcoming MZSFreiland project, this approach will enable further recommendations for cycling infrastructure on rural roads.
References:
- European Commission. (2023). European Declaration on Cycling (COM (2023) 566 final).
- Leitinger, S., Wies, H., Loidl, M., Werner, C., Eckart, J., Fath, M., Hagedorn, C., Hunziker, R., Ruegge, L., Szeiler, M., Richter, M., Mellauner, M., & Fleischer, M. (2024). RADBEST – Radverkehrsführung bei beengten Verhältnissen [final report]. on behalf of: Bundesministerium für Digitales und Verkehr (BMDV), Bundesministerium für Klimaschutz (BMK), Bundesamt für Strassen (ASTRA).
- Llorca, C., Angel-Domenech, A., Agustin-Gomez, F., & Garcia, A. (2017). Motor vehicles overtaking cyclists on two-lane rural roads: Analysis on speed and lateral clearance. Safety Science, 92, 302–310. https://doi.org/10.1016/j.ssci.2015.11.005
- Rasch, A., Moll, S., López, G., García, A., & Dozza, M. (2022). Drivers’ and cyclists’ safety perceptions in overtaking maneuvers. Transportation Research Part F: Traffic Psychology and Behaviour, 84, 165–176. https://doi.org/10.1016/j.trf.2021.11.014
Seasonal Mobility: Human-Centered and Weather-Aware Routing
ABSTRACT. Mobility in urban environments has been an important topic of research for many years, with many different groups like scientists, engineers, politicians, and policymakers investigating various aspects of this topic (Tiboni et al., 2021).
However, urban transportation systems are changing rapidly, and there is a continuous need for quick modeling of the changes that happen or could happen in the transportation network environment. In this context, the idea of this abstract is to derive a working framework (Canestrini et al., 2024; Gogousou et al., 2024) for the city of Vienna and enrich it with further information on weather conditions. In recent years, with climate change becoming a reality (Hamza et al., 2020) and weather changing abruptly from one moment to the next, investigating how human routes will or could change according to weather conditions is important for predicting and preparing for future situations. With the aforementioned framework, the multi-modal transportation network of the city is modeled in a way that allows different parameters to change. The framework is modular and supports multiple analyses, and the idea here is to derive aggregated historical weather information from data.gov.at in order to include a weighting process that accounts for weather variations during the route generation phase. By incorporating seasonal variations as weights in the routing algorithm and combining them with the existing human mobility filters, the framework ensures that the generated routes are both weather-aware and aligned with typical human travel behavior.
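The weighting idea can be sketched as mode-specific multipliers on edge travel times in a shortest-path search; the graph, travel times and weather factors are invented, while the actual framework models Vienna's full multi-modal network:

```python
# Sketch: Dijkstra shortest path with seasonal weather factors applied
# as per-mode multipliers on base travel times (minutes). Graph and
# factors are illustrative placeholders.
import heapq

def shortest_time(graph, weather_factor, start, goal):
    # graph: node -> [(neighbour, base_minutes, mode), ...]
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, base, mode in graph.get(node, []):
            nd = d + base * weather_factor.get(mode, 1.0)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

graph = {"A": [("B", 10, "bike"), ("C", 12, "tram")],
         "B": [("D", 5, "bike")],
         "C": [("D", 6, "tram")]}
summer = {"bike": 1.0, "tram": 1.0}
winter = {"bike": 1.6, "tram": 1.0}   # cycling penalized in bad weather
t_summer = shortest_time(graph, summer, "A", "D")   # bike route wins
t_winter = shortest_time(graph, winter, "A", "D")   # tram route wins
```

Under the winter factors the optimal route switches from the bike legs to the tram legs, which is exactly the kind of modal shift the weighting process is meant to capture.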
Boosting Public Transport! The Impact of Public Transport Frequency on Modal Split and Trip Duration
ABSTRACT. Sustainable urban mobility is a key challenge of our time. One major aspect is the reduction of private motorized transport and the shift to active mobility (e.g., walking and biking) and public transport. Many studies analyse factors influencing the transition to more sustainable modes of transport. Previous studies, primarily based on surveys and questionnaires, highlight the importance of vehicle frequency and public transport intervals. The work of Santos et al. [2013] suggests that increasing the number of vehicles is likely to increase the share of public transport. Similarly, De Vos et al. [2022] report that public transport frequency (along with its interaction with user satisfaction) influences people’s intentions to use public transport. However, investigating vehicle frequency in real-world settings is challenging, especially in a cost-efficient and ethical manner as this would either require acquiring more vehicles or deliberately reducing the frequency, potentially disadvantaging users. To address these issues, we propose to adopt a modular algorithmic framework recently introduced by Gogousou et al. [2024] and Canestrini et al. [2024]. Their approach allows us to algorithmically adjust public transport waiting times, thus simulating variations in vehicle frequency. These modifications of the average waiting duration (i.e., increasing or decreasing public transport frequency) enable us to assess its impact on the modal split and to quantify changes in trip duration in an economical way.
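The frequency experiment can be sketched as follows (trip times and headways are invented): the expected wait is modeled as half the headway, a scale factor on the headway simulates frequency changes, and the modal split is read off as the share of trips for which public transport beats the car alternative.

```python
def pt_duration(in_vehicle_min, headway_min, scale=1.0):
    """Door-to-door public transport time; expected wait is half the (scaled) headway."""
    return in_vehicle_min + (headway_min * scale) / 2

def modal_split(trips, headway_min, scale=1.0):
    """Share of trips for which scaled public transport beats the car alternative."""
    pt_wins = sum(1 for in_veh, car_min in trips
                  if pt_duration(in_veh, headway_min, scale) < car_min)
    return pt_wins / len(trips)

# Toy origin-destination pairs: (PT in-vehicle minutes, car minutes).
trips = [(20, 27), (15, 18), (30, 42), (25, 28)]
```

With a 10-minute headway, halving waiting times (scale 0.4) pushes every toy trip onto public transport, while doubling them (scale 2.0) drops the split to one trip in four, illustrating how the framework quantifies frequency effects without acquiring or removing real vehicles.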
ABSTRACT. In addition to professional experience, data can significantly facilitate decision-making in traffic management. Vehicle, detector, and environmental data can be stored historically and analyzed at any moment in time. Together with live data, traffic management measures can be prioritized and decisions made and justified. Existing databases and use cases in traffic management are well suited to AI applications.
Possible queries include:
1. Forecasting data: traffic volumes, speeds, congestion
2. Effects of measures in post analysis: How do traffic management strategies affect traffic flow and environmental pollution?
3. Estimating the effects of measures in advance: What should be implemented when, with what priority and what consequences?
The aforementioned queries are potential avenues of exploration: an AI model for time series (e.g. Chronos) can be trained with big datasets from all over the world. For precise predictions in a specific area, the model can be fine-tuned with local data. With this application, traffic volume, speed, and/or passenger volume (public transport) can be predicted.
In order to understand how measures in traffic management will work, an already trained model for the area could be used. A new measure/strategy like changing traffic signals or adding/reducing lanes could be translated into a trained parameter, e.g. in a spatio-temporal graph neural network. This query is to be applied with real data.
Machine learning algorithms were used to investigate post analysis (2) as part of an FFG-funded project. This project examined the causal effects of traffic management interventions in Frankfurt, Germany: the implementation of Traffic Light Control Strategies (TLCS) on Friedberger Landstraße in 2020.
Various data, such as floating car data (provided by the city of Frankfurt and TomTom International BV), traffic volume (provided by the city of Frankfurt), and emission data (NO₂ and NOx levels, sourced from the Hessisches Landesamt für Naturschutz, Umwelt und Geologie), were harmonized at 30-minute intervals. Using the CausalImpact R package in a Python environment, the goal was to detect effects of the traffic light control strategies. Due to the number and duration of the strategies and to limited data quality and consistency, the model proved imprecise and inconclusive. Using causal impact methods, an impact of traffic volume on emission levels was detected, but it was not necessarily coherent with the strategies.
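The logic behind such a causal-impact analysis can be illustrated in miniature (this is a stdlib sketch of the method's idea, not the CausalImpact package itself, and the series are invented): fit the outcome against a control series over the pre-intervention period, predict the post-period counterfactual, and take observed minus predicted as the effect.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x with one control covariate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def causal_effect(control, outcome, t0):
    """Average post-period effect: observed outcome minus counterfactual after t0."""
    a, b = fit_line(control[:t0], outcome[:t0])            # pre-period fit
    counterfactual = [a + b * c for c in control[t0:]]     # post-period prediction
    effects = [obs - pred for obs, pred in zip(outcome[t0:], counterfactual)]
    return sum(effects) / len(effects)

# Invented series: the outcome tracks the control before the intervention at
# t0 = 4 and drops by ~4 units afterwards (think of NO2 as the outcome and
# traffic volume as the control).
control = [10, 12, 11, 13, 12, 14, 13]
outcome = [20, 24, 22, 26, 20, 24, 22]
```

On these toy numbers the pre-period relation is exactly outcome = 2 x control, so the estimated post-period effect is -4.0; the real package adds a Bayesian structural time-series model and credible intervals on top of this basic counterfactual idea.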
The use of AI in forecasting and predicting traffic systems and patterns is very complex due to diverse unknown variables. Despite the large amount of data available, many factors remain unconsidered, from spontaneous human decisions to events unknown to the model, accidents and other unforeseeable events. Data quality and completeness are also basic requirements.
Funding:
This research was supported by the Austrian Research Promotion Agency (FFG) and the Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK).
Bridging Education and Employment: Career Trajectories of CDE Graduates in Earth Observation and Geoinformatics
ABSTRACT. The Copernicus Digital Earth (CDE) Master's programme is designed to equip students with advanced expertise in Earth Observation (EO) and Geoinformatics, along with GeoData Science or GeoVisualisation depending on the specialisation track. It is designed to address the growing demand for skilled professionals in these fields. Despite the increasing availability of EO and geospatial data and technologies, there remains a gap in structured academic pathways that effectively bridge higher education and industry requirements. The CDE programme aims to fill this gap by offering a comprehensive curriculum combining theoretical foundations with practical training, research-driven education, and industry collaborations. This paper presents an overview of the career trajectories of CDE graduates, providing statistics on employment sectors, job roles, and PhD placements. The findings highlight the programme's success in preparing graduates for diverse career opportunities, demonstrating its impact on the EO job market.
Comparative Analysis of Chatbot Use and Evaluation in Two Geomatics Courses
ABSTRACT. This study examines the effectiveness of chatbots in two university-level Geomatics courses — Geospatial Analysis and Measurement Science — at the University of Florida. Students formulated five course-related questions, interacted with chatbots, and evaluated their responses. The findings revealed that the inclusion of images in prompts, as required in the Geospatial Analysis course, resulted in more varied chatbot usage but also lower satisfaction with the responses. Students in the mathematically intensive Measurement Science course preferred using pre-existing questions to avoid the difficulty of creating original tasks. ChatGPT-3.5 was the most widely used chatbot for text-based prompts, whereas Copilot was preferred for tasks involving multiple modalities. This research provides valuable insights for enhancing the integration of chatbots into higher education.
ABSTRACT. Geoinformation helps to assess environmental conditions and thereby facilitates decisions on land use, sustainable resource management, and nature conservation. In the Western Balkans, there is still much room for progress in the use of geoinformation technologies, with a particular lack of appropriately trained personnel and training opportunities. This is particularly important to support the Western Balkan countries’ desire to complete the chapters 11 (agriculture and rural development) and 27 (environment) of the EU Acquis to become member states of the EU. The project GEO-WB6 has been initiated by the Leibniz Institute of Agricultural Development in Transition Economies (IAMO) and the Agricultural University of Tirana to close this gap. The project establishes a Geoinformation Centre for the Western Balkans to improve human resources in the management and analysis of geodata, strengthen the regional network of geoinformation experts, and advance the region‘s research portfolio in agricultural and environmental science.
In the project, we provide a comprehensive course program and thematic summer schools for graduate students, researchers, governmental staff, and practitioners from the six Western Balkan countries who work in the agricultural, forestry, and related sectors. The course participants are trained at the Agricultural University of Tirana by experienced researchers and lecturers in GIS tools, using interactive learning and group work approaches, fieldwork exercises, and eLearning material. In addition, we are strengthening regional research networks and aim at sustainably improving education and collaboration in the geoinformation sciences through an exchange programme, an online seminar series, and a stakeholder workshop, and we strive to integrate geoinformation technologies into university curricula. Finally, we collect geodata from the region and make it available in an online data portal. In the long term, the project envisions becoming a key regional institution for geodata analysis, research, and networking in the Western Balkans, effectively supporting the region in developing the spatial data skills and architectures needed in the EU accession process.
ABSTRACT. A digital twin is a virtual representation of reality, including physical objects, processes, and relationships. When built on a foundation of geography, it becomes a geospatial digital twin.
Geospatial digital twins are rapidly gaining significance in today's technology-driven world, becoming a focal point in discussions about urban planning, infrastructure management, and environmental sustainability. As cities grow and evolve, the need for accurate, real-time representations of physical spaces has never been more critical. These digital replicas enable stakeholders to visualize, analyze, and simulate various scenarios, leading to more informed decision-making.
A geospatial digital twin aggregates the data that is found all around us. Sensor networks and Internet of Things (IoT) devices produce a constant stream of information. Often, the only attribute these various forms of data share is location. A geospatial digital twin depicts this commonality, revealing how the various types of data exist and interact in relation to one another. Together, this data provides an intelligence edge, adding context about critical places.
It does this by displaying data using the common visual language of maps. By emphasizing the where of data, decision-makers gain the perspective to observe the why and how.
Digital twins are sometimes described as 3D models, but that description glosses over the value of digital twins. A 3D model may be meticulously accurate, but it is a snapshot, a frozen moment. A digital twin is dynamic, modeling change over time.
Nearly every decision a leader makes will involve change over time: the present situation, how it has changed, and how it is likely to evolve. Any digital twin, by design, includes this temporal element. This trait is what distinguishes even the simplest digital twin from a 3D model.
Digital twins evolve. This is what makes them such powerful analytic tools. It is what makes them seem real.
Geospatial digital twins have now reached a level of realism that justifies the term reality capture. The geospatial digital twin aggregates high-resolution data with models from building information modeling (BIM) software and GIS data. Drone and satellite footage can help extend digital twins over entire cities or nations. Gaming engines turn these digital twins into truly immersive environments within virtual reality headsets.
The decisions you need to make for your business demand this ultimate command of the physical environment. Geospatial digital twins are not meant to imitate the world. They are tools to help understand it.
Opportunities and Potential of Retrieval-Augmented Generation for the Spatial Analysis of Crime Data
ABSTRACT. The increasing complexity and sensitivity of crime analysis requires innovative approaches to ensure precise insights. This contribution examines the potential of Retrieval-Augmented Generation (RAG) for the analysis of crime data, with particular attention to German-language texts. RAG integrates advanced Natural Language Processing (NLP) techniques with retrieval modules to generate context-rich answers to specific questions. Using a prototypical RAG system implemented with police case data from Rhineland-Palatinate, opportunities and challenges for spatial analysis are examined. Results show that the system is precise in identifying crime-scene hotspots and in analysing offence patterns, but reveals challenges in consistency and in covering spatially broad phenomena. With fine-tuning and improved data integration, RAG could in future provide robust support for crime analysis.
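A skeletal version of such a RAG pipeline might look as follows (the retrieval scoring is a simple TF-IDF cosine stand-in rather than the prototype's actual retriever, and the case records are invented, not real police data): retrieve the most similar case records and assemble them into an augmented prompt for the language model.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors for a small document collection."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vecs = [{t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
            for toks in tokenized]
    return vecs, df, n

def retrieve(query, docs, k=2):
    """Return the k case records most similar (cosine) to the query."""
    vecs, df, n = tfidf_vectors(docs)
    q = {t: c * math.log(n / df[t])
         for t, c in Counter(query.lower().split()).items() if t in df}
    def cos(v):
        num = sum(w * q.get(t, 0.0) for t, w in v.items())
        den = (math.sqrt(sum(w * w for w in v.values()))
               * math.sqrt(sum(w * w for w in q.values()) or 1))
        return num / den if den else 0.0
    ranked = sorted(range(len(docs)), key=lambda i: cos(vecs[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

def build_prompt(query, docs):
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented example records, not real police data.
cases = [
    "burglary reported Mainz old town at night",
    "bicycle theft Koblenz station area afternoon",
    "burglary series Mainz old town weekend",
]
```

For a query such as "burglary Mainz" the retriever surfaces the two burglary records, and the assembled prompt grounds the model's answer in those cases; a production system would add an actual LLM call, embedding-based retrieval, and the spatial aggregation described in the abstract.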
Harmonisation and Integration Strategies for Different Digital Building Plan and Object Data for Use in Indoor Cartography, Using the Example of the PLUS campusMap
ABSTRACT. Harmonisation and Integration Strategies for Different Digital Building Plan and Object Data for Use in Indoor Cartography, Using the Example of the PLUS campusMap
Impact of Temporary Location Visitors on Mobile App Usage in French Cities: Implications for Socio-Economic Segregation Studies
ABSTRACT. ==== Introduction
This study investigates the drivers of 4G mobile app usage in urban environments by disentangling the influence of local residents' socioeconomic characteristics from that of non-residents (temporary visitors). Recent research links the usage of apps and services on smartphones directly to the territory and its land use composition (Furno et al. 2017; Miao, Qiao, and Yang 2018; Novović et al. 2020) and to the socioeconomic profile of residents (Ucar et al. 2021; Goel, Furno, and Sharma 2023). Most researchers (with few exceptions, e.g. Yu et al. 2018; Singh et al. 2019) ignore the use of Wi-Fi (at home, work, or elsewhere). However, 4G traffic at a location can be driven primarily by non-residents if local residents predominantly use Wi-Fi at home. Singh et al. (2019) argue that differences in the usage of mobile services cannot be explained by differences in the coverage of Wi-Fi and cellular networks; moreover, the choice between Wi-Fi and cellular networks is not random but depends on whether the location is a known regular location (Oliveira, Obraczka, and Rodríguez 2016). Studies across several countries have shown that users spend more time and transfer more data on Wi-Fi than on cellular connections (de Reuver and Bouwman 2014; Hyun et al. 2016; Walelgne et al. 2021). Ucar et al. (2021) and Goel, Furno, and Sharma (2023) used mobile app data to predict socio-economic indicators, achieving R-squared values of up to approximately 0.75 and 0.659, respectively, by integrating behavioral patterns with demographic and urban attributes. While Ucar et al. limited their analysis to nighttime usage to approximate resident behavior, Goel et al. incorporated full-day usage signatures; yet both approaches still risked misattributing mobile activity because of Wi-Fi connections and non-residents.
==== Materials & Methods
We use the NetMob23 dataset (Martínez-Durive et al. 2023), which captures 4G network traffic every 15 minutes on a 100×100 m spatial grid, by mobile app and service type, for more than 70 days in spring 2019 in 20 French cities. We use the following additional datasets to characterize NetMob23 locations: (1) land use – Copernicus CORINE Land Cover (100 m resolution) and Imperviousness Density (10 m resolution), as well as the highly detailed (10 m resolution) Theia OSO Land Cover Map for France; (2) socioeconomic profile – spatial polygons for French IRIS units and associated socioeconomic variables on income and age distribution; (3) the socio-economic profile of non-residents. For each grid cell in every city, we characterize the location by its physical infrastructure and resident population, determining the shares of specific land use types, population counts by age, and mean income per resident. To estimate the socio-economic profile of non-residents, we constructed a network from OpenStreetMap street data and public transit timetables in GTFS (General Transit Feed Specification) format and conducted an accessibility analysis using the r5r R package (Pereira et al. 2021). In this way we estimated how many individuals, by age group and income bracket, could reach a particular location by public transit within a reasonable travel time (empirically determined as 60 minutes for Paris and 30 minutes for the other cities). This accessibility measure is a proxy for the probability that individuals from different socioeconomic groups visit a particular location, conditional on travel time. We use random forest models to predict the total traffic of each mobile app/service for several time intervals, matching the arrival time of non-residents with the time when mobile data is transmitted at each location. Two thirds of the data are used for training and one third for testing/validation.
This out-of-sample validation shows only minor differences in model error and R-squared relative to the training data, indicating that the results are reliable.
==== Results
Our findings show that accessibility variables — representing non-residents — consistently have a greater impact on mobile usage than residents’ demographics. This demonstrates that (1) accessibility is a strong proxy for mobility patterns, (2) non-residents shape mobile app usage more than residents, and (3) previous studies misattribute usage trends to residents, overlooking non-residents’ influence. The implications of these results are far-reaching for urban segregation studies: misinterpreting mobile usage patterns by ignoring transient populations may lead to an incomplete understanding of social mixing and economic disparities. Ignoring non-residents may lead to misleading conclusions about segregation and inequalities.
==== References
de Reuver, Mark, and Harry Bouwman. 2014. “Preferences in Data Usage and the Relation to the Use of Mobile Applications.” In. Calgary: International Telecommunications Society (ITS). https://www.econstor.eu/handle/10419/101437.
Furno, Angelo, Marco Fiore, Razvan Stanica, Cezary Ziemlicki, and Zbigniew Smoreda. 2017. “A Tale of Ten Cities: Characterizing Signatures of Mobile Traffic in Urban Areas.” IEEE Transactions on Mobile Computing 16 (10): 2682–96. https://doi.org/10.1109/TMC.2016.2637901.
Goel, Rahul, Angelo Furno, and Rajesh Sharma. 2023. “Predicting Socio-Economic Well-being Using Mobile Apps Data: A Case Study of France.” arXiv. https://doi.org/10.48550/arXiv.2301.09986.
Hyun, Jonghwan, Youngjoon Won, David Sang-Chul Nahm, and James Won-Ki Hong. 2016. “Measuring Auto Switch Between Wi-Fi and Mobile Data Networks in an Urban Area.” In 2016 12th International Conference on Network and Service Management (CNSM), 287–91. https://doi.org/10.1109/CNSM.2016.7818434.
Martínez-Durive, Orlando E., Sachit Mishra, Cezary Ziemlicki, Stefania Rubrichi, Zbigniew Smoreda, and Marco Fiore. 2023. “The NetMob23 Dataset: A High-resolution Multi-region Service-level Mobile Data Traffic Cartography.” arXiv. https://doi.org/10.48550/arXiv.2305.06933.
Miao, Qing, Yuanyuan Qiao, and Jie Yang. 2018. “Research of Urban Land Use and Regional Functions Based on Mobile Data Traffic.” In 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC), 333–38. https://doi.org/10.1109/DSC.2018.00054.
Novović, Olivera, Sanja Brdar, Minučer Mesaroš, Vladimir Crnojević, and Apostolos N. Papadopoulos. 2020. “Uncovering the Relationship Between Human Connectivity Dynamics and Land Use.” ISPRS International Journal of Geo-Information 9 (3): 140. https://doi.org/10.3390/ijgi9030140.
Oliveira, Larissa, Katia Obraczka, and Abel Rodríguez. 2016. “Characterizing User Activity in WiFi Networks: University Campus and Urban Area Case Studies.” In Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, 190–94. MSWiM ’16. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2988287.2989172.
Pereira, Rafael H. M., Marcus Saraiva, Daniel Herszenhut, Carlos Kaue Vieira Braga, and Matthew Wigginton Conway. 2021. “R5r: Rapid Realistic Routing on Multimodal Transport Networks with R 5 in R.” Findings, March. https://doi.org/10.32866/001c.21262.
Singh, Rajkarn, Marco Fiore, Mahesh Marina, Alberto Tarable, and Alessandro Nordio. 2019. “Urban Vibes and Rural Charms: Analysis of Geographic Diversity in Mobile Service Usage at National Scale.” In The World Wide Web Conference, 1724–34. WWW ’19. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3308558.3313628.
Ucar, Iñaki, Marco Gramaglia, Marco Fiore, Zbigniew Smoreda, and Esteban Moro. 2021. “News or Social Media? Socio-economic Divide of Mobile Service Consumption.” Journal of The Royal Society Interface 18 (185): 20210350. https://doi.org/10.1098/rsif.2021.0350.
Walelgne, Ermias Andargie, Alemnew Sheferaw Asrese, Jukka Manner, Vaibhav Bajpai, and Jörg Ott. 2021. “Understanding Data Usage Patterns of Geographically Diverse Mobile Users.” IEEE Transactions on Network and Service Management 18 (3): 3798–3812. https://doi.org/10.1109/TNSM.2020.3037503.
Yu, Donghan, Yong Li, Fengli Xu, Pengyu Zhang, and Vassilis Kostakos. 2018. “Smartphone App Usage Prediction Using Points of Interest.” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1 (4): 174:1–21. https://doi.org/10.1145/3161413.
Urban Water Loss Assessment in Tropical Cities: A Multi-Sensor Sentinel and Machine Learning Approach
ABSTRACT. Water bodies are essential elements of urban ecosystems. They provide drinking water, regulate rainfall, and sustain the local biodiversity, among other benefits. Rapid and unplanned urbanisation, especially in the global south, often results in water loss, affecting the urban ecosystem. Thus, continuous and accurate monitoring of urban water loss is critical for sustainable urban planning. Traditional ground survey-based methods are time-consuming and resource-intensive. Remote sensing offers an efficient alternative for large-scale monitoring and historical analysis of urban water dynamics. Sentinel-2 imagery, with its 10-meter spatial resolution, is widely used for water body assessment.
However, a significant challenge, especially in tropical areas, is the presence of clouds, which hampers the accurate assessment of water bodies and other land use/land cover classes. Radar imagery from Sentinel-1 has therefore emerged as an alternative, since these data are unaffected by cloud cover. Recent advancements in remote sensing have also introduced machine learning techniques, and more and more studies now opt for machine learning as the principal method instead of traditional spectral-index-based analysis. While previous studies have utilised Sentinel-1 and Sentinel-2 for water body detection, either as a primary research focus or as part of broader land use/land cover (LULC) classifications, challenges remain in accurately identifying urban water bodies of varying sizes. Sentinel-2 often struggles with small water bodies, whereas Sentinel-1 detects them better but is less reliable for large water bodies and their temporal changes. In addition, high classification accuracy does not always translate into accurate water loss assessment. Hence, although some studies achieved high classification accuracy in water body detection, a study focusing on urban water loss assessment using different data sources remains absent from the literature.
This study employs machine learning to assess Sentinel-1 and Sentinel-2 imagery for urban water loss detection in Phnom Penh, Cambodia. We analysed 16 images (eight from each sensor) spanning 2016 to 2023, applying a Support Vector Machine (SVM) classification approach. Classification accuracy was assessed using a traditional validation dataset, while Google Earth historical imagery was used for water loss assessment and for visual interpretation of where the classification performed well or encountered challenges. Results indicate that Sentinel-2 and Sentinel-1 individually achieve 93% classification accuracy, but their straightforward combination lowers accuracy to 83%, suggesting that direct sensor integration is not optimal. For water loss assessment, the accuracy was 82% for Sentinel-1 and 90% for Sentinel-2. We also found that Sentinel-2 often misses smaller water bodies but accurately identifies the larger ones, whereas Sentinel-1 identifies smaller water bodies better than Sentinel-2 but performs less well for large water bodies and their water loss assessment. A selective integration approach, drawing on the strengths of each sensor while masking out their weaknesses, significantly improved water loss assessment accuracy. Findings indicate a substantial decline (about 28 sq km) in Phnom Penh's urban water area over the past eight years, raising concerns for thermal comfort and climate resilience. Our findings on the study area are useful for city authorities and urban planners, and the proposed approach to generating a better water loss assessment will be helpful for future urban water loss assessment studies.
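The selective integration idea can be illustrated in miniature (the masks and the size threshold are invented toy values, not the study's actual parameters): keep water bodies classified by Sentinel-2 only when they form large connected components, and fall back to Sentinel-1 for the small ones.

```python
def components(mask):
    """4-connected components of truthy cells in a 2D grid."""
    seen, comps = set(), []
    rows, cols = len(mask), len(mask[0])
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def integrate(s2_water, s1_water, min_large=4):
    """Large water bodies from Sentinel-2, small ones from Sentinel-1."""
    merged = [[False] * len(s2_water[0]) for _ in s2_water]
    for mask, keep_large in ((s2_water, True), (s1_water, False)):
        for comp in components(mask):
            if (len(comp) >= min_large) == keep_large:
                for y, x in comp:
                    merged[y][x] = True
    return merged

# Toy masks (1 = water): Sentinel-2 sees the big lake but misses the pond;
# Sentinel-1 sees the small pond.
s2_mask = [[1, 1, 1, 0],
           [1, 1, 0, 0],
           [0, 0, 0, 0]]
s1_mask = [[0, 0, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 1, 1]]
```

The merged mask keeps the 5-pixel lake from Sentinel-2 and the 2-pixel pond from Sentinel-1; in the study, an analogous per-water-body selection (with real classification maps and temporal change layers) is what improved the water loss assessment.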
3D Datasets: Properties, Gaps, and an Open Repository
ABSTRACT. The growing availability of 3D datasets, combined with the lack of standardized metadata, poses challenges in selecting appropriate datasets for an application-specific task. This paper lists interesting dataset characteristics, analyzes their representation across 263 datasets, and identifies trends and gaps in the current landscape. Based on the findings, an open GitHub repository was developed to help researchers choose appropriate datasets.
Application of the Spatio-Temporal Asset Catalog Specification for Open Government Data
ABSTRACT. The Spatio-Temporal Asset Catalog (STAC) specification enables spatial data providers to distribute their datasets within a hierarchical metadata catalog structure containing collections, items, and assets. This catalog specification, implemented as an Application Programming Interface (API), allows spatial and temporal search queries, making the data providers' spatial assets discoverable. "Organization" sought to apply this novel specification to improve its spatial data infrastructure and make its open government datasets more accessible to a wider range of users. Another motivation for deploying a STAC API is the possibility of providing spatially and temporally overlapping datasets, enabling the distribution of both recent and historical spatial data. To generate dynamic catalog data sustainably, a primary focus of this project was to create the STAC structure from already existing metadata stored in relational database tables. With this prerequisite in mind, the generation process was achieved by implementing a script that extracts this metadata from the organization's relational database and builds the catalog structure. To facilitate access to the dynamic catalog data and enable clients to query the datasets, an API was conceptualized and implemented. This involved designing an API architecture used to set up a Python interface accessing and querying the storage system containing the dynamic STAC data. Thorough testing of the STAC data generation script and the STAC API confirmed highly satisfactory outcomes. For both results, the reusability of the implementation for generating and deploying the dynamic STAC played a significant role in the process.
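The core of such a generation script might look as follows (all table columns and URLs are invented stand-ins for the organization's actual metadata, and real STAC catalogs connect items and collections via `links` objects rather than the nesting used here for brevity): each metadata row is mapped to a minimal STAC item and grouped into a collection per dataset.

```python
import json

# Invented rows standing in for the organization's relational metadata tables.
rows = [
    {"dataset": "orthophoto", "item_id": "op_2021", "year": 2021,
     "bbox": [16.18, 48.12, 16.58, 48.32],
     "href": "https://example.org/op_2021.tif"},
    {"dataset": "orthophoto", "item_id": "op_2018", "year": 2018,
     "bbox": [16.18, 48.12, 16.58, 48.32],
     "href": "https://example.org/op_2018.tif"},
]

def to_item(row):
    """Map one metadata row to a minimal STAC item (id, bbox, datetime, asset)."""
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": row["item_id"],
        "bbox": row["bbox"],
        "properties": {"datetime": f"{row['year']}-01-01T00:00:00Z"},
        "assets": {"data": {"href": row["href"]}},
    }

def build_catalog(rows):
    """Group items into collections keyed by dataset name."""
    collections = {}
    for row in rows:
        coll = collections.setdefault(row["dataset"], {
            "type": "Collection", "id": row["dataset"], "items": []})
        coll["items"].append(to_item(row))
    return {"type": "Catalog", "id": "ogd-catalog",
            "collections": list(collections.values())}
```

Because the two orthophoto items share a bbox but differ in datetime, the resulting catalog already demonstrates the overlapping recent-plus-historical datasets mentioned above; the API layer then serves such JSON in response to spatial and temporal queries.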
Potentials of a Research Infrastructure for Linking Survey and Spatial Data to Enable Interdisciplinary Research
ABSTRACT. An interdisciplinary approach to the spatial analysis of survey data and to the combined use of social science and spatial science research data is a good way to analyse questions of spatial justice and spatial inequality in more detail. The technical linkage required for this entails a number of hurdles, such as data privacy, data access, reproducibility, and technical knowledge. The ’Geolinking Service SoRa’ is being developed to operationalise this linkage and make it easier to offer for research. The linking methods offered, their parametrisation, and the expected output will be presented in this article. Furthermore, the linkage will be demonstrated using the ’Practise Dataset’ of the Socio-Economic Panel (SOEP) and a spatial indicator from the ’IOER Monitor’ as an example to show the functionalities of the presented linking infrastructure. Challenges (such as usability, performance, or the handling of uncertainties) and future developments will be discussed, with a focus on spatial science.
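One basic form of such a linkage can be sketched as follows (coordinates, grid parameters, and indicator values are invented; the actual service offers several methods and handles privacy constraints): each survey record is assigned the value of the spatial indicator cell that contains it, here a square grid addressed by integer cell indices.

```python
def cell_index(x, y, origin=(0.0, 0.0), size=1000.0):
    """Index of the square grid cell (e.g. a 1 km grid) containing point (x, y)."""
    return int((x - origin[0]) // size), int((y - origin[1]) // size)

def geolink(survey, indicator, **grid):
    """Attach each record's grid-cell indicator value (None if no cell matches)."""
    return [dict(rec, indicator=indicator.get(
                cell_index(rec["x"], rec["y"], **grid)))
            for rec in survey]

# Invented inputs: survey households with projected coordinates, and a
# spatial indicator raster (e.g. green-space share) as {cell index: value}.
survey = [{"id": 1, "x": 1500.0, "y": 250.0},
          {"id": 2, "x": 2700.0, "y": 1100.0}]
indicator = {(1, 0): 0.42, (2, 1): 0.17}
```

This point-in-cell join is the simplest parametrisation; the service's other methods (e.g. buffered neighbourhoods or aggregation over several cells) vary how the spatial context of each survey record is defined.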
WorldView Legion – Near-Real-Time Access and Delivery of Multispectral 30 cm Satellite Imagery
ABSTRACT. European Space Imaging (EUSI) stands at the forefront of Earth observation, offering state-of-the-art technological solutions for Very High Resolution (VHR) satellite imagery, 2D & 3D products, and geospatial applications. In this presentation, I will delve into EUSI's groundbreaking capabilities offering direct access to Maxar’s new WorldView Legion satellite constellation through its local ground station.
With a constellation of 8 satellites, WorldView Legion will be the first to offer multiple intraday collections of 30 cm imagery, opening completely new possibilities for the Earth observation community. In particular, time-critical projects (e.g. the Copernicus Emergency Service) requiring ad hoc imagery collections and immediate delivery will benefit greatly. Several use cases will be presented to show the new possibilities in shortening the timespan from a user's imagery request to its delivery.
Learning points
- Learn more about EUSI’s portfolio of advanced technological solutions for Earth observation.
- Gain insights into the new WorldView Legion satellite constellation and its possibilities for multiple intraday imagery collections.
- Experience the possibilities of near real-time imagery delivery down to 15 minutes from its collection through the EUSI ground station.
- Learn about use cases based upon WorldView Legion that enhance the success rate of imagery collections.
ABSTRACT. Since the 1990s, FME has been known as an indispensable ETL tool for converting and transforming geodata between different formats in order to make them interchangeable and usable by different systems. FME quickly gained global recognition in the field of geoinformatics and continues to accompany many GIS professionals to this day.
With the integration of ever more data formats and the ability to "operate" different systems and automate data processes between them, FME has become increasingly comprehensive and can nowadays present itself as a complete enterprise data integration platform - with the unfair advantage of also being the world champion in geodata.
This pitch is intended to give the audience an insight into the technical development of FME, from a “simple” conversion tool for geodata to a complete enterprise data integration platform within an organization, that automates data processes and breaks down data silos.
Spatial Tools for Data Spaces: an agricultural showcase
ABSTRACT. The concepts of EU Data Spaces are designed to create a secure and interoperable environment for data sharing across sectors within the European Union.
The foundation of Data Spaces has evolved over the past decade as part of the broader effort to harness data as a strategic asset. It began with early initiatives in the mid-2010s, when the EU was actively shaping the Digital Single Market, aiming to break down data silos and promote interoperable data sharing without barriers. These ideas in turn build on earlier EU Open Data activities and on the INSPIRE Directive, well known within the Spatial Data Infrastructure community, which was enacted by the European Commission in 2007.
This evolving vision was formalized with the European Data Strategy in 2020, which called for the creation of sector-specific data spaces—covering areas like health, mobility, energy, and manufacturing—to foster technical innovation and collaboration. Subsequent proposals such as the Data Governance Act and the Data Act have further reinforced this framework, ensuring that data sharing occurs under clear, secure, and fair conditions while respecting privacy and sovereignty.
In essence, EU Data Spaces represent a culmination of years of policy development and practical experimentation, aimed at creating a trusted digital ecosystem that supports economic growth and a cross-border data marketplace.
Nevertheless, the current status of the data space architecture and its purpose can be viewed critically. It is immature in many respects and not sufficiently proven regarding technical feasibility and the interaction of tools and services in a shared infrastructure. Moreover, the actual practical added value is still questionable despite all the effort and investment in innovation. This added value (the minimum viable product) must also reach data providers, who expect their investment in high-quality data to pay off.
With the publication of the EU Regulation in January 2025, an essential step towards implementing a Health Data Space has been taken.
In the present work, the primary focus is on the interoperability of data and services. The objective is to contribute in two ways: to the development of Data Spaces and to the necessary technological bridges. Implementing components and tools via a community or official standard, such as ISO or OGC, is of particular interest. This is exemplified by an Austrian pilot project for an agricultural data cycle, 'REST-GDI-AGRA', which aims to bring Data Spaces to life in a targeted manner.
ABSTRACT. Robotics is a branch of engineering concerned with the design, construction and programming of machines (robots). A robot is a complex system equipped with sensors and actuators that allow it to capture information from its environment, make decisions based on predefined logic (algorithms) and carry out corresponding actions. This process usually runs autonomously.
Over the past decades, robot technology has made considerable progress and today plays an increasingly important role in many areas of everyday life, from household appliances such as vacuum cleaners and autonomous driving to applications in industry, medicine and spaceflight.
A major challenge in autonomous robotics is collision-free movement, especially in enclosed spaces such as factory halls or apartments, where GNSS (Global Navigation Satellite System) positioning is unavailable.
The problem can be stated as follows: would a robot placed at an unknown location be able to build a map and determine its own position and orientation?
Localizing a robot on a given map, or mapping from sensor data at a known position, is relatively easy to implement. Solving both tasks at the same time is considerably harder and leads to a classic chicken-and-egg problem: on the one hand, a map is needed to determine the robot's position; on the other hand, a known position is required to build the map. The solution lies in an algorithm that simultaneously maps the space, determines the robot's position and navigates it around obstacles: the SLAM algorithm.
SLAM stands for Simultaneous Localization and Mapping and is one of the most active research areas within robotics. The algorithm combines measurements from different sensors to build a map of the environment and to estimate the robot's position within that map. Visual sensors (e.g. cameras) are used alongside non-visual data sources such as sonar, radar or LiDAR, complemented by inertial measurement units (IMU) and odometry. Based on this information, SLAM algorithms compute the best possible estimate of the robot's pose.
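The chicken-and-egg structure of SLAM can be illustrated with its motion-model half alone: integrating odometry yields a pose estimate whose error grows without bound, which is exactly why map-based corrections are needed. The following minimal sketch (plain Python, independent of ROS; all names are illustrative, not part of the workshop material) propagates a planar pose from velocity commands:

```python
import math

def integrate_odometry(pose, v, omega, dt):
    """Propagate a planar pose (x, y, theta) from linear velocity v
    and angular velocity omega over a time step dt.

    This dead-reckoning step is the prediction half of SLAM; without
    corrections from sensor/map matching, its error accumulates.
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight at 1 m/s for 2 s, then turn in place by 90 degrees.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = integrate_odometry(pose, v=1.0, omega=0.0, dt=0.1)
for _ in range(9):
    pose = integrate_odometry(pose, v=0.0, omega=math.pi / 18, dt=1.0)
print(pose)  # roughly (2.0, 0.0, pi/2)
```

In a full SLAM system this prediction step would be followed by a correction step that matches current sensor readings (e.g. LiDAR scans) against the map built so far.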
This workshop introduces the basic concepts of robotics and automation. Application examples demonstrate how localization algorithms and mapping methods work.
The hands-on examples are implemented in the Python programming language using the ROS (Robot Operating System) framework. ROS is a modular, open middleware for robot programming. It provides a wide range of tools, libraries and interfaces for controlling sensors and actuators and for processing sensor data. Using ROS, the various components of a robot system can communicate with each other efficiently, enabling flexible and scalable development of robotics applications.
Duration: 2 x 90 minutes
Format: interactive, with theoretical and practical application examples
Optional: bringing your own laptop is an advantage for the interactive parts of the workshop
WS: RIEGL VZ-600i 3D Laser Scanner - Acquisition of highly accurate static and kinematic data with just one device
ABSTRACT. With the RIEGL VZ-600i laser scanner it is possible to acquire both static and kinematic data with high accuracy. In just 25 seconds, the scanner finishes a static scan with a resolution of 6 mm at a distance of 10 metres and takes the corresponding images of the scan area. Several scan positions are quickly and automatically registered to each other on the scanner. The result is a highly accurate georeferenced point cloud with millions of points in which measurements can be taken with millimeter precision. With the new RIEGL VZ-i Series Kinematic App, the VZ-600i 3D laser scanner can acquire highly accurate kinematic data without the need for additional equipment. The scanner is simply mounted on any platform such as a car, backpack or boat and can acquire data from large areas in a relatively short time. Directly before data acquisition, quick adjustments to the settings can be made in the Kinematic App. GNSS correction data is recorded during kinematic scanning using either Real-Time Kinematic (RTK) or Post-Processed Kinematic (PPK) with a base station. In order to calculate an accurate trajectory, the data is acquired during kinematic scanning in a so-called ‘Rotating Frame’ mode, in which the scanner continuously rotates and captures 360° data of the environment. This mode serves as the basic mode for precise trajectory calculation. As an additional mode, the so-called ‘Fixed Frame’ mode can be executed, in which the scanner is fixed in one direction and continuously captures data without rotation. This enables a point cloud with evenly spaced points.
Using the One Touch Processing Wizard, the data is analyzed quickly and easily in the RIEGL RiSCAN PRO software after acquisition. The necessary processing steps are selected and processed automatically one after the other. The result is a highly accurate point cloud. With the corrected GNSS data, the resulting point cloud is already localized with centimeter accuracy. The accuracy within the point cloud is again in the millimeter range. This kinematic function extends the application possibilities of the RIEGL VZ-600i to a broad spectrum and the user has both functions, static and kinematic, available in just one device.
The aim of the workshop is to show interested parties a typical workflow for both static and kinematic data acquisition using live data acquisition on site. After the acquisition, the first automatic processing steps in RiSCAN PRO will be shown in order to obtain a finished, filtered and georeferenced point cloud. The workshop is intended to familiarize interested parties with static and kinematic 3D laser scanning and inform them about the latest innovative developments of the RIEGL VZ-600i 3D laser scanner.
The workshop lasts 75 minutes and is aimed at participants who would like to experience state-of-the-art 3D laser scanning data acquisition and analysis at first hand and learn about the latest developments in this field.
ABSTRACT. Earth observation (EO) is a valuable source of information for both immediate humanitarian response and planning long-term socio-economic interventions. In a well-grounded partnership with Médecins Sans Frontières (MSF, Doctors Without Borders), the Christian Doppler Laboratory for geospatial and EO-based humanitarian technologies (CD lab GEOHUM) investigates and further develops newly emerging technologies at the interface of satellite Earth observation and geospatial information (GI) to support humanitarian operations. The CDL is dedicated to three research areas: (1) information extraction from EO data, in particular deep learning-based information extraction for dwelling extraction and damage assessments, improving the analysis of radar satellite data (especially for flood mapping), and exploring the potential of big EO data for land cover mapping, flood mapping and fire detection; (2) data integration, where EO primary data is combined with data and information from other sources such as OpenStreetMap, institutional actors and statistical data. A toolset for data quality control, data aggregation, spatial regionalisation, and validation makes it possible to employ robust data assimilation strategies, resulting in more accurate information products on a high semantic level. Several diverse use cases are defined to test and apply this toolset (e.g. malaria mapping, settlement and population analysis, or climate-change-related impact assessments). Finally, (3) the effectiveness of geospatial data in humanitarian operations depends not only on technological advancements but also on clear communication and ethical considerations. Especially for information products that build upon a multitude of input data, understanding and communicating the uncertainties involved in the production process is fundamental.
The best and most sophisticated information product is of little use if users do not understand the inherent uncertainties and therefore cannot make confident decisions based on it. Another critical element for the operationalisation of tools and products, and for ensuring consistency, is the reproducibility of the workflows. Ethical challenges related to geospatial data use, particularly privacy concerns in humanitarian settings, are also being addressed. In this workshop we will provide methodological insights into the various research areas of the CDL and share practical experiences from MSF’s operational GIS support. Participants will gain a deeper understanding of the role of EO and GI in humanitarian response, the challenges of geospatial data integration, and strategies to ensure effective use of GIS technologies in crisis settings.
Duration: 2x 75 minutes
Format: The first slot will feature several presentations, highlighting research performed in the CDL GEOHUM and operational mapping activities of MSF. The second slot will focus on the Missing Maps initiative for collaborative mapping and will include some practical components such as demonstrations of specific tools.
Target group: researchers, students, and practitioners with an interest in humanitarian applications
Spatiotemporal Correlation Between Biophysical Parameters and Land Surface Dynamics in an Archaeological Region of Bangladesh
ABSTRACT. The Mainamati-Lalmai hilly region in southeastern Bangladesh is renowned for its enormous historical, cultural, and environmental value, including 15 well-preserved archaeological sites and ancient Buddhist monasteries from the 7th century. However, urbanization has profoundly transformed the landscape of this area over the last two decades. This study utilizes Landsat 5 TM and 8 OLI imagery to analyze land use and land cover (LULC) changes, land surface temperature (LST), and biophysical parameters (NDVI and NDBI) from 2004 to 2025, using geospatial software and the Google Earth Engine platform. The land use of the research area has been classified into four categories: crop land, built-up land, forest cover, and waterbodies. The results show that built-up land has grown significantly while the other land classes have decreased, most notably forest cover. Built-up land expanded from 212.35 ha in 2004 to 1481.37 ha in 2025 at the cost of the other land classes, in particular forest cover, which was reduced by 36.9% overall over the last 21 years. The LST in this area has risen by more than 6°C, with temperatures increasing across approximately 75% of the area since 2004, and NDVI values have shown a declining trend. LST and NDVI show a strong negative correlation, while LST and NDBI exhibit a strong positive correlation. The findings highlight the rapid, unplanned urbanization in this ecologically sensitive area, which is likely to have a massive impact on the archaeological sites as well. This study recommends urgent steps toward sustainable land management and urban planning strategies, backed by specific legislation, to preserve the archaeological sites and the environment.
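For readers unfamiliar with the indices named in this abstract: NDVI and NDBI are simple normalized band ratios. A minimal NumPy sketch (illustrative only, not the study's actual processing chain; for Landsat 8 OLI the red, NIR and SWIR1 bands are bands 4, 5 and 6):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index: (SWIR1 - NIR) / (SWIR1 + NIR)."""
    swir1, nir = np.asarray(swir1, float), np.asarray(nir, float)
    return (swir1 - nir) / (swir1 + nir)

# Toy reflectance values: a vegetated pixel followed by a built-up pixel.
print(ndvi([0.45, 0.30], [0.05, 0.25]))  # high for vegetation
print(ndbi([0.10, 0.35], [0.45, 0.30]))  # high for built-up surfaces
```

The same functions apply unchanged to full raster arrays, which is how such indices are typically mapped over a study area.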
Evolution of the geoinformatics data structures used for the prediction of rock mass deformation systems in relation to the scientific concepts
ABSTRACT. The paper presents analyses and classifications of numerical geological and mining data structures used in calculations of rock mass deformation caused by underground mining exploitation. The analysis focuses on the evolution of the data structures of the information systems used at KGHM Polska Miedź S.A. at different times to predict the effects of underground mining. In the results section, the author analyses the differences between typical textual IT data structures used for spatial data processing and the more advanced spatial data structures adapted for spatial data analysis. The paper emphasizes that spatial analysis refers to the representation of the current state of objects in computer systems, while prediction involves modelling future changes based on specific rules and algorithms. The paper concludes that the future of deformation forecasting lies in open and scalable geoinformatics systems that integrate a variety of data models and enable more efficient management of spatial information.
The DANSER Project: a Mission Towards Sediment Balance Restoration in the Danube Basin
ABSTRACT. The Danube River, one of Europe’s most ecologically and economically significant rivers, faces critical challenges due to sediment imbalance caused by human interventions such as flood protection measures, channelization, sediment excavation, hydropower generation, and land-based activities. These disruptions have led to environmental stressors, increased flood risks, reduced navigability, and loss of biodiversity, and they require a holistic and sustainable approach to sediment management. The DANube SEdiment Restoration project (DANSER), a Horizon Europe funded initiative, aims at restoring sediment balance and improving sediment flow and quality within the Danube River-Black Sea system. It employs an interdisciplinary strategy, combining advanced sediment monitoring technologies, hydrological modeling, biodiversity assessments, and active and passive sediment management interventions. State-of-the-art research methodologies will be demonstrated in thirteen pilot sites, chosen to represent the upper, middle, and lower Danube basin. Within this large-scale initiative, we will develop a Spatial Data Infrastructure (SDI) and digital platform to define mechanisms and protocols for data harmonization, filtering, stabilization, storage and flow. The SDI will ensure standards compliance (OGC, INSPIRE, ISO, W3C), seamless interoperability, and device and operating system independence. The digital platform interface will support three main modules: (1) a spatio-temporal mapping interface, (2) a database management and visualization dashboard, and (3) community (biunivocal) interactive modules. Capitalization of prior and new knowledge to be included in the digital portal will encompass scientific papers, sediment characterization datasets, Copernicus datasets and services, as well as previous projects and initiatives. By creating a harmonized and interoperable SDI architecture, DANSER will enhance the understanding of sediment dynamics, promote evidence-based decision-making, and facilitate the replication of best practices across European river basins.
Application of the bi-parabolic Temperature Vegetation Dryness Index (TVDI) for analysing soil moisture in the Ratschitschacher Moor
ABSTRACT. As climate change advances rapidly and the protection of peatlands can contribute to protecting the climate, this study investigates the surface moisture of a peatland. Surface moisture is a frequently used parameter for analysing the water balance and was calculated in this contribution using the Temperature Vegetation Dryness Index. First results indicate an overall healthy peatland, although individual areas should be monitored further.
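The TVDI relates land surface temperature (LST) to the dry and wet edges of the LST-NDVI feature space; in the bi-parabolic variant, both edges are fitted as second-order polynomials of NDVI. A hedged Python sketch (the edge coefficients below are hypothetical, not those fitted for the Ratschitschacher Moor):

```python
import numpy as np

def tvdi(lst, ndvi, dry_coeffs, wet_coeffs):
    """Temperature Vegetation Dryness Index with parabolic edges.

    dry_coeffs / wet_coeffs are (a, b, c) of the fitted polynomials
    Ts_edge = a + b*NDVI + c*NDVI**2 (the 'bi-parabolic' variant fits
    both the dry and the wet edge as parabolas).
    TVDI = (LST - Ts_wet) / (Ts_dry - Ts_wet): 0 means wet, 1 means dry.
    """
    lst, ndvi = np.asarray(lst, float), np.asarray(ndvi, float)
    a_d, b_d, c_d = dry_coeffs
    a_w, b_w, c_w = wet_coeffs
    ts_dry = a_d + b_d * ndvi + c_d * ndvi**2
    ts_wet = a_w + b_w * ndvi + c_w * ndvi**2
    return (lst - ts_wet) / (ts_dry - ts_wet)

# Two pixels (LST in kelvin, NDVI unitless) with hypothetical edge fits:
vals = tvdi([300.0, 310.0], [0.6, 0.3],
            dry_coeffs=(320.0, -15.0, -5.0), wet_coeffs=(295.0, -3.0, 0.0))
print(vals)  # the warmer, sparser pixel scores drier
```

In practice the edge coefficients are estimated per scene by regressing the maximum and minimum LST observed in narrow NDVI bins.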
WS: FME Workshop - "Spend more time using data, and less time fighting it"
ABSTRACT. Since the 1990s, FME has been known as an indispensable ETL tool for converting and transforming geodata between different formats in order to make them interchangeable and usable by different systems. FME quickly gained global recognition in the field of geoinformatics and continues to accompany many GIS professionals to this day.
With the integration of ever more data formats and the ability to "operate" different systems and automate data processes between them, FME has become increasingly comprehensive and can nowadays present itself as a complete enterprise data integration platform - with the unfair advantage of also being the world champion in geodata.
This Accelerator-Workshop is intended to teach the audience how to get the most out of the platform by demonstrating how to automate data integration workflows and connect to 450+ formats – based on the motto: “Spend more time using data, and less time fighting it”.
WS: Advancing geospatial and EO technologies for humanitarian response
ABSTRACT. Earth observation (EO) is a valuable source of information for both immediate humanitarian response and planning long-term socio-economic interventions. In a well-grounded partnership with Médecins Sans Frontières (MSF, Doctors Without Borders), the Christian Doppler Laboratory for geospatial and EO-based humanitarian technologies (CD lab GEOHUM) investigates and further develops newly emerging technologies at the interface of satellite Earth observation and geospatial information (GI) to support humanitarian operations. The CDL is dedicated to three research areas: (1) information extraction from EO data, in particular deep learning-based information extraction for dwelling extraction and damage assessments, improving the analysis of radar satellite data (especially for flood mapping), and exploring the potential of big EO data for land cover mapping, flood mapping and fire detection; (2) data integration, where EO primary data is combined with data and information from other sources such as OpenStreetMap, institutional actors and statistical data. A toolset for data quality control, data aggregation, spatial regionalisation, and validation makes it possible to employ robust data assimilation strategies, resulting in more accurate information products on a high semantic level. Several diverse use cases are defined to test and apply this toolset (e.g. malaria mapping, settlement and population analysis, or climate-change-related impact assessments). Finally, (3) the effectiveness of geospatial data in humanitarian operations depends not only on technological advancements but also on clear communication and ethical considerations. Especially for information products that build upon a multitude of input data, understanding and communicating the uncertainties involved in the production process is fundamental.
The best and most sophisticated information product is of little use if users do not understand the inherent uncertainties and therefore cannot make confident decisions based on it. Another critical element for the operationalisation of tools and products, and for ensuring consistency, is the reproducibility of the workflows. Ethical challenges related to geospatial data use, particularly privacy concerns in humanitarian settings, are also being addressed. In this workshop we will provide methodological insights into the various research areas of the CDL and share practical experiences from MSF’s operational GIS support. Participants will gain a deeper understanding of the role of EO and GI in humanitarian response, the challenges of geospatial data integration, and strategies to ensure effective use of GIS technologies in crisis settings.
Duration: 2x 75 minutes
Format: The first slot will feature several presentations, highlighting research performed in the CDL GEOHUM and operational mapping activities of MSF. The second slot will focus on the Missing Maps initiative for collaborative mapping and will include some practical components such as demonstrations of specific tools.
Target group: researchers, students, and practitioners with an interest in humanitarian applications
ABSTRACT. Many GIS courses are methodology- or tools-based, but we know that GIS is data-dependent and that not all geodata are equally valuable or appropriate.
The geodata ecosystem in Europe has evolved substantially over the past decade, so this seminar gives us an opportunity to ideate possible courses which would introduce students to the wide variety of geodata sources available. Sure, there's INSPIRE and some national SDIs, but today we can also access rich geodata sources that are "fit for purpose": from Copernicus and other space programs, from analysis-ready data platforms such as Esri's Living Atlas, from OSM and Overture, Mapillary, Citizen Science initiatives and other crowdsourced collections, from EU and national census data and commercial providers (TomTom etc.); we can fly our drone, import BIM data, collect data in situ with our mobile devices, and the list goes on. Should we be teaching dedicated geodata courses? If so, does a focus on European data sources make sense?
Join us to discuss this critical topic and perhaps some interesting collaboration will result from it.
WS: Analysing massive open human mobility data in R using spanishoddata, duckdb and flowmaps
ABSTRACT. ==== Introduction
Large-scale human mobility datasets provide unprecedented opportunities to analyze and gain insights into movement patterns. Such insights are critical for fields ranging from transport planning and epidemiology to socio-spatial inequality research and climate-change mitigation. Until recently, access to human mobility data was the privilege of a few researchers. Thanks to countries like Spain, which pioneered making high-resolution aggregated human mobility data open, such data is becoming increasingly accessible. Thanks to the Multi-MNO project by Eurostat ( https://cros.ec.europa.eu/landing-page/multi-mno-project ), similar mobility data may soon be widely available as part of official statistics across the European Union. However, the complexity and sheer volume of this data present practical challenges related to data acquisition, efficient processing, geographic disaggregation, network representation, and interactive visualization.
The proposed workshop addresses these challenges by showcasing end-to-end workflows that harness newly developed and state-of-the-art R packages and methods. Participants will learn how to acquire and manage multi-gigabyte mobility datasets with the `spanishoddata` and `duckdb` R packages, and how to combine and compare mobility flows across space and time by creating informative aggregate flow visualizations with the `flowmapper` and `flowmapblue` R packages.
Spanish open mobility data is used as a case study. This data contains anonymized and grouped flows between more than 3500 locations in Spain with hourly intervals across more than 3 full years. The flows are further split by age and income groups, sex, and activity type (home, work, regular, irregular) at both origin and destination, thereby presenting a universe of opportunities for analysis and research questions to explore.
==== Audience
The target audience is anyone interested in human mobility data with applications in transport, epidemiology, socio-spatial inequalities, or similar fields. We expect attendees to be familiar with RStudio/Positron/VScode or a similar IDE and with R language basics, including `tidyverse` packages such as `dplyr` and `ggplot2`. Attendees would also benefit from prior knowledge of the `sf` package and the basics of working with spatial data in R, but this is not required.
The objective of this tutorial is to introduce attendees to approaches for analyzing large-scale open human mobility data using consumer-grade hardware (a basic laptop with 8GB of RAM is sufficient). Although our example focuses on Spain, these approaches (and the associated software, such as `DuckDB`) are universally applicable and will become increasingly relevant. Thanks to the Multi-MNO project by Eurostat, similar mobility data may soon be widely available as part of official statistics across the European Union, extending beyond Spain. Consequently, the AGIT community will gain insight into the future of open large-scale human mobility data and be well-prepared for its wider availability in additional countries.
==== Format
The tutorial will take 75 minutes and consist of several sections: (1) Getting the Open Human Mobility Data in a Reproducible Way, (2) Working with Large-Scale Human Mobility Data using DuckDB, (3) Visualization of Human Mobility Flows. The workshop will start with a 15-minute presentation explaining the data, main concepts and techniques, as well as a 10-minute demo of the main functions in all relevant packages. Participants will have 50 minutes of hands-on activities afterwards. Slides summarizing the key points will be provided to support understanding and improve the accessibility of the tutorial. The exercises will be thoroughly explained, similar to the online vignettes of the `spanishoddata` package at https://ropenspain.github.io/spanishoddata/ .
Due to the large size of the human mobility dataset (over 150 GB for three full years of data), we will provide pre-converted versions of the data in `DuckDB` format, covering different time periods. This will ensure that the workshop goes smoothly and that participants do not overload the Wi-Fi with large file downloads.
==== Equipment
Attendees will need to bring their own laptops. Reasonably recent hardware with at least 8GB of RAM is recommended. Laptops should have internet access, a recent R version installed, and some IDE (e.g., RStudio, VScode, Positron).
ABSTRACT. Data interoperability with other systems can only be guaranteed if the recipient understands the content in exactly the way it was intended when it was generated. This requires an agreement, which can be achieved by providing product manuals that describe the function and significance of each unit or component. Historically, this was achieved by providing a comprehensive glossary. When data transfer was handled via physical media, providing a manual was not an obstacle: depending on the complexity of the dataset, the recipient would work through it in detail, quickly gaining a certain level of expertise. In the era of data exchange via W3C interfaces, the set of addressees is formally unlimited. This requires multilateral compliance in providing structured information.
Semantic context plays an essential role in loss-free transmission of information; semantic artefacts must therefore be explicitly represented. W3C and other standardisation organisations, such as ISO, OGC and IEEE, offer semantic technologies for semantic interoperability.
Semantic interoperability of spatial data, metadata and services is based on agreed schemas, data models, and harmonised or published controlled vocabularies, which are covered by the term semantic artefact.
Considerable progress has been made by the spatial data community in content management and referencing semantic artefacts. A major driving force behind this has been the EU's INSPIRE Directive. However, operational management of registry systems is proving very difficult due to widely diverging requirements and technical developments. Within the REST-GDI-AGRAR project, the consortium set up a pilot data infrastructure and took this opportunity to test different registry systems.
The aim of this workshop is to bring together specialists from a range of different areas to discuss their experiences and expectations of systems and requirements. We will provide technical input so that we can use a shared knowledge base to discuss actual requirements and draw up a joint roadmap.
Workshop Format:
Duration: 90 minutes
Participants: 10–30 invited stakeholders (diverse roles and domains, GDI-DE, NHM, UBA, …)
Structure:
a. Welcome & Introduction
b. Presentation: Overview of the terminology service and registry goals (20 min)
c. Lightning Talks: Short inputs from 3–4 users or developers (30 min)
d. Breakout Sessions & Discussion: Synthesis of group results (30 min)
e. Next Steps & Feedback Collection (15 min)
Expected Outcomes:
1. A shared understanding of community needs
2. A prioritized list of functional and non-functional requirements
3. Community engagement roadmap