IRIS25: INTERNATIONALES RECHTSINFORMATIK SYMPOSION 2025
PROGRAM FOR SATURDAY, FEBRUARY 22ND

09:00-10:30 Session 18A: Interdisciplinary Session on AI, Privacy & Law I
09:00
Der Einsatz von KI-Tools im Maschinenbau
PRESENTER: Elisa Girleanu

ABSTRACT. The presentation deals with the use of a typical AI tool in mechanical engineering. It addresses the question of the conditions under which an AI tool falls within the high-risk category of the AI Act, with the focus on the interpretation of the term "safety component" in interplay with the Machinery Regulation. It further analyses to what extent Retrieval Augmented Generation (RAG) methods are to be regarded as training of an AI system and in which cases training data are present. This is followed by an account of the quality requirements for AI development data under Art 10 AI Act and a brief outline of the data protection requirements where personal data are processed by the AI tool.

09:30
An Interdisciplinary Approach to Adopting Generative Artificial Intelligence in Legal Practice

ABSTRACT. This presentation provides an overview of an academic research project investigating the impact of generative artificial intelligence (genAI) and Large Language Models (LLMs) on the European legal industry. The central research question addresses the increasing interest from legal practitioners, law firms, and legal departments in adopting genAI (Strom, 2024). While these technologies are a prevalent topic among lawyers, the legal tech market is also expanding rapidly, with new, domain-specific models for the legal sector (e.g., Harvey AI, Leya, CoCounsel, AnyLawyer) (Pierce, Goutos, 2024; Iqbal, 2023). However, the critical question remains: are such models genuinely beneficial for legal practice, and if so, in what ways? What are the main barriers to genAI adoption in legal practice, and are they legal or managerial in nature? This study explores practical use cases where genAI and LLMs may add value for legal professionals, examining the benefits of their adoption alongside key challenges. For instance, while the most obvious advantage of genAI might be increased efficiency in time spent on legal tasks, the 'AI efficiency paradox' arises in the context of the billable hour pricing model (Palmer, 2024). In this scenario, the efficiency benefit might not seem essential. However, considering genAI as an always-available legal assistant could enhance the quality of legal work (Armour, Parnham, Sako, 2020) and in consequence attract more clients or justify higher fees over time. This, of course, requires risk-mitigation measures, such as ensuring high quality of implemented AI systems and thorough information verification (Villasenor, 2024). The benefits and challenges related to genAI adoption in legal work are truly interdisciplinary in nature. The example of efficiency gains versus the billable hour model indicates that genAI adoption is not only a legal question but also involves managerial considerations in running a legal practice.
This interdisciplinary approach is at the heart of my research, which I aim to present at the Legal Informatics Symposium.

Bibliography:
Armour, J., Parnham, R., Sako, M. (2020). Augmented Lawyering, European Corporate Governance Institute - Law Working Paper 558/2020, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3688896
Iqbal, U. (2023). From Knowledge Management to Intelligence Engineering - A practical approach to building AI inside the law-firm using open-source Large Language Models, https://ceur-ws.org/Vol-3423/paper5.pdf
Palmer, M. (2024). The AI Revolution in Legal and the Billable Hour: Is the End Near?, https://www.2civility.org/the-ai-revolution-in-legal-and-the-billable-hour-is-the-end-near/
Pierce, N., Goutos, S. (2024). Why Lawyers Must Responsibly Embrace Generative AI, Berkeley Business Law Journal, Vol. 21, No. 2, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4477704
Strom, R. (2024). Law Firms Aren’t Behind the Generative AI Adoption Curve—Yet, https://news.bloomberglaw.com/business-and-practice/law-firms-arent-behind-the-generative-ai-adoption-curve-yet
Villasenor, J. (2024). Generative Artificial Intelligence and the Practice of Law: Impact, Opportunities, and Risks, Minnesota Journal of Law, Science and Technology, Volume 25, Issue 2, https://scholarship.law.umn.edu/cgi/viewcontent.cgi?article=1563&context=mjlst

09:00-10:30 Session 18B: E-Commerce II
09:00
Challenging Data Monopolies: The Essential Facilities Doctrine as a Means to Enhance Competition in the EU

ABSTRACT. Data monopolies are a pressing challenge for the EU digital economy, affecting multiple markets, from online and social platforms to the emerging generative AI market. Among the many approaches under consideration, the essential facilities doctrine has gained recognition as a tool to combat data monopolies and has found reflection in EU and national law. This article explores the limits of applying the doctrine as a solution to data monopolies in its almost pure form, as in the German Act Against Restraints of Competition, as well as in the form of a "reflection" in the Data Act and the Digital Markets Act. It mainly explores who should decide which data constitute an essential facility and how different access regimes to the data made available may work against the opening up of the digital market. The main contribution of this article is a critique of the essential facilities doctrine when used in a concrete, practical setting as a tool to regulate data monopolies; it thereby moves the discussion of regulating data monopolies forward while highlighting the specifics of data monopolies.

09:30
AI-Generated Outputs and Unfair Competition: Examples from China and the United States (online)

ABSTRACT. In recent years, AI outputs have emerged in creative industries (e.g., literature, music, visual arts), transforming creative processes and raising fascinating legal questions concerning their status. Until now, courts, copyright agencies, and scholars in China and the United States have addressed AI-generated outputs as a copyright issue. Accordingly, the paper will employ a comparative legal analysis of the PRC Copyright Law and the U.S. Copyright Act, including court decisions, copyright policies, and academic efforts in both countries. However, I will propose an alternative framework for AI-generated outputs based on the PRC Anti-Unfair Competition Law (AUCL), which can be regarded as an interim solution between copyright protection and the public domain. The paper will delve into how Article 2 of the AUCL, which addresses general principles of law such as fairness, equality, good faith, and business ethics, and the Supreme People’s Court Judicial Interpretation (2022) have been applied in Chinese court decisions to uphold fair competition practices and foster innovation. I will draw an analogy to online sports broadcasting in China, and discuss the benefits and limitations of the AUCL as a potential solution. Finally, this paper aims to introduce a quasi-copyright protection mechanism for AI-generated works in China and outline a path that could also be adopted by American legislators.

10:00
Unwahrheiten im Netz

ABSTRACT. The present differs from the second half of the 20th century in that we can obtain information from all directions instead of from only a few sources. A growing part of the population informs itself about reality online through a large and heterogeneous variety of media: e-mail briefings and newsletters, news websites, blogs, social media, online magazines, podcasts and video channels, forums, discussion platforms, etc. Television, radio and newspapers, by contrast, play an ever smaller role as news sources.

In this situation there is a growing perception that untruths on the net are a problem and that something must be done about them. The label "fake news" is often used as a synonym for anything that causes unease because of its perceived untruth. This also applies to everything produced with artificial intelligence.

This contribution analyses the phenomena encountered and presents a structured overview of the different phenomena, one that makes it possible to recognise the unlike as unlike and the related as related. Aesthetic criteria are also taken into account. The phenomena are then situated within everyday culture and everyday challenges, and the options for action available to limit them are examined, including their feasibility, side effects and risks. Building on this, a structured overview of the options for action is derived.

The contribution builds on prior social science research by various researchers. For example, in the area of deepfakes, results from a TA-Swiss study are cited and feed into the derivation of the overview, and in the area of alternative facts, the research results of Nils C. Kumar are cited. Work from the humanities, for example on artefact collages made up of dozens of components and on folklore, is also drawn upon. The aim is to bring together the diverse existing knowledge and, through structured overviews, to provide a rational basis for the political and legal discourse on possible state interventions.

To steer clear of the sidetracking debate over "what is truth", the contribution speaks throughout of "perceived untruths". A distinction is drawn between "largely verifiable", "not verifiable" and "partially verifiable", but not between true and false. The role of AI is considered both with regard to the production of media documents (and the detection of such production) and with regard to their dissemination on the net.

The questions addressed include in particular: What possibilities and limits exist for regulatory measures? What options are there for educational initiatives? What can be achieved with technical solutions, and how can these be supported by regulation? Where are support measures for victims needed to complement the criminal prosecution of perpetrators?

(The paper is in progress and will be submitted in December.)

09:00-10:30 Session 18C: IP-Recht III / IP Law III
09:00
Präventive SBOM-gestützte Lizenzchecks nach ISO-Norm - Erfahrungsbericht und Praxismodell

ABSTRACT. The contribution deals with the practical handling of open source licences in the day-to-day work of a public IT service provider. Compliance with copyright rules is supported and enabled by process-based open source licence compliance in accordance with ISO/IEC 5230. Particularly in a public enterprise, it matters that lawfulness is verifiably ensured even when "free" open source modules are used. This is achieved by aligning procedures with the ISO standard and by corresponding software quality management, which reacts to a possible compliance violation with a build failure.

Circumvention risks are pointed out, as is the forensic relevance; an example of an SBOM file rounds off the contribution.
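A preventive SBOM-based licence check of the kind described can be sketched as follows. This is a minimal illustration, not the provider's actual tooling: it assumes a CycloneDX-style JSON SBOM and a purely illustrative denylist of SPDX licence identifiers; the real policy would come from the organisation's ISO/IEC 5230-aligned compliance process.

```python
import json

# Illustrative denylist of SPDX licence identifiers; the real list would be
# derived from the organisation's licence compliance policy.
DENYLIST = {"AGPL-3.0-only", "SSPL-1.0"}

def check_sbom(sbom: dict) -> list:
    """Return (component name, licence id) pairs that violate the policy.

    Assumes a CycloneDX-style SBOM: a dict with a "components" list, each
    entry carrying "name" and a "licenses" list of
    {"license": {"id": "<SPDX id>"}} objects.
    """
    violations = []
    for comp in sbom.get("components", []):
        for lic in comp.get("licenses", []):
            spdx_id = lic.get("license", {}).get("id")
            if spdx_id in DENYLIST:
                violations.append((comp.get("name"), spdx_id))
    return violations

def main(path: str) -> int:
    """Check an SBOM file; a non-zero return code fails the build step."""
    with open(path) as f:
        violations = check_sbom(json.load(f))
    for name, spdx_id in violations:
        print(f"licence violation: {name} ({spdx_id})")
    return 1 if violations else 0
```

Wired into CI as, for example, `raise SystemExit(main("sbom.json"))`, such a check turns a licence violation into exactly the build failure described above.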

09:30
Verbatim Memorisation in Language Models and EU Copyright Law

ABSTRACT. Empirical studies suggest that, although they technically do not store the raw training dataset, language models - statistical models assigning a probability to a sequence of words - may reproduce verbatim text sequences from their training data, the so-called memorisation and regurgitation of training data. If language models are trained on publicly available data, such memorisation might thus lead to infringement of copyright and database rights. The recently adopted set of two exceptions from copyright and database protection for purposes of text and data mining, introduced by the CDSM Directive, could emerge as pivotal when aiming to justify the use of publicly available data to train artificial intelligence. However, the applicability of the text and data mining exceptions is limited as to the purpose of generating new information as well as to the scope of permitted actions, permitting solely reproduction or extraction of protected content. Although language models adopt additional measures to prevent data memorisation and dissemination of verbatim snippets - such as de-duplication or output filters - these measures might not be bulletproof, especially due to jailbreaking, which may manipulate AI models into bypassing such measures. The question remains: is there a meaningful solution that prevents copyright infringement without hindering the training of language models on publicly available data?
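One possible form of the output filters mentioned above is a verbatim n-gram check: before an output is released, it is compared against an index of n-grams from the training corpus and rejected if it repeats any of them word for word. The sketch below is a simplified illustration under those assumptions (whitespace tokenisation, a small window size); production systems would use the model's own tokeniser and a scalable index such as a Bloom filter.

```python
def ngrams(tokens, n):
    """All contiguous n-token windows of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_index(corpus_texts, n=8):
    """Index every n-gram occurring in the (tokenised) training corpus."""
    index = set()
    for text in corpus_texts:
        index |= ngrams(text.split(), n)
    return index

def filter_output(output_text, index, n=8):
    """Allow an output only if it shares no indexed training n-gram verbatim."""
    return not (ngrams(output_text.split(), n) & index)
```

Note that such a filter catches only exact repetition; lightly paraphrased regurgitation, or jailbreak prompts that ask for output in a transformed encoding, would slip past it, which is one reason the abstract calls these measures not bulletproof.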

10:00
Haftung und Verantwortung bei Sicherheitslücken in Open Source Software
PRESENTER: Katharina Bisset

ABSTRACT. Open source software (OSS) is the backbone of modern IT architectures and plays a central role in numerous commercial and non-commercial applications. Until now, the established practice has been that integrators and OSS providers could minimise legal risk through licence terms or general disclaimers of liability. This enabled agile use and integration of OSS while at the same time reducing the risk for providers.

With current developments at the European level, such as the Cyber Resilience Act and the new Product Liability Directive, this risk landscape is changing considerably. For security-relevant matters, the Cyber Resilience Act exempts OSS only where it is used purely non-commercially; for commercial applications the rules apply in full. The Product Liability Directive, moreover, defines software as a product, which not only entails extensive update obligations but also creates clear liability scenarios. In addition, the new rules make extensive documentation necessary to avoid evidentiary problems.

Content and objectives: The talk will examine the key changes in the legal framework for OSS liability and responsibility in the context of IT security vulnerabilities. Using practical examples, we will explore the possible effects on the various stakeholder groups. Topics include:
• What obligations arise when a security vulnerability is discovered in OSS?
• The role and effect of licence terms: do they really help to minimise liability?
• An overview of stakeholder groups and their rights: product providers, end users and third parties (e.g. parties affected by security vulnerabilities).
• Challenges posed by new threat scenarios, particularly in the context of modern technologies such as large language models (LLMs), which can create new attack vectors.
• The difference in risk perception between independent developers, institutional foundations and commercial providers.

Takeaways: Participants will gain a clear understanding of the current liability risks and new responsibilities in dealing with OSS. They will gain insight into which precautions are now necessary to meet legal requirements and how to protect themselves against potential liability claims. The talk will make clear how important it is to understand the legal context of open source in commercial use, and which preventive steps are required to minimise the risk of security vulnerabilities in the future.

09:00-10:30 Session 18E: European E-Health Data Space III
09:00
Legislating for the European Health Data Space: considering trust in health records in multinational settings and research on economic data related to healthcare as selected problems of primary and secondary use

ABSTRACT. The European Commission is trying to accelerate the integration and homogenisation of electronic health records with a regulation establishing the European Health Data Space. Once approved, this framework should ease recourse to existing health records in cross-border healthcare (primary use) and their reuse in biomedical research and development (secondary use). Unsurprisingly, this techno-optimism runs up against the understanding of personal data as a matter of identity. Moreover, establishing the framework requires completing health informatics at the national level. Several issues deserve clarification. The contribution focuses on trust in sometimes incomplete, outdated and misleading health records in multilingual settings in medical practice, and on the accompanying economic data. One should also consider deploying artificial intelligence in, among other things, completing patient summaries.

09:30
The Impact of Artificial Intelligence on the Twofold Nature of Biobanks
PRESENTER: Leijla Malici

ABSTRACT. Despite their growing relevance in biomedical research, specifically in precision medicine, biobanks still raise ethical, legal, and societal issues, mainly related to the definition of their purposes, ownership of biological materials, and protection of associated data. Biobanks are key instruments in advancing scientific research by providing access to large amounts of biological samples and associated data. The peculiar twofold status of the samples generates uncertainty, primarily regarding ownership and custodianship. Moreover, the informed consent process presents significant challenges regarding the different uses of the samples and the difficulty of adequately informing donors about them, potentially undermining their fundamental rights. With the advent of AI, new challenges have arisen concerning exploitation, affecting not only individual donors but also patient rights. In this contribution, we aim to address the main concerns related to the implementation of AI technologies in biobanking, focusing on balancing the healthcare, public and private interests involved with individual rights.

10:00
REGULATING THE HEALTH DATA SPACE - SOME ELEMENTARY QUESTIONS

ABSTRACT. In the context of the European Union, we are used to talking about different kinds of information spaces. The basic idea is that the freedom of movement of the individual has been complemented by the freedom of movement of data and information. One of the new European spaces is the space for the movement of health data. Here we are dealing both with the transfer of sensitive personal data and, above all, with quality assurance. The transfer of an individual's health data from one scientific environment to another is an exceptionally demanding operation: it is not only a question of ensuring the technological path of the information as such, and the more general transfer of care data is equally demanding. That is why the EU also wants strict rules on the use of anonymised or pseudonymised health data for research, innovation and decision-making. Regulation of the information space for health data is a broad issue. It has been described as the first thematic regulation of the data and information space in accordance with the EU data strategy. We are witnessing an extremely important stage of social development in terms of human dignity, one involving many tensions. For example, there has been and still is an undeniable conflict of values and objectives between our right to self-determination and the various re-uses of medical data. Equally, data security needs to be highlighted in a new way in this context. In this brief presentation I will focus on the vital aspects of the transferability of data and information on the quality of care. This is, as I understand it, the necessary starting point for a new, comprehensive set of regulations. The internationalisation of healthcare has made good progress on the linguistic level, but there is still work to be done to harmonise essential treatment information. AI could give this a new impetus.

11:00-12:30 Session 19A: Interdisciplinary Session on AI, Privacy & Law II
11:00
Understudied and Crucial: Human-Computer Interaction in the Legal Domain

ABSTRACT. Facing the increasing potential for malicious uses of AI, the transparency obligations introduced by the EU AI Act appear justified. However, the perception of AI-generated content plays a significant role in shaping its impact, an area that remains underexplored in the legal domain. Most existing research focuses on the accuracy and efficiency of AI for various tasks, often neglecting technology acceptance and human-computer interaction. This presentation aims to map current research within the legal field and beyond, proposing a research agenda to foster a more comprehensive understanding of AI's impact on law.

11:30
Ethical principles and their importance in the context of regulating Artificial Intelligence

ABSTRACT. The Fourth Industrial Revolution, as proposed by Klaus Schwab, holds that recognising a new era means identifying ruptures in society's modus operandi whose repercussions are impactful enough to justify marking a new moment in the history of humanity. Today's society is experiencing an avalanche of new technologies which are powerful not only in isolation, in their physical, digital or biological spheres, but are being merged and connected with one another, amplifying their results and impacts. Among these new technologies, Artificial Intelligence is undergoing a moment of frenetic development and is used in practically all areas of knowledge; because of the potential level of risk in some of its uses, the push for AI regulation has gained momentum and needs to be properly analysed. In this sense, the question arises as to how ethical principles fare in the face of the advancement of Artificial Intelligence. To answer it, the paper examines the development of AI and its implications; shows how ethical principles were a starting point in the work of regulating AI; highlights the role of regulating Artificial Intelligence; and finally analyses the importance of ethical principles in the context of regulating AI. To this end, the systemic method of Maturana and Varela and bibliographic and documentary research techniques were used. It was possible to conclude that, although we do not yet have strong or general Artificial Intelligence, the potential level of risk posed by current weak AI requires at least minimal regulation. In this regulatory process, ethical principles were very important in identifying and bringing together fundamental values of protection and guarantee, leading to the preparation of several national and international, public and private documents.

The approval of the AI Act and other laws in progress signals the success of efforts to effectively and coercively guarantee minimum safety standards for society. However, since Artificial Intelligence is a technology with a strong potential to change social standards, ethical principles reinforce its capacity to sustain and aggregate in the face of an uncertain future. It is concluded that Artificial Intelligence needs to be regulated, whether minimally or comprehensively, in order to make the guarantees provided by law effective; ethical principles, however, must always be the driving force behind society's relationship with Artificial Intelligence, ensuring that the values we identify as important guide our present and future decisions.

12:00
HOLAI: A METHODOLOGY FOR HOLISTIC LEGAL AI. INTEGRATING INTERDISCIPLINARY, MULTILAYERED, MULTIFUNCTIONAL, CROSS-DOMAIN, AND ADAPTIVE LEGAL INTELLIGENCE

ABSTRACT. A holistic approach to Legal AI is essential to unify research streams and enable comprehensive support for real-world applications. We introduce HOLAI (Holistic Legal AI), a methodology built as a set of principles for the development of a network of specialized microservices across five key dimensions. First, HOLAI is grounded in a robust theoretical foundation, integrating computational, cognitive, and legal-theoretical insights. Second, it models legal reasoning through hybrid approaches, combining machine-learning methods with a stratified approach to symbolic reasoning models. Third, HOLAI prescribes the integration of a range of functionalities, from information retrieval and legal research to decision-support and decision-automation tools. Fourth, it recommends how to accommodate various legal domains, adaptable to jurisdictional and regulatory differences. Finally, HOLAI postulates a dynamic auto-evaluation mechanism, ensuring continuous alignment with legal and ethical requirements. This adaptive network enables a holistic response to the demands of modern legal practice, advancing flexible applications in Legal AI.

11:00-12:30 Session 19B: E-Commerce III
11:00
Mapping Compliance: A Taxonomy for Political Content Analysis under the EU's Digital Electoral Framework

ABSTRACT. The rise of digital platforms has transformed political campaigning, introducing complex regulatory challenges for transparency, content moderation, and the targeting of political advertisements. This paper presents a comprehensive taxonomy for analyzing political content in the EU’s digital electoral landscape, aligning with the requirements set forth in the Digital Services Act (DSA), the Transparency and Targeting of Political Advertising Regulation (TTPA), and the Commission’s Guidelines on the Mitigation of Systemic Risks for Electoral Processes (G-E–DSA). Using a legal doctrinal methodology, we construct a detailed codebook that enables systematic content analysis across user-generated and political ad content to assess compliance with regulatory mandates, including systemic risk identification, ad repository transparency, and sponsor disclosure requirements. Our taxonomy, grounded in legal provisions, is applied empirically through a sample-based annotation process on Very Large Online Platforms (VLOPs), evaluating the adherence of platforms to transparency and electoral integrity obligations. By bridging legal analysis with empirical methodology, this study contributes to the field of digital electoral regulation, providing a robust framework to guide platforms, advertisers, regulators, and auditors in navigating the EU’s evolving digital political landscape.

11:30
PRESENTATION: THE SELF-DRIVING STATE: AUTOMATED DECISION-MAKING IN MODERN GOVERNANCE

ABSTRACT. This presentation explores the concept of the "Self-Driving State", a transformative vision for future governance that leverages digital technology and artificial intelligence to automate state functions. It examines how Legal Digital Twins (digital representations of legislative texts) can be integrated into legislative, executive, and judicial structures to enable efficient, automated administrative processes. The Self-Driving State framework holds potential for enhancing transparency, accountability, and efficiency in governance. A case study on Austrian tax law demonstrates practical applications, providing insight into the implications of the Self-Driving State for modern governance.

12:00
Wartezeitenabfrage für CR-/MRT-Untersuchungen im Internet: eine Chance für Patienten und Gesundheitssystem gleichermaßen?

ABSTRACT. Since the beginning of 2024, www.sozialversicherung.at has offered a query showing how long it takes on average to obtain an appointment for a CT or MRI examination with an external provider of imaging diagnostics at the expense of the health insurance carrier. A predecessor of this waiting-time query on www.sozialversicherung.at was available from 2017 to 2020 on www.netdoctor.at. Even though the number of monthly queries on www.sozialversicherung.at, currently around 700, is modest, the option of an online query is to be welcomed because it creates transparency and thus enables faster access to imaging diagnostics for the individual patient.

11:00-12:30 Session 19C: IP-Recht IV / IP Law IV
11:00
INTELLIGENT STEWARDSHIP OF INDIGENOUS AND MINORITY COMMUNITIES’ CULTURAL HERITAGE OPEN DATA DATASETS

ABSTRACT. Data stewardship is generally understood as encompassing the collection, digitisation, maintenance, curation, storage, analysis, sharing, use and reuse of digital datasets. Since digitisation processes began in the GLAM (Galleries, Libraries, Archives, Museums) sector, many issues and concerns have arisen in the context of digital datasets belonging to Indigenous and minority communities and the rights of Indigenous peoples in decision-making concerning their digital cultural heritage. Indigenous data governance includes both the stewardship and the processes necessary to implement Indigenous control over Indigenous data. However, legislation and practices endorsing open data and open science overlook the rights and interests of Indigenous and minority communities, failing to adequately consider and respect them. This paper examines, from an interdisciplinary perspective, how to develop an open data stewardship policy and protocols for the governance of Indigenous and minority communities' cultural heritage datasets within the GLAM environment. It focuses particularly on the implementation and application of both the FAIR (Findable, Accessible, Interoperable, Reusable) and CARE (Collective Benefit, Authority to Control, Responsibility, Ethics) principles in the GLAM sector. Furthermore, this paper reflects on the issues and challenges posed for data governance and stewardship by the adoption, at EU level, of the Open Data Directive and the Artificial Intelligence Act. The objective is to shed light on how to develop a sustainable, resilient, and accountable data stewardship strategy coherent and consistent with the right of Indigenous peoples and minority communities to foster cultural governance and sovereignty over their data.

11:30
And for My Next Trick...: Private Ordering in the Videogames Industry and the Vanishing Act of Copyright Exhaustion

ABSTRACT. Digital distribution reigns supreme in the video game industry, with market data showing that digital "sales" constitute the vast majority of purchases. On the surface, this shift could appear to be driven solely by innovation and user convenience. However, the underlying reasons are more complex and are not necessarily driven by consumer autonomy seeking a more convenient purchasing method.

This submission examines the legal nature of video games and the ambiguous status of digital exhaustion under current CJEU case law. It explores how mechanisms such as prevalent private ordering and DRM systems have almost exclusively shaped the current equilibrium in the digital markets for video games.

The submission addresses specific examples of exhaustion-like solutions for the digital market with video games and other digital goods, demonstrating that the current balance shift is not a necessity coming with technological advance, but rather the unintended consequence thereof.

It argues that while digital transactions do not need to, and perhaps should not, mirror those in the physical world, the essential parameters of transactions for digital goods should not be dictated entirely by the rightsholder but by the norms of copyright law, which does not, at the moment, seem to be the case.

Unlike copyright laws, consumer laws seem to recognize the recent shift in balance favoring rightsholders and providers of digital goods and services. This submission contends that the concept of copyright exhaustion may indeed be outdated, but its disappearance warrants significant attention and concern within the current legislative framework.

The submission calls for a re-evaluation of copyright law to ensure it evolves alongside digital distribution practices, advocating for a balanced approach that safeguards rights of users of protected works while respecting the interests of rightsholders.

11:00-12:30 Session 19D: PANEL * AI in Law: Optimisation through Logic and Argumentation Theory?
11:00
AI, Law, and Causality: Bridging Legal Uncertainty with Formal Reasoning and Argumentation (online)

ABSTRACT. AI-related legal disputes often hinge on proving causality, yet establishing a clear link between an AI system’s actions and harm remains a major challenge. The AI Liability Directive (AILD) and revised Product Liability Directive introduce measures like presumptive causal links and evidence disclosure, but tensions arise between these rules and the standardization-driven AI Act, particularly in assessing factual and legal causation. This talk explores how formal logic and argumentation theory can enhance causal analysis in AI liability cases and beyond. Logic-based methods provide structured reasoning tools to assess causation in complex disputes, addressing challenges like overdetermination, pre-emption, and omissions. Argumentation frameworks help model competing causal claims, clarify evidentiary burdens, and improve judicial reasoning. Applied to case studies in medical AI, autonomous vehicles, and employment discrimination, these approaches offer a more precise and transparent way to evaluate AI-related harm. Integrating logic and argumentation theory into legal AI systems can improve the coherence of liability rules, reduce legal uncertainty, and support courts, lawyers, and policymakers. This aligns with the panel’s broader inquiry into whether these theoretical approaches can optimise AI applications in law, ensuring more reliable and fair legal outcomes.
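The argumentation frameworks mentioned in the abstract can be made concrete with a small sketch. The following Python snippet computes the grounded extension of a Dung-style abstract argumentation framework by iterating its characteristic function; the three-argument causal dispute, its names, and the attack relation are invented for illustration and are not drawn from the talk.

```python
# Minimal sketch of an abstract argumentation framework (Dung 1995):
# a set of arguments plus an attack relation, with the grounded
# extension computed as the least fixed point of the characteristic
# function F(S) = {a | every attacker of a is attacked by S}.

def grounded_extension(arguments, attacks):
    """Iterate F from the empty set until a fixed point is reached."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((d, b) in attacks for d in s) for b in attackers[a])}
        if nxt == s:
            return s
        s = nxt

# Toy causal dispute: C ("the AI system caused the harm") is attacked
# by D ("an independent intervening cause"), which is in turn attacked
# by E ("the intervention was foreseeable").
args = {"C", "D", "E"}
atts = {("D", "C"), ("E", "D")}
print(sorted(grounded_extension(args, atts)))  # ['C', 'E']
```

Since E defeats the only attacker of C, the grounded (most sceptical) view accepts both E and the causal claim C, illustrating how competing causal arguments and their defeat relations can be evaluated mechanically.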

11:30
Mechanising Logical Systems in Proof Assistants

ABSTRACT. Modern general-purpose proof assistants like Coq, Isabelle, and Lean allow for the efficient and scalable implementation of logical systems based on formal syntax and adequate semantics, including those used in some theories of legal reasoning. In this tutorial talk, I will explain the basic concepts underlying the Coq proof assistant by showing how a simple logic can be modelled and how a corresponding executable decision procedure can be implemented. This is meant to provide a basis for discussion and follow-up talks describing particular industrial applications.
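To give a flavour of such a mechanisation (shown here in Lean rather than Coq, and purely as an illustrative sketch not taken from the talk), a small propositional logic can be modelled as an inductive type, with an executable evaluator acting as a decision procedure under a fixed valuation:

```lean
-- A minimal propositional logic as an inductive type.
inductive Form where
  | var  : Nat → Form
  | neg  : Form → Form
  | conj : Form → Form → Form

-- Executable semantics: evaluate a formula under a valuation `v`.
def eval (v : Nat → Bool) : Form → Bool
  | .var n    => v n
  | .neg f    => !(eval v f)
  | .conj f g => eval v f && eval v g

-- Example: ¬(p ∧ ¬p) evaluates to true under any valuation;
-- here we check it with all variables set to true.
#eval eval (fun _ => true) (.neg (.conj (.var 0) (.neg (.var 0))))
```

Because `eval` is a total, computable function, correctness properties about it (e.g. that a formula is a tautology over a finite set of valuations) can then be stated and proved inside the same system.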

12:00
Certification of computable laws
PRESENTER: Edward Lowdell

ABSTRACT. Computable laws, i.e. laws enforced through the automation of certain processes, are widespread, and so are the issues that arise with them. In particular, obtaining software from legal texts implies a series of translations that are often neglected and end up being poorly performed, sometimes even unwittingly, by the software engineers who deliver the final software. As legal texts can be ambiguous or even inconsistent, engineers are forced to make decisions that affect the meaning and final behaviour of the software, which makes it impossible to certify or homologate these tools. What we propose is a method that systematizes these translations by introducing intermediate steps between the legal text and the software.

Translating the legal text into a formal-language specification makes it possible to: 1) prove, using formal methods, the conformity of the software with the specification, and 2) explain the correspondence between the legal text and the formal specification, making all the choices that have been made publicly available and understandable.
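The proposed pipeline can be sketched in miniature. In the hypothetical Python example below, an invented legal rule is first restated as a formal specification (a predicate), the final software implements it separately, and conformity between the two is checked over the whole input domain; the rule, names, and thresholds are all illustrative, not taken from the submission.

```python
# Step 1: formal specification derived from the (invented) legal text:
# "the benefit is granted to residents aged 18 or over."
def spec_entitled(age: int, resident: bool) -> bool:
    return resident and age >= 18

# Step 2: independently written implementation, as a software engineer
# might produce it from the same legal text.
def software_entitled(age: int, resident: bool) -> bool:
    if not resident:
        return False
    return age >= 18

# Step 3: conformity check — for every input in the (finite) domain of
# interest, the software must agree with the formal specification.
assert all(
    software_entitled(a, r) == spec_entitled(a, r)
    for a in range(0, 130) for r in (True, False)
)
```

For realistic statutes the exhaustive check would be replaced by proof in a formal-methods tool, but the structure (legal text → formal specification → verified software) is the same.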

11:00-12:30 Session 19E: Panel: Software-Supported Prioritisation of Processes in the Digitalisation of Public Administration
Chair:
11:00
Software-Supported Prioritisation of Processes in the Digitalisation of Public Administration

ABSTRACT. In this workshop we will try out a software prototype that allows us to sort a wide variety of digitalisation projects into a sensible order of priority. The approach to success indicators for public-administration digitalisation presented at RVI 23 and evaluated at IRIS 24 has now been cast into software and can be used to meaningfully automate process prioritisation when several OZG, Fokus, or internal processes compete for attention. This makes it possible to allocate personnel and technical resources sensibly across what may seem an insurmountable mountain of processes. Test the software with us, share and generate test cases from your own practice, and help us both to improve the software and to give you insights into the prioritisation of your own processes.
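The general idea of indicator-based process prioritisation can be sketched as a weighted scoring model. The Python snippet below is a minimal illustration only: the indicator names, weights, and example processes are invented and do not reflect the prototype's actual model.

```python
# Hypothetical success indicators with illustrative weights; each
# indicator value is assumed to be normalised into [0, 1].
WEIGHTS = {"case_volume": 0.5, "time_saved": 0.3, "legal_urgency": 0.2}

def priority(indicators: dict) -> float:
    """Weighted sum of the normalised indicator values."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# Two invented administrative processes with made-up indicator values.
processes = {
    "Wohngeld":         {"case_volume": 0.9, "time_saved": 0.6, "legal_urgency": 0.4},
    "Gewerbeanmeldung": {"case_volume": 0.5, "time_saved": 0.8, "legal_urgency": 0.9},
}

# Rank processes by descending priority score.
ranking = sorted(processes, key=lambda p: priority(processes[p]), reverse=True)
print(ranking)  # ['Wohngeld', 'Gewerbeanmeldung']
```

A real tool would of course draw its indicators and weights from the evaluated success-indicator framework rather than from constants, but the ranking step itself reduces to exactly this kind of scoring.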