09:00 | The Use of AI Tools in Mechanical Engineering PRESENTER: Elisa Girleanu ABSTRACT. The talk addresses the use of a typical AI tool in mechanical engineering. It examines under which conditions an AI tool falls within the high-risk category of the AI Act, focusing on the interpretation of the term "safety component" in its interplay with the Machinery Regulation. It further analyses to what extent retrieval-augmented generation (RAG) methods are to be regarded as training of an AI system and in which cases training data are present. This is followed by a presentation of the quality requirements for AI development data under Art. 10 AI Act and a brief outline of the data protection requirements where the AI tool processes personal data. |
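As background to the training question the abstract raises, the RAG mechanism can be sketched in a few lines of Python. The toy documents and the bag-of-words similarity below are our own illustration, not the presenter's material; the point the sketch makes concrete is that retrieval happens at query time and leaves the model's weights untouched, which is why RAG is commonly distinguished from training.

```python
# Minimal retrieval-augmented generation (RAG) sketch: documents are
# retrieved at query time and prepended to the prompt; the language
# model itself is never updated.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = bow(query)
    ranked = sorted(documents, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Assemble the augmented prompt handed to the (frozen) model."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The machine safety manual describes emergency stop procedures.",
    "The cafeteria menu changes every week.",
]
prompt = build_prompt("What are the emergency stop procedures?", docs)
```

Whether ingesting `docs` into such a retrieval index counts legally as "training" on "training data" is exactly the question the talk takes up.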
09:30 | An Interdisciplinary Approach to Adopting Generative Artificial Intelligence in Legal Practice ABSTRACT. This presentation provides an overview of an academic research project investigating the impact of generative artificial intelligence (genAI) and Large Language Models (LLMs) on the European legal industry. The central research question addresses the increasing interest from legal practitioners, law firms, and legal departments in adopting genAI (Strom, 2024). While these technologies are a prevalent topic among lawyers, the legal tech market is also experiencing rapid expansion, with new, domain-specific models for the legal sector (e.g., Harvey AI, Leya, CoCounsel, AnyLawyer) (Pierce, Goutos, 2024; Iqbal, 2023). However, the critical questions remain: are such models genuinely beneficial for legal practice, and if so, in what ways? What are the main barriers to genAI adoption in legal practice, and are they legal or managerial in nature? This study explores practical use cases where genAI and LLMs may add value for legal professionals, examining the benefits of their adoption alongside key challenges. For instance, while the most obvious advantage of genAI might be increased efficiency in time spent on legal tasks, the 'AI efficiency paradox' arises in the context of the billable hour pricing model (Palmer, 2024). In this scenario, the efficiency benefit might not seem essential. However, considering genAI as an always-available legal assistant could enhance the quality of legal work (Armour, Parnham, Sako, 2020) and, as a consequence, attract more clients or justify higher fees over time. This, of course, requires risk-mitigation measures, such as ensuring the high quality of implemented AI systems and thorough information verification (Villasenor, 2024). The benefits and challenges related to genAI adoption in legal work are truly interdisciplinary in nature.
The example of efficiency gains versus the billable hour model indicates that genAI adoption is not only a legal question but also involves managerial considerations in running a legal practice. This interdisciplinary approach is at the heart of my research, which I aim to present at the Legal Informatics Symposium.
Bibliography:
Armour, J., Parnham, R., Sako, M. (2020). Augmented Lawyering, European Corporate Governance Institute - Law Working Paper 558/2020, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3688896
Iqbal, U. (2023). From Knowledge Management to Intelligence Engineering - A practical approach to building AI inside the law-firm using open-source Large Language Models, https://ceur-ws.org/Vol-3423/paper5.pdf
Palmer, M. (2024). The AI Revolution in Legal and the Billable Hour: Is the End Near?, https://www.2civility.org/the-ai-revolution-in-legal-and-the-billable-hour-is-the-end-near/
Pierce, N., Goutos, S. (2024). Why Lawyers Must Responsibly Embrace Generative AI, Berkeley Business Law Journal, Vol. 21, No. 2, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4477704
Strom, R. (2024). Law Firms Aren’t Behind the Generative AI Adoption Curve—Yet, https://news.bloomberglaw.com/business-and-practice/law-firms-arent-behind-the-generative-ai-adoption-curve-yet
Villasenor, J. (2024). Generative Artificial Intelligence and the Practice of Law: Impact, Opportunities, and Risks, Minnesota Journal of Law, Science and Technology, Volume 25, Issue 2, https://scholarship.law.umn.edu/cgi/viewcontent.cgi?article=1563&context=mjlst |
09:00 | ABSTRACT. The contribution addresses the practical handling of open source licences in the day-to-day work of a public-sector IT service provider. Compliance with the rules of copyright law is supported and enabled by process-based open source licence compliance in line with ISO/IEC 5230. Especially in a publicly owned company, it is important to be able to demonstrably ensure lawfulness even when using "free of charge" open source modules. This is achieved by aligning the procedures with the ISO standard and by corresponding software quality management that responds to a potential compliance violation with a build failure. Circumvention risks are pointed out, as is the forensic relevance; an example of an SBOM file rounds off the contribution. |
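The SBOM file itself is not reproduced in the abstract. A minimal software bill of materials in the widely used CycloneDX JSON format might look roughly like the following; the component name, version, and licence here are invented purely for illustration:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-parser",
      "version": "2.4.1",
      "licenses": [ { "license": { "id": "Apache-2.0" } } ]
    }
  ]
}
```

A build pipeline of the kind described could compare the SPDX licence identifiers recorded in such a file against an allow-list and fail the build on any mismatch, which is one way to implement the "build failure on compliance violation" the abstract mentions.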
09:30 | Verbatim Memorisation in Language Models and EU Copyright Law ABSTRACT. Empirical studies suggest that language models, as statistical models assigning a probability to a sequence of words, may reproduce verbatim text sequences from their training data even though they do not technically store the raw training dataset: so-called memorisation and regurgitation of training data. If language models are trained on publicly available data, such memorisation might therefore lead to infringement of copyright and database rights. The recently adopted pair of exceptions from copyright and database protection for purposes of text and data mining, introduced by the CDSM Directive, could prove pivotal when seeking to justify the use of publicly available data to train artificial intelligence. However, the applicability of the text and data mining exceptions is limited both as to the purpose of generating new information and as to the scope of permitted actions, which covers solely the reproduction or extraction of protected content. Although language model providers adopt additional measures to prevent memorisation and the dissemination of verbatim snippets, such as de-duplication or output filters, these measures might not be bulletproof, especially in view of jailbreaking, which may manipulate AI models into bypassing them. The question remains: is there a meaningful solution that prevents copyright infringement while not hindering the training of language models on publicly available data? |
10:00 | Liability and Responsibility for Security Vulnerabilities in Open Source Software PRESENTER: Katharina Bisset ABSTRACT. Open source software (OSS) is the backbone of modern IT architectures and plays a central role in numerous commercial and non-commercial applications. Until now, established practice has allowed integrators and OSS providers to minimise legal risk through licence terms or general disclaimers of liability. This enabled agile use and integration of OSS while at the same time reducing the risk for providers. With current developments at the European level, such as the Cyber Resilience Act and the new Product Liability Directive, this risk landscape is changing considerably. For security-relevant matters, the Cyber Resilience Act provides an exemption for OSS only where it is used on a purely non-commercial basis; for commercial applications the regulation applies in full. The Product Liability Directive moreover defines software as a product, which entails not only extensive update obligations but also creates clear liability scenarios. In addition, the new rules require extensive documentation in order to avoid evidentiary problems. Content and objectives: The talk will highlight the key changes in the legal framework for OSS liability and responsibility in the context of IT security vulnerabilities. Using practical examples, we will examine what effects this could have on different stakeholder groups. Topics include: • What obligations arise when a security vulnerability is discovered in OSS? • The role and effect of licence terms: do they really help to minimise liability? • An overview of stakeholder groups and their rights: product providers, end users, and third parties (e.g. third parties affected by security vulnerabilities).
• Challenges posed by new threat scenarios, particularly in the context of modern technologies such as Large Language Models (LLMs), which can create new attack vectors. • The difference in risk perception between independent developers, institutional foundations, and commercial providers. Take-aways: Participants will gain a clear understanding of the current liability risks and new responsibilities in dealing with OSS. They will gain insight into which precautions are now necessary to meet legal requirements and how to protect themselves against potential liability claims. The talk will make clear how important it is to understand the legal context of open source in commercial use and which preventive steps are required to minimise the risk of security vulnerabilities in the future. |
09:00 | Legislating for the European Health Data Space: considering trust in health records in multinational settings and research on economic data related to healthcare as selected problems of primary and secondary use ABSTRACT. With a regulation establishing the European Health Data Space, the European Commission seeks to accelerate the integration and homogenisation of electronic health records. Once approved, this framework shall ease recourse to existing health records in cross-border healthcare (primary use) and their reuse in biomedical research and development (secondary use). Unsurprisingly, this techno-optimism runs up against the understanding of personal data as a matter of identity. Moreover, establishing the framework requires completing health informatics infrastructure at the national level. Several issues deserve clarification. The contribution focuses on trust in sometimes incomplete, outdated and misleading health records in multilingual settings in medical practice, and on the accompanying economic data. One should also consider deploying artificial intelligence in, among other things, completing patient summaries. |
09:30 | The Impact of Artificial Intelligence on the Twofold Nature of Biobanks PRESENTER: Leijla Malici ABSTRACT. Despite their growing relevance in biomedical research, specifically in precision medicine, biobanks still raise ethical, legal, and societal issues, mainly related to the definition of their purposes, ownership of biological materials, and protection of associated data. Biobanks are key instruments in advancing scientific research by providing access to large amounts of biological samples and associated data. The peculiar twofold status of the samples generates uncertainty, primarily regarding ownership and custodianship. Moreover, the informed consent process presents significant challenges regarding the different uses of the samples and the difficulty of adequately informing donors about them, potentially undermining their fundamental rights. With the advent of AI, new challenges have arisen concerning the rights not only of individual donors but also of patients more broadly. In this contribution, we aim to address the main concerns related to the implementation of AI technologies in biobanking, focusing on balancing the healthcare, public and private interests involved with individual rights. |
11:00 | REGULATING THE HEALTH DATA SPACE - SOME ELEMENTARY QUESTIONS ABSTRACT. In the context of the European Union, we are used to talking about different kinds of information spaces. The basic idea is that the freedom of movement of the individual has been complemented by the freedom of movement of data and information. One of the new European spaces is the space for the movement of health data. Here we are dealing both with the transfer of sensitive personal data and, above all, with quality assurance. The transfer of an individual's health data from one scientific environment to another is an exceptionally demanding operation. It is not only a question of ensuring the technological path of information as such. And the more general transfer of care data is also a very demanding transfer of data. That is why the EU wants strict rules on the use of anonymised or pseudonymised health data for research, innovation and decision-making too. Regulation of the information space for health data is a broad issue. It has been described as the first thematic regulation of the data and information space in accordance with the EU data strategy. We are witnessing an extremely important stage of social development in terms of human dignity, one involving many tensions. For example, there has been and still is an undeniable conflict of values and objectives between our right to self-determination and the various re-uses of medical data. Equally, data security needs to be highlighted in a new way in this context. In this brief presentation I will focus on the vital aspects of the transferability of data and information on the quality of care. This is, as I understand it, the necessary starting point for a new, comprehensive set of regulations. The internationalisation of healthcare has made good progress on the linguistic level, but there is still work to be done to harmonise essential treatment information. AI could give this a new impetus. |
11:00 | Mapping Compliance: A Taxonomy for Political Content Analysis under the EU's Digital Electoral Framework PRESENTER: Marie-Therese Sekwenz ABSTRACT. The rise of digital platforms has transformed political campaigning, introducing complex regulatory challenges for transparency, content moderation, and the targeting of political advertisements. This paper presents a comprehensive taxonomy for analyzing political content in the EU’s digital electoral landscape, aligning with the requirements set forth in the Digital Services Act (DSA), the Transparency and Targeting of Political Advertising Regulation (TTPA), and the Commission’s Guidelines on the Mitigation of Systemic Risks for Electoral Processes (G-E–DSA). Using a legal doctrinal methodology, we construct a detailed codebook that enables systematic content analysis across user-generated and political ad content to assess compliance with regulatory mandates, including systemic risk identification, ad repository transparency, and sponsor disclosure requirements. Our taxonomy, grounded in legal provisions, is applied empirically through a sample-based annotation process on Very Large Online Platforms (VLOPs), evaluating the adherence of platforms to transparency and electoral integrity obligations. By bridging legal analysis with empirical methodology, this study contributes to the field of digital electoral regulation, providing a robust framework to guide platforms, advertisers, regulators, and auditors in navigating the EU’s evolving digital political landscape. |
11:30 | PRESENTATION: THE SELF-DRIVING STATE: AUTOMATED DECISION-MAKING IN MODERN GOVERNANCE ABSTRACT. This presentation explores the concept of the "Self-Driving State", a transformative vision for future governance that leverages digital technology and artificial intelligence to automate state functions. It examines how Legal Digital Twins (digital representations of legislative texts) can be integrated into legislative, executive, and judicial structures to enable efficient, automated administrative processes. The Self-Driving State framework holds potential for enhancing transparency, accountability, and efficiency in governance. A case study on Austrian tax law demonstrates practical applications, providing insight into the implications of the Self-Driving State for modern governance. |
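The idea of a Legal Digital Twin that is executed directly can be illustrated with a deliberately simplified sketch: a statutory rule is kept as a machine-readable data structure and the administrative decision is derived from that same artefact. The brackets and rates below are invented for illustration and are not the actual Austrian tax schedule used in the case study.

```python
# Hypothetical "legal digital twin" sketch: a progressive tax schedule
# is represented as data and executed directly. Brackets and rates are
# invented, NOT the real Austrian schedule.
BRACKETS = [          # (upper bound of bracket, marginal rate)
    (11_000, 0.00),
    (18_000, 0.20),
    (31_000, 0.35),
    (float("inf"), 0.42),
]

def income_tax(income: float) -> float:
    """Apply each marginal rate to the slice of income in its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)
```

Because the rule lives in one machine-readable artefact, amending the statute would mean amending `BRACKETS`, and every automated decision downstream changes with it; that traceability is what makes such representations attractive for transparent, automated administration.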
12:00 | Online Waiting-Time Queries for CT/MRI Examinations: An Opportunity for Patients and the Healthcare System Alike? ABSTRACT. Since the beginning of 2024, www.sozialversicherung.at has offered a query showing how long it takes on average to obtain an appointment for a CT or MRI examination with an external provider of imaging diagnostics at the expense of the health insurance carrier. A predecessor of this waiting-time query on www.sozialversicherung.at was available from 2017 to 2020 on www.netdoctor.at. Even though the number of monthly queries on www.sozialversicherung.at, currently around 700, is modest, the option of an online query is to be welcomed, because it creates transparency and thus enables faster access to imaging diagnostics for the individual patient. |
11:00 | AI, Law, and Causality: Bridging Legal Uncertainty with Formal Reasoning and Argumentation (online) ABSTRACT. AI-related legal disputes often hinge on proving causality, yet establishing a clear link between an AI system’s actions and harm remains a major challenge. The AI Liability Directive (AILD) and revised Product Liability Directive introduce measures like presumptive causal links and evidence disclosure, but tensions arise between these rules and the standardization-driven AI Act, particularly in assessing factual and legal causation. This talk explores how formal logic and argumentation theory can enhance causal analysis in AI liability cases and beyond. Logic-based methods provide structured reasoning tools to assess causation in complex disputes, addressing challenges like overdetermination, pre-emption, and omissions. Argumentation frameworks help model competing causal claims, clarify evidentiary burdens, and improve judicial reasoning. Applied to case studies in medical AI, autonomous vehicles, and employment discrimination, these approaches offer a more precise and transparent way to evaluate AI-related harm. Integrating logic and argumentation theory into legal AI systems can improve the coherence of liability rules, reduce legal uncertainty, and support courts, lawyers, and policymakers. This aligns with the panel’s broader inquiry into whether these theoretical approaches can optimise AI applications in law, ensuring more reliable and fair legal outcomes. |
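The argumentation frameworks mentioned above can be made concrete with a small sketch of abstract argumentation in the sense of Dung: arguments attack one another, and the grounded extension, the most sceptical and uniquely determined set of acceptable arguments, is the least fixed point of the characteristic function. The three example arguments modelling a causal dispute are our own invention for illustration.

```python
# Abstract argumentation sketch (after Dung): compute the grounded
# extension by iterating the characteristic function from the empty set.
def grounded_extension(arguments, attacks):
    """Iterate F(S) = {a : every attacker of a is attacked by S}."""
    attacks = set(attacks)
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    S = set()
    while True:
        F = {a for a in arguments
             if all(any((s, b) in attacks for s in S) for b in attackers[a])}
        if F == S:        # least fixed point reached
            return S
        S = F

# "The AI system caused the harm" is attacked by an alternative-cause
# argument, which is in turn undercut by evidence excluding that cause.
arguments = {"harm_caused", "alt_cause", "alt_cause_excluded"}
attacks = {("alt_cause", "harm_caused"), ("alt_cause_excluded", "alt_cause")}
accepted = grounded_extension(arguments, attacks)
```

In this toy dispute the unattacked evidence reinstates the causal claim, mirroring how such frameworks model competing causal claims and shifting evidentiary burdens.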
11:30 | Mechanising Logical Systems in Proof Assistants ABSTRACT. Modern general-purpose proof assistants like Coq, Isabelle, and Lean allow for the efficient and scalable implementation of logical systems based on formal syntax and adequate semantics, including those used in some theories of legal reasoning. In this tutorial talk, I will explain the basic concepts underlying the Coq proof assistant by showing how a simple logic can be modelled and how a corresponding executable decision procedure can be implemented. This is meant to provide a basis for discussion and for follow-up talks describing particular industrial applications. |
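The kind of development the tutorial carries out in Coq can be sketched in Python for orientation: a formula syntax as a small inductive datatype (here, nested tuples), a semantics as evaluation under a variable assignment, and an executable decision procedure for validity by truth-table enumeration. The representation and connective names are our own, not the presenter's.

```python
# Propositional logic: syntax, semantics, and a decision procedure.
from itertools import product

# Formulas: ("var", name) | ("not", f) | ("and", f, g) | ("or", f, g) | ("imp", f, g)
def eval_formula(f, env):
    """Evaluate formula f under the Boolean assignment env."""
    tag = f[0]
    if tag == "var": return env[f[1]]
    if tag == "not": return not eval_formula(f[1], env)
    if tag == "and": return eval_formula(f[1], env) and eval_formula(f[2], env)
    if tag == "or":  return eval_formula(f[1], env) or eval_formula(f[2], env)
    if tag == "imp": return (not eval_formula(f[1], env)) or eval_formula(f[2], env)
    raise ValueError(f"unknown connective {tag!r}")

def variables(f):
    """Collect the variable names occurring in f."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def valid(f):
    """Decision procedure: f is valid iff true under every assignment."""
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))
```

In a proof assistant, one would additionally prove, inside the system, that the decision procedure agrees with the semantics; the sketch above only states the two sides without that correctness proof.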
12:00 | Certification of computable laws PRESENTER: Edward Lowdell ABSTRACT. Computable laws, i.e. laws that are enforced via the automation of certain processes, are widespread, and so are the issues that arise with them. In particular, deriving software from legal texts involves a series of translations that are often neglected and end up being performed poorly, sometimes even unconsciously, by the software engineers who deliver the final software. As legal texts can be ambiguous or even inconsistent, engineers are forced to make decisions that affect the meaning and final behaviour of the software, which makes it impossible to certify or homologate these tools. We propose a method that systematises these translations by introducing intermediate steps between the legal text and the software. Translating the legal text into a formal-language specification makes it possible to: 1) prove, using formal methods, the conformity between the software and the specification; 2) explain the correspondence between the legal text and the formal specification, making all the choices that have been made publicly available and understandable. |
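The pipeline proposed above can be illustrated on an invented rule ("a benefit is granted iff income is below 2000 and residence has lasted at least 12 months"): the formal specification is a predicate kept separate from the implementation, and conformity is then checked over a finite input domain. Exhaustive checking is a lightweight stand-in here for the formal-methods proof the method envisages, and the rule, thresholds, and function names are all hypothetical.

```python
# Separating specification from implementation, then checking conformity.
def spec(income, months):
    """Formal specification derived from the (hypothetical) legal text."""
    return income < 2000 and months >= 12

def implementation(income, months):
    """The deployed software whose behaviour must match the specification."""
    if months < 12:
        return False
    return income < 2000

def conforms(domain):
    """Check that implementation and specification agree on every input."""
    return all(implementation(i, m) == spec(i, m) for i, m in domain)

cases = [(i, m) for i in range(0, 4001, 500) for m in range(0, 25, 6)]
```

Because the specification is a separate, published artefact, every interpretive choice made when formalising the legal text is visible in `spec`, which is precisely what makes the correspondence explainable and the software certifiable.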