09:15 | The Impact Of Quantum Technologies On Deterrence, Arms Control, Nonproliferation, and Verification PRESENTER: Ferenc Dalnoki-Veress ABSTRACT. Quantum information science and technologies (QIST) will have myriad impacts on deterrence, arms control, nonproliferation, and verification. Quantum computing and quantum communications will disrupt current protocols for secure data transfer and storage. Quantum simulations of chemical and biological processes are expected to create additional challenges and opportunities for WMD risk reduction. Quantum sensing and metrology are at an advanced stage of development and will disrupt current verification activities by enabling new classes of sensors. Quantum sensing could also make it easier for military forces to track nuclear-armed submarines and mobile missiles, threatening a pillar of deterrence. This paper will address the current status of these technologies, their expected development timelines, and their impact on security, with a focus on deterrence, arms control, nonproliferation, and verification. This project is funded by the United States Department of State.
09:35 | Missile Defenses for Europe: Computer Modeling and Analysis ABSTRACT. After the end of the Intermediate-Range Nuclear Forces (INF) Treaty, there is a growing threat from intermediate- and shorter-range missiles: Russia is deploying a new intermediate-range system, and the US is developing one. Even before the treaty’s demise, China had been developing and fielding conventional and nuclear intermediate-range missiles in the Western Pacific. To counter this threat, alongside political efforts, countries are considering the development and deployment of ballistic missile defenses. Detailed technical knowledge is required to support the debate on the effectiveness of such systems. In this contribution we present a new computer model for the calculation and comparison of missile defense footprints. The model builds on work by Jürgen Altmann from the 1980s, takes into account the technical characteristics of missiles and interceptors, and calculates and displays footprints for various engagement scenarios. Using this program we analyze existing missile defense systems and how their deployment parameters depend on the effectiveness required of them.
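The footprint concept behind this abstract can be illustrated with a deliberately simplified sketch (this is not the model presented in the talk). Under flat-earth, constant-speed kinematics, a candidate impact point counts as "defended" if an interceptor, launched after a reaction delay, can reach the incoming missile's flight path before impact; all function names, speeds, ranges, and delays below are illustrative assumptions.

```python
import math

def can_defend(target, site, v_missile=3.0, v_int=2.5,
               detect_range=500.0, t_react=30.0, dt=1.0):
    """Return True if an interceptor from `site` can reach the missile's
    flight path toward `target` before impact.
    2-D flat-earth kinematics; distances in km, times in s, speeds in km/s."""
    # The missile approaches the target along the +x axis,
    # detected detect_range km from its impact point.
    t_impact = detect_range / v_missile
    t = t_react
    while t < t_impact:
        # Missile position at time t (closing on the target).
        mx = target[0] - (detect_range - v_missile * t)
        my = target[1]
        # Straight-line interceptor reach after the reaction delay.
        reach = v_int * (t - t_react)
        if math.hypot(mx - site[0], my - site[1]) <= reach:
            return True
        t += dt
    return False

def footprint(site, half_width=400.0, step=25.0, **kw):
    """Grid-scan candidate impact points; return the defendable ones."""
    pts = []
    y = -half_width
    while y <= half_width:
        x = -half_width
        while x <= half_width:
            if can_defend((x, y), site, **kw):
                pts.append((x, y))
            x += step
        y += step
    return pts

defended = footprint(site=(0.0, 0.0))
print(f"{len(defended)} defendable grid points")
```

The defended set traced out by such a grid scan is the "footprint"; varying interceptor speed, reaction time, or detection range shows how sensitively the footprint depends on these deployment parameters.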
09:55 | Adjusting the Wheel: Ethical Deliberation as a Method for Dual-Use Assessment in the ICT Development Process ABSTRACT. ICT development methods have moved from the linear waterfall model towards faster iterations, which can even include ethical design approaches such as Value Sensitive Design (VSD). To ensure such standards, principles and Codes of Conduct have been formulated and operationalized; in the case of AI, sets of principles have been collected as "AI4People" (Floridi et al., 2018) or "Trustworthy AI" (EU Commission, 2019). These frameworks help to include relevant aspects and to develop in a lawful, ethical, and robust way, but they rest on a set of norms that are deliberately abstract and need to be translated for application in particular R&D projects. On the other hand, there are participatory design methods, such as VSD, which help to include the values that matter to the participants and may fit a certain context well, but do not guarantee that particular norms are met. Thus, the question remains whether dual-use risks can be fully addressed by these frameworks and methods, or whether dual-use risks occur even when ethical standards are met and all stakeholder values are included. This paper therefore summarizes the discourse on ethical ICT development frameworks and participatory design methods, mapping dual-use definitions, risk scenarios, and stakeholders (Riebe, 2023) onto them. In doing so, the paper asks whether these frameworks and participatory methods already address ICT dual-use risks (Tucker, 2012), and if so, which of them. The results help to clarify whether dual-use assessment can be carried out as a form of ethical deliberation combining norms with a participatory and deliberative process (Gogoll, 2021), or whether there is a methodological research gap. Sources:
EU Commission (2019). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Floridi, Luciano, et al. (2021). An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Ethics, Governance, and Policies in Artificial Intelligence, 19-39.
Gogoll, Jan, et al. (2021). Ethics in the software development process: From codes of conduct to ethical deliberation. Philosophy & Technology, 1-24.
Riebe, Thea (2023). Technology Assessment of Dual-Use ICTs – How to Assess Diffusion, Governance and Design. Darmstadt, Germany: Springer Vieweg.
Tucker, Jonathan B. (Ed.) (2012). Innovation, Dual Use, and Security: Managing the Risks of Emerging Biological and Chemical Technologies. MIT Press.
09:15 | On the Intersection of Computer Science with Peace and Security Research: Experiences from Interdisciplinary Teaching in Peace Informatics PRESENTER: Christian Reuter ABSTRACT. Interdisciplinary research and teaching between computer science and peace and security studies is indispensable, as conflicts in cyberspace are no longer a distant fiction but an acute possibility. Even though numerous established courses and textbooks exist in the individual disciplines, this does not apply to their intersection. This talk reflects on the introduction of the interdisciplinary course “Information Technology for Peace and Security” for students of Computer Science, IT Security and Information Systems as well as Peace and Conflict Research at the Technische Universität Darmstadt, in cooperation with the Goethe University Frankfurt. The challenges and solutions of interdisciplinary teaching are presented and the importance of such teaching is emphasised.
09:25 | The Normative Power of the Factual: How State Practice Shapes Understandings About Direct Public Political Attribution of Cyber Operations ABSTRACT. An increasing number of states use direct public political attribution to call out inappropriate behavior in cyberspace attributable to another state. Shared understandings about conducting and communicating political attribution are essential to avoid misunderstandings and to mitigate the risk of escalation between states. However, attribution has so far been only marginally addressed in the context of diplomatically negotiated cyber norms. This makes the field well suited to exploring the formation of normative ideas through state practice, as it leaves ample room for practical interpretation by states. Based on four case studies (Australia, Germany, Japan, and the United States), this paper identifies which cyber operations the selected states have publicly attributed, how the attribution was communicated and justified, to what extent other states were involved in the process, and how other states perceived the attribution. This analysis of established and emerging individual as well as collective state practice permits new insights into how states currently perceive the respective normative framework, that is, formalized cyber norms. It also supports conclusions as to what extent the observed state practice gives rise to new shared understandings about appropriate state behavior (practiced cyber norms) when it comes to direct public political attribution of cyber operations.
09:45 | The Role of Cyber Ranges within European Cybersecurity Strategy: A Primer ABSTRACT. Over the course of the last decade, the European Union has emerged as an important player in international cybersecurity. While regulatory policies are arguably the prime vehicle for implementing the EU’s cybersecurity strategy, other policy instruments such as cyber sanctions, information sharing, and infrastructure development also make important contributions. Cyber ranges (CRs) are another such tool that could facilitate greater cooperation and influence European policy debates, yet there has been little assessment of their strategic utility. The majority of CRs are run for research or commercial purposes, with a focus on meeting training and educational needs. Yet CRs could do more: they could help build a cyber-skilled workforce and resilience within business, in line with the EU’s strategy, or bolster defensive capabilities more broadly. CRs could also support cooperation with partners and the multi-stakeholder community beyond operational and technical coordination (another vital element of the EU’s strategy), as a small number of states have already begun to do. Lastly, CRs are being used to advance sovereign capabilities, which presents challenges but also opportunities. For example, CRs could be used to substantiate and advance principles of responsible state behaviour in cyberspace, in line with the proposed EU leadership on standards, norms, and frameworks in cyber matters. Our work gives an overview of how CRs are used, followed by a survey of those existing at the national level, before delving into EU efforts to enable joint uses at the regional level. We then assess potential uses of CRs to achieve four core objectives of the EU’s cybersecurity strategy: bolstering a cyber-skilled workforce, ensuring high levels of cyber resilience across the continent, encouraging responsible behaviour in cyberspace, and extending solidarity to international partners and allies.
10:45 | Narratives of "Tech Wars": Technological Competition, Power Shifts and Conflict Dynamics Between the US, China and the EU ABSTRACT. In the context of digitalization, technological change and competition are deeply entwined with questions of international security and power. In particular, leadership in digital technologies has become a key parameter of the growing geopolitical and geo-economic great power competition between the US, China, and the EU. The securitisation of such technologies can be seen in the widespread perception of an intensifying Tech War between the three actors. Against this background, the paper takes a social constructivist perspective to draw out the dominant interpretations of the competition for digital technological leadership between the US, China, and the EU. It uses a method of narrative analysis to explore the different meanings that are intersubjectively attributed to the technological competition and its implications for the power relationship between the three actors. The paper examines Artificial Intelligence (AI) as a paradigmatic case in which narratives of an “AI arms race” have proliferated in recent years. The empirical focus is on official strategy documents from the US, China, and the EU, which are supplemented with expert interviews to reconstruct the narrative dynamics and shifts in this field. Ultimately, this serves to identify the scope for cooperation between the three actors and to minimise risks to international security.
10:55 | The Promise of Track-Two Diplomacy Amidst US-China Tech War ABSTRACT. This paper considers the possible role of ‘Track-Two’ diplomacy amidst the US-China tech war. At a time when official diplomatic engagement between the two countries proves challenging, unofficial Track-Two interactions offer an alternative and promising venue for exploring options for coordination and cooperation. We take stock of Chinese Track-Two actors’ efforts in resolving growing confrontations with the US in cyber security and AI weaponisation, two fields unique in their greater focus on technical dimensions and the diversity of expertise. Building on insights from practice theory, communities of practice, and boundary work, we understand Track-Two diplomacy as a site of boundary work and the actors involved as ‘boundary workers’. An extensive analysis of documentary evidence and interviews with Chinese participants demonstrates that Track-Two actors engage in a complex process of inclusive (e.g., exploring common ground, transmitting insights, and boundary-spanning) and divisive (e.g., establishing differences, drawing boundaries, and strengthening prior beliefs) practices when interacting with their counterparts on the other side. These practices, while both bridging and establishing differences between the two sides, are conducive to fostering “Chinese” approaches to securing cyberspace and military AI applications. These approaches are essentially rooted in practical imperatives whose meanings are context-dependent, varying with actors’ social experience at the boundaries between the US and China. This paper contributes a new conceptual model to Track-Two scholarship and illuminates the potential of Track-Two initiatives to contribute to US-China Track-One diplomatic efforts and policymaking.
11:15 | Trust in AI: Producing Ontological Security through Governmental Visions PRESENTER: Stefka Schmid ABSTRACT. With recent developments in artificial intelligence (AI) widely framed as a potential security threat both in the military and increasingly in the civilian realm, governments have turned their attention to devising regulation to govern AI, its development, and associated harms. In our comparative study of US, Chinese, and EU AI policies, we seek to go beyond purely instrumental understandings of AI as a technological capability that serves nation states’ self-interests and the maintenance of their (supra)national security. In particular, we are interested in the mobilisations and enactments of ‘trust’. Our specific interest therefore lies in the affective and emotional register that these policies tap into and elicit. Our analysis shows that across governmental documents, AI is perceived as a capability that enhances societal and geopolitical interests, while its risks are framed as manageable. This echoes strands within the field of Human-Computer Interaction that draw on human-centered perceptions of technology and assumptions about human-AI relationships of trust, implying notions of interpretability and human control. Despite different innovation cultures and institutional settings, visions of future AI development in all three governmental visions are shaped by this shared understanding of human-AI interaction. Nonetheless, the policies differ and are reflective of each government’s interest in guaranteeing physical as well as ontological security. We therefore draw on Critical Security Studies and Science and Technology Studies to ask how different identities play into the production of governmental AI visions and how these visions in turn (co-)produce identities and innovation policies.
11:35 | Maritime Critical Infrastructures Protection: Technical and Political Approaches Beyond the Military ABSTRACT. “Lifelines”, “Arteries”, “Super-Highways”, or “Backbone”: Anyone researching maritime infrastructures is likely to be struck by the abundance of metaphors used to describe them. On the one hand, these metaphors suggest a lack of familiarity with the subject, although the maritime space currently serves as the essential transit sphere for physical goods and non-material data. On the other hand, they highlight the enormous societal dependency that has intensified over a long time in a globalized, interdependent world. Though interdependence brought with it the promise of a more peaceful world, there are signs that it is increasingly becoming a security policy lever in the context of geopolitical tensions. Critical infrastructures (CI), while serving the basic needs of societies, have not been immune to this development and have increasingly been used as a platform for geopolitical interaction. Maritime CI in the energy (wind farms, pipelines, oil rigs), ICT (data cables), and transport (cargo shipping) sectors have all suffered sabotage or failures recently, reinforcing the need for better protection of offshore and subsea infrastructures. Although most reactions to the latest events primarily involve national navies, the poster will present and discuss various technical and political approaches to make maritime CI more resilient beyond simple military surveillance.
13:15 | New military technologies – fundamental challenges to the international system? PRESENTER: Jürgen Altmann ABSTRACT. In military research and development, important states are pursuing paths that will likely lead to arms races and new levels of destabilisation. Autonomous weapon systems and wider military uses of artificial intelligence, as well as cyber-war preparations, are seen as central means for maintaining or achieving military superiority, in particular through faster action and reaction. Such "fighting at machine speed" puts into question the capability of human control to prevent escalation. Synthetic biology and human enhancement pose other fundamental problems. Many generic dual-use technologies are becoming more widely accessible; weapons could be very small and be produced in small facilities. In a different political climate, many dangerous developments could be contained for the medium term by (preventive) arms control with adequate verification. But given the overall geopolitical landscape, military motives for increased combat strength from new technologies seem to trump arms control efforts. In addition, the new technologies themselves make verification more difficult than ever, as a degree of intrusiveness would be needed that would be difficult to accept for armed forces as well as civil society. Both factors may render verified arms control impossible in the long term. So is the old dictum that arms control is impossible when needed true after all? At the end of the conference, the panel is to look back and discuss several fundamental problems, with a view toward tasks for natural-science as well as political-science peace research.