Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical, and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation, and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps, and tools. European rules, copyright, chip rights, database rights, and legal services in AI law.

Posts in TTLF
Quantum Technology Impact Assessment (EU AI Alliance, European Commission)

Brussels, 20 April 2023—The emergence of powerful new capabilities in large AI models, such as Generative Adversarial Networks (GANs), underscores the critical need to continuously improve and update technology impact assessment tools, ensuring they keep pace with rapid technological development. As defined in recent scholarship, technology impact assessment is the systematic process of monitoring and determining the unintended, indirect, or delayed societal impacts of a future technological innovation. Crucially, it is also about capitalizing on opportunities and enabling responsible innovation from the outset.

An article by Stanford Law’s Mauritz Kop on this topic is also featured on the European Commission's Futurium website.

Shaping the Quantum Innovation Process

Quantum Impact Assessments (QIAs) are emerging as vital practical tools to facilitate the responsible adoption of quantum technologies. There are several related approaches to this assessment: (1) interactive QIA, which seeks to influence and shape the innovation process; (2) constructive QIA, where social issues guide the design of the technology from its earliest stages; and (3) real-time QIA, which connects scientific R&D with social sciences and policy from the start, before a technology becomes locked-in.

Often taking the form of codes of conduct, best practices, roadmaps, and physics de-risking tools, QIA instruments can be used by governments, industry, and academia. These soft law tools allow stakeholders to explore how current technological developments affect the world we live in and to proactively shape the innovation process toward beneficial, societally robust outcomes.

Exploratory Quantum Technology Assessment

Implementing interdisciplinary, expert-based QIAs can help raise awareness about the ethical, legal, socio-economic, and policy (ELSPI) dimensions of quantum technology, including quantum-classical hybrid systems. For instance, QIAs cultivate a deeper understanding of the potential dual-use character of quantum technology, where beneficial applications (such as quantum sensing for medical diagnostics) can exist alongside potentially harmful ones (such as the same sensors being used for autocratic surveillance).

Building on the foundational work of the 2018 AI Impact Assessment developed by ECP | Platform voor de InformatieSamenleving chaired by Prof. Kees Stuurman, this work presents a prototype of a QIA instrument: the Exploratory Quantum Technology Assessment (EQTA). This pioneering initiative was made possible through a collaboration between the Dutch Ministry of Economic Affairs & Climate Policy, Quantum Delta NL (QDNL), and ECP. The EQTA will be presented by Eline de Jong and Mauritz Kop at the inaugural Stanford Responsible Quantum Technology Conference in May 2023.

Guidance for Responsible Quantum Technology Implementation

The EQTA provides a comprehensive, practical step-by-step plan that encourages stakeholders to initiate a dialogue to clarify which ethical, legal, and social aspects are important in the creation and application of quantum systems and their interaction with classical technologies. This structured approach helps make the use of quantum technology—as well as the data and algorithms that power it—more transparent and accountable from an early stage.

Looking forward, establishing a risk-based legal-ethical framework in combination with standardization, certification, technology impact assessment, and life-cycle auditing of quantum-driven systems is crucial to stewarding society towards responsible quantum innovation. Mauritz Kop’s research group has written more on this framework in their seminal article Towards Responsible Quantum Technology (Harvard).

Read more
Mauritz Kop Consults Senator Mark Warner on AI & Quantum Technology Policy

Washington D.C., January 4, 2022—As the United States Congress grapples with the complex challenges of regulating artificial intelligence and quantum technology, leading policymakers are seeking expert guidance to inform a robust and forward-thinking national strategy. On January 4, 2022, Mauritz Kop, a distinguished scholar in the field of technology law and governance, was consulted by the legal team of U.S. Senator Mark Warner (D-VA) to provide strategic insights on both AI and quantum technology policy.

This consultation highlights the growing recognition in Washington of the need for deep, interdisciplinary expertise to navigate the geopolitical, economic, and security dimensions of these transformative technologies. Senator Warner's team reached out to Kop based on his influential scholarship, including his extensive work at Stanford on the EU AI Act and the need for a strategic democratic tech alliance, his advisory role for the European Commission led by Ursula von der Leyen on the AI Act and Data Act, and his foundational article in the Yale Journal of Law & Technology proposing a comprehensive legal-ethical framework for quantum technology.

Senator Mark Warner: A Leader on Technology and National Security

Senator Mark Warner's engagement on these issues is both significant and timely. As the Chairman of the Senate Select Committee on Intelligence, he is at the forefront of addressing the national security implications of emerging technologies. His work involves overseeing the U.S. Intelligence Community and ensuring it is equipped to handle the threats and opportunities of the 21st century, where technological competition with nations like China is a central concern.

The Senate Select Committee on Intelligence has a broad mandate that includes analyzing intelligence on the technological capabilities of foreign powers and assessing the vulnerabilities of U.S. critical infrastructure. Senator Warner has been a vocal proponent of developing a national strategy for AI and quantum to maintain the United States' competitive edge and to ensure that these technologies are developed and deployed in a manner consistent with democratic values. This consultation with Mauritz Kop reflects the Senator's commitment to drawing on leading academic research to shape sound, bipartisan policy.

AI Policy: A Transatlantic, Risk-Based Approach that Lets Innovation Breathe

A key focus of the consultation was Kop's analysis of the European Union's AI Act. His Stanford publications argue for a balanced, pro-innovation regulatory model that can serve as a blueprint for international cooperation. Good governance and sensible legislation should incentivize desired behavior and simultaneously create breathing room for sustainable, beneficial innovation to flourish.

Quantum Governance: Establishing a Legal-Ethical Framework

The discussion also delved into the governance of quantum technology, drawing on Kop's seminal work in the Yale Journal of Law & Technology. Recognizing that quantum is rapidly moving from the theoretical to the practical, he stressed the urgency of establishing a legal-ethical framework before the technology is widely deployed and locked-in.

The consultation with Senator Warner's office represents a critical intersection of academic scholarship and high-level policymaking. As the United States charts its course in the era of AI and quantum, the insights provided by experts like Mauritz Kop are invaluable in ensuring that the nation's strategy is not only competitive but also responsible, ethical, and firmly rooted in democratic principles.

Read more
The Artificial Intelligence Act

An edited version of this contribution was published on the VerderDenken.nl platform of the Centrum voor Postacademisch Juridisch Onderwijs (CPO) of Radboud University Nijmegen. https://www.ru.nl/cpo/verderdenken/columns/wet-artificiele-intelligentie-belangrijkste-punten/

New rules for AI-driven products, services, and systems

On 21 April 2021, the European Commission presented its long-awaited Artificial Intelligence (AI) Act. This draft Regulation sets out rules for the development, commodification, and use of AI-driven products, services, and systems within the territory of the European Union. It was encouraging to see that President Ursula von der Leyen's team adopted a significant number of our strategic recommendations on AI regulation, or independently reached the same conclusions.

Objectives of the legal framework for AI

The draft Regulation provides horizontal, overarching core rules for artificial intelligence that apply to all industries (verticals). The Act seeks to codify the high standards of the EU Trustworthy AI paradigm, which requires that AI be lawful, ethical, and technically robust, and sets out seven requirements to that end.

The Artificial Intelligence Act has the following four objectives:

1. ensure that AI systems placed on the market and used in the Union are safe and respect existing law on fundamental rights and Union values;

2. guarantee legal certainty to facilitate investment and innovation in AI;

3. improve governance and effective enforcement of existing law on fundamental rights and of safety requirements applicable to AI systems;

4. facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

Risk-based approach to artificially intelligent applications

To achieve these objectives, the draft Artificial Intelligence Act combines a risk-based approach built on the pyramid of criticality with a modern, layered enforcement mechanism. Among other things, this means that a light legal regime applies to AI applications posing negligible risk, while applications posing unacceptable risk are banned. Between these two extremes, requirements become stricter as the risk increases. They range from non-binding, self-regulatory soft law impact assessments with codes of conduct, to heavy, externally audited multidisciplinary compliance requirements on quality, safety, and transparency, including risk management, monitoring, certification, benchmarking, validation, documentation obligations, and market surveillance throughout the application's life cycle.
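The tiered logic described above can be sketched as a simple data structure. This is purely illustrative: the tier names and the obligations listed below are paraphrases chosen for this sketch, not the Regulation's literal legal categories or an exhaustive statement of duties.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Paraphrased tiers of the pyramid of criticality (illustrative only)."""
    MINIMAL = 0        # light regime, voluntary codes of conduct
    LIMITED = 1        # transparency duties (e.g., disclosing a chatbot)
    HIGH = 2           # externally audited compliance requirements
    UNACCEPTABLE = 3   # prohibited practices

# Hypothetical mapping from tier to paraphrased obligations.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["voluntary code of conduct"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.HIGH: [
        "risk management system",
        "conformity assessment and certification",
        "documentation and logging",
        "post-market monitoring",
    ],
    RiskTier.UNACCEPTABLE: ["placing on the market is banned"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the paraphrased obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

Because the tiers are ordered, the sketch also captures the Act's core intuition that duties accumulate as criticality rises: `RiskTier.HIGH > RiskTier.LIMITED` holds by construction.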

Enforcement and governance

The definition of high-risk AI applications across the various industrial sectors is not yet set in stone. An unambiguous risk taxonomy will contribute to legal certainty and give stakeholders an adequate answer to questions about liability and insurance. To safeguard room for innovation by SMEs, including tech start-ups, flexible AI regulatory sandboxes are being introduced, and an IP Action Plan for intellectual property has been drawn up. Finally, the draft Regulation provides for the installation of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). The EAIB will be flanked by national supervisory authorities at Member State level.

Read more
The Right to Process Data for Machine Learning Purposes in the EU

Harvard Law School, Harvard Journal of Law & Technology (JOLT) Volume 34, Digest Spring 2021

New interdisciplinary Stanford University AI & Law research article: “The Right to Process Data for Machine Learning Purposes in the EU”.

Download the article here: Kop_The Right to Process Data-Harvard

Data Act & European data-driven economy

Europe is now at a crucial juncture in deciding how to deploy data-driven technologies in ways that encourage democracy, prosperity, and the well-being of European citizens. The upcoming European Data Act provides a major window of opportunity to change the story. In this respect, it is key that the European Commission takes firm action, removes overbearing policy and regulatory obstacles, rigorously harmonizes relevant legislation, and provides concrete incentives and mechanisms for access, sharing, and re-use of data. The article argues that to ensure an efficiently functioning European data-driven economy, a new and as yet unused term must be introduced to the field of AI & law: the right to process data for machine learning purposes.

The state can implement new modalities of property

Data has become a primary resource that should not be enclosed or commodified per se, but used for the common good. Commons-based production and data-for-social-good initiatives should be stimulated by the state. We should think not in terms of exclusive, private property in data, but in terms of rights and freedoms to use, access (in various modalities), process, and share data. If necessary and desirable for the progress of society, the state can implement new forms of property. Against this background, the article explores normative justifications for open innovation and shifts in the (intellectual) property paradigm, drawing inspiration from the works of canonical thinkers such as Locke, Marx, Kant, and Hegel.

Ius utendi et fruendi for primary resource data

The article maintains that there should be exceptions to (de facto, economic, or legal) ownership claims on data that provide user rights and freedom to operate in the setting of AI model training. It concludes that this exception is conceivable as a legal concept analogous to a quasi (imperfect) usufruct, in the form of a right to process data for machine learning purposes: a combination of usus and fructus (ius utendi et fruendi), not for land but for primary resource data. It is a right to process data that works within the context of AI and the Internet of Things (IoT), and that fits in the EU acquis communautaire. Such a right makes access, sharing, and re-use of data possible, and helps to fulfil the European Strategy for Data's desiderata.

Read more