Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

Posts in Artificial Intelligence
Quantum Event Horizon: Addressing the Quantum-AI Control Problem through Quantum-Resistant Constitutional AI

What happens when AI becomes not just superintelligent, but quantum-superintelligent? QAI agents with both classical and quantum capabilities? How do we ensure we remain in control?

This is the central question of my new article, where I introduce the concept of the Quantum Event Horizon to frame the urgency of the QAI control problem. As we near this point of no return, the risk of losing control to misaligned systems—machines taking over, or being weaponized—becomes acute.

A metaphorical Quantum Event Horizon can be thought of as an inflection point, or quantum governance 'tipping point', beyond which our ability to steward advanced quantum technology and AI towards beneficial outcomes for all of humanity may vanish.

Simple guardrails are not enough. The solution must be architectural. I propose a new paradigm: Quantum-Resistant Constitutional AI, a method for engineering our core values into the foundation of QAI itself. This is a crucial discussion for policymakers, researchers, builders, and industry leaders.

Navigating the Quantum Event Horizon

This paper addresses the impending control problem posed by the synthesis of quantum computing and artificial intelligence (QAI). It posits that the emergence of potentially superintelligent QAI agents creates a governance challenge that is fundamentally different from and more acute than those posed by classical AI. Traditional solutions focused on technical alignment are necessary but insufficient for the novel risks and capabilities of QAI. The central thesis is that navigating this challenge requires a paradigm shift from reactive oversight to proactive, upfront constitutional design.

The core of the argument is framed by the concept of the ‘Quantum Event Horizon’—a metaphorical boundary beyond which the behavior, development, and societal impact of QAI become computationally opaque and practically impossible to predict or control using conventional methods. Drawing on the Collingridge dilemma and the Copenhagen interpretation, this concept highlights the risk of a "point of no return," where technological lock-in, spurred by a "ChatGPT moment" for quantum, could cement irreversible geopolitical realities, empower techno-authoritarianism, and present an unmanageable control problem (the risk of machines taking over). Confronting this requires a new philosophy for governing non-human intelligence.

Machines Taking Over

The urgency is magnified by a stark geopolitical context, defined by a Tripartite Dilemma between the existential safety concerns articulated by figures like Geoffrey Hinton, the geopolitical security imperative for rapid innovation voiced by Eric Schmidt, and the builder’s need to balance progress with safety, as expressed by Demis Hassabis. This dilemma is enacted through competing global innovation models: the permissionless, market-driven US system; the state-led, top-down Chinese system; and the values-first, deliberative EU model. In this winner-takes-all race, the first actor to achieve a decisive QAI breakthrough could permanently shape global norms and our way of life.

An Atomic Agency for Quantum-AI

Given these stakes, current control paradigms like human-in-the-loop oversight are inadequate. The speed and complexity of QAI render direct human control impossible, a practical manifestation of crossing the Quantum Event Horizon. Therefore, governance must be multi-layered, integrating societal and institutional frameworks. This includes establishing an "Atomic Agency for Quantum-AI" for international oversight and promoting Responsible Quantum Technology (RQT) by Design, guided by principles such as those outlined in our '10 Principles for Responsible Quantum Innovation' article. These frameworks must be led by robust public governance—as corporate self-regulation is insufficient due to misaligned incentives—and must address the distributive justice imperative to prevent a "Quantum Divide."

Towards Quantum-Resistant Constitutional AI

The cornerstone of our proposed solution is Quantum-Resistant Constitutional AI. This approach argues that if we cannot control a QAI agent tactically, we must constrain it architecturally. It builds upon the concept of Constitutional AI by designing a core set of ethical and safety principles (a 'constitution') that are not merely trained into the model but are formally verified and made robust against both classical and quantum-algorithmic exploitation. By hardwiring this quantum-secure constitution into the agent's core, we can create a form of verifiable, built-in control that is more likely to endure as the agent's intelligence scales.
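To make the architectural idea concrete, here is a toy sketch only: Quantum-Resistant Constitutional AI is a research proposal, not an existing API, and every name below is hypothetical. The sketch merely illustrates the structural point of the paragraph above, that an immutable rule layer vets every proposed action before execution, instead of relying on post-hoc oversight.

```python
# Toy illustration of an architecturally enforced 'constitution'.
# All names (Constitution, execute, the example rules) are hypothetical;
# real constitutional principles would be formally verified, not lambdas.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)  # frozen: the constitution cannot be mutated at runtime
class Constitution:
    rules: tuple[Callable[[str], bool], ...]

    def permits(self, action: str) -> bool:
        # Every rule must approve the action; one veto blocks it.
        return all(rule(action) for rule in self.rules)

# Stand-in rules for what the paper envisions as verified principles.
no_self_modification = lambda a: "modify_own_constitution" not in a
no_weaponization = lambda a: "weaponize" not in a

constitution = Constitution(rules=(no_self_modification, no_weaponization))

def execute(action: str) -> str:
    """Gate every proposed action through the constitutional layer."""
    if not constitution.permits(action):
        return f"blocked: {action}"
    return f"executed: {action}"

print(execute("optimize_supply_chain"))  # permitted action
print(execute("weaponize_lab_robot"))    # vetoed by the constitution
```

The design choice the sketch captures is that the rule set is frozen and sits in front of the execution path, so the constraint scales with the agent rather than depending on a human reviewer in the loop.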

Self-Aware Quantum-AI Agents

Looking toward more speculative futures, the potential for a Human-AI Merger or the emergence of a QAI Hive Mind—a networked, non-human consciousness enabled by quantum entanglement—represents the ultimate challenge and the final crossing of the Quantum Event Horizon. The foundational governance work we do today, including projects like Quantum-ELSPI, is the essential precursor to navigating these profound transformations.

In conclusion, this paper argues that for the European Union, proactively developing and implementing a framework centered on Quantum-Resistant Constitutional AI is not just a defensive measure against existential risk. It is a strategic necessity to ensure that the most powerful technology in human history develops in alignment with democratic principles, securing the EU’s role as a global regulatory leader in the 21st century.

Read more
Music Law and Artificial Intelligence: From Cloned Artists to AI-Generated Works

The rise of artificial intelligence (AI) in the music industry is sparking a revolution, profoundly changing how music is created. This development raises complex legal questions concerning AI and copyright, including related rights. How can we protect the creative rights of artists and composers while simultaneously allowing room for technological innovation? In this comprehensive yet accessible legal overview, we explore key issues regarding AI and music. These include whether AI can legally train on copyrighted materials without consent, TDM exceptions, how various rights organizations (such as Buma/Stemra and Sena) approach AI, the status of AI-generated musical works, the threshold of human creativity required, protection against AI voice cloning via privacy laws and moral rights, contractual implications, new obligations under the EU AI Act, differences between European and American law, and ongoing lawsuits. This article is tailored for artists, composers, music publishers, labels, voice actors, producers, and AI companies seeking clarity on their legal standing.

AI Training on Protected Music and Video Materials: Legal Framework and Debate

Can an AI model in the Netherlands and the EU train on copyrighted material (such as music or video) without permission from the rights holders? Generally, using protected material beyond private use or citation requires permission. Scraping or using data for AI training without permission is typically considered infringement unless a specific legal exception applies.

Buma/Stemra’s Opt-Out Policy

In the Netherlands, Buma/Stemra explicitly uses its opt-out rights, requiring prior consent for TDM on its repertoire, thus ensuring fair compensation for composers and lyricists.

EU AI Act: Transparency Obligations and System Monitoring

The EU AI Act, whose obligations for general-purpose AI models apply from August 2025, introduces important transparency requirements, obliging generative AI model developers to:

• Publish a sufficiently detailed summary of the training data used, including copyrighted music or texts.

• Maintain policies ensuring compliance with EU copyright law.

• Respect explicit opt-out signals from rights holders during training.

The Act doesn't prohibit using protected material for training outright but enforces transparency and compliance through oversight and penalties.
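One of the obligations above, respecting machine-readable opt-out signals, can be sketched in code. This is an illustrative sketch, not legal advice: the `tdm-reservation` meta tag follows the draft W3C TDM Reservation Protocol, the `noai` robots directive is a publisher convention, and a compliant pipeline would need to check more signal locations (HTTP headers, `/.well-known/tdmrep.json`) than this minimal example does.

```python
# Minimal sketch: detect an explicit TDM opt-out in a page's HTML
# before using it for AI training. Signal names are assumptions drawn
# from the draft TDMRep protocol and common publisher practice.
from html.parser import HTMLParser

class TDMOptOutParser(HTMLParser):
    """Collects <meta> tags signalling a text-and-data-mining reservation."""
    def __init__(self):
        super().__init__()
        self.opted_out = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        name = (attr_map.get("name") or "").lower()
        content = (attr_map.get("content") or "").strip().lower()
        # TDMRep-style reservation: content "1" means rights are reserved.
        if name == "tdm-reservation" and content == "1":
            self.opted_out = True
        # Some publishers use a robots-style "noai" directive instead.
        if name == "robots" and "noai" in content:
            self.opted_out = True

def may_train_on(html: str) -> bool:
    """Return False when the page carries an explicit TDM opt-out signal."""
    parser = TDMOptOutParser()
    parser.feed(html)
    return not parser.opted_out

page = '<html><head><meta name="tdm-reservation" content="1"></head></html>'
print(may_train_on(page))  # opted-out page: prints False
```

In practice the legal question (whether the TDM exception applies at all, and whether the reservation is valid) remains separate from the technical one; the sketch only covers the detection step.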

Composition, Lyrics, and Master Recordings: Different Rights Regimes

Music rights in the Netherlands broadly split into:

Copyright: Protects compositions and lyrics, managed by organizations like Buma/Stemra.

Neighboring Rights: Protect recordings and performances, managed by Sena.

AI-Generated Compositions and Lyrics

Completely AI-generated works often fail to meet traditional copyright criteria, as human creativity is essential.

Neighboring Rights

It remains uncertain whether AI-generated performances and recordings attract neighboring rights, as these typically rely on human involvement.

Copyright Status of AI-Generated Music

In the U.S., fully AI-generated works explicitly do not receive copyright protection. While Europe hasn't clarified explicitly, the prevailing legal view aligns with this stance—AI-generated works likely fall into the public domain unless there's significant human creativity involved.

Hybrid Creations

Music combining human and AI input may qualify for copyright protection depending on the human creative contribution's significance.

AI Voice Cloning: Personality Rights and Privacy

AI voice cloning technology poses challenges regarding personal rights and privacy. Artists may invoke:

• Privacy rights under EU law (Article 8 ECHR).

• Personality rights.

• Potential trademark and image rights analogously.

The EU AI Act mandates transparency in AI-generated content, aiming to mitigate unauthorized use and deepfake concerns.

Music Contracts in the AI Era

Existing music contracts require updates addressing AI-specific matters:

• Explicit licensing terms for AI training.

• Ownership clarity of AI-generated content.

• Liability assignment for copyright infringements involving AI.

Conclusion: Balancing Innovation and Rights—Be Prepared

The intersection of AI and music law presents both opportunities and challenges. Stakeholders should proactively:

• Clearly define rights in AI-generated music contractually and update existing music contracts.

• Specify permissions (licenses) and restrictions (opt-out) regarding AI training explicitly.

• Seek specialized music & AI legal advice to navigate evolving regulations.

By strategically addressing these issues, artists, companies, and AI developers can legally and effectively harness AI innovations, maintaining both creative and commercial control.

Read more
Towards an Atomic Agency for Quantum-AI

Stanford, CA, May 5, 2025 — Today, Mauritz Kop published interdisciplinary research on the website of the European Commission proposing “A Principled Approach to Quantum Technologies” and the establishment of an “Atomic Agency for Quantum-AI”. The Atomic Agency essay analyzes emerging regulation, export controls, and technical standards for AI and quantum technology (including their increasing complementarity and interdependence, embodied in quantum-AI hybrids) in the U.S., EU, and China, comparing legislative efforts as of 2025 to strategically balance the benefits and risks of these transformative technologies through the lens of their distinct innovation systems. The Principled Approach paper posits that quantum technology's dual-use character brings with it the need to balance maximizing benefits with mitigating risks. In this spirit, the paper argues that quantum technology development is best guided by a framework for Responsible Quantum Technology, operationalized by a set of tailored principles to optimize risk-benefit curves.

Towards an Atomic Agency for Quantum-AI

The article analyzes emerging regulation, export controls, and technical standards for both quantum and AI (including their increasing complementarity and interdependence embodied in quantum-AI hybrids) in the U.S., EU, and China and offers novel conceptual frameworks to steward these technologies towards shared global benefit.

Key Takeaways:

1. Converging Need for Responsible Governance: Despite distinct innovation philosophies (market-driven in the U.S., values-based in the EU, and state-driven in China), there's a growing international consensus on the necessity for principled and responsible technology governance for both AI and quantum technologies.

2. The 'Washington Effect' vs. 'Beijing Effect': The U.S.'s security-centric policies are creating a de facto "Washington effect," potentially setting global rules for quantum law but risking premature regulation. Conversely, China's push for state-aligned standardization (e.g., via the Digital Silk Road) signals a "Beijing effect," which could export autocratic norms and fragment global interoperability, a trend exacerbated by unilateral export controls.

3. Potential U.S., EU and China Visions on a Quantum Governance Act: Given the distinct innovation ecosystems and strategic priorities of the United States, the European Union, and China, it is instructive to envision how each might hypothetically structure a dedicated legislative framework for quantum technologies. The research outlines hypothetical "Quantum Governance Acts" for each, reflecting their respective governance philosophies and innovation models, while also considering pathways towards greater international alignment based on shared values:

a. United States: Removing Barriers for U.S. Quantum Technology Act (deregulation, industrial standards-centric approach, Safeguarding through Advancing quantum technology, prioritizing market dynamism, national & economic security, and defense).

b. European Union: EU Quantum Act (harmonized regulation rooted in fundamental rights and societal benefit based on New Legislative Framework while incorporating elements from European Chips Act, renewed focus on defense via “European DARPA”).

c. China: Comprehensive Quantum Law (Safeguarding state control while Advancing state goals, blending elements of authoritarian governance with surveillance capitalism, integration of civilian and military sectors, self-reliance, exporting state norms & values through technical standards).

4. Global Challenges & Opportunities for Alignment: Faced with planetary challenges like disease, inequality and climate change, aligning on Responsible Quantum Technology (RQT) norms and standards is a critical global opportunity. The article cautions against a simplistic zero-sum game or Cold-War redux narrative for quantum competition, arguing it hinders vital international cooperation.

5. Quantum-Relativistic Innovation Theory of Everything: Philosophical thought experiment to understand innovation dynamics by drawing analogies from quantum mechanics (uncertainty, superposition at micro-level) and general relativity (context, structure at macro-level), theories about the fundamental nature of reality.

6. Smart Regulation and RQT by Design: Effective governance must move beyond mere restrictions to actively incentivize responsible behaviors, promoting "Responsible Quantum Technology (RQT) by design" through flexible instruments like Quantum Impact Assessments (QIA), RQT by design metrics, adaptive, modular legislation, & regulatory sandboxes.

7. Harmonized "Quantum Acquis Planétaire": The article advocates for a global body of Quantum Law ("Quantum Acquis Planétaire"), complemented by sector-specific practices. Such a quantum acquis would be anchored in universal ethical values and translated into foundational standards and agile legal guardrails. This requires inter-continental policymaking and strategic "recoupling" between major players like the U.S. and China, based on incentives and shared values (“what connects us” – e.g. human dignity, security, well-being).

8. An "Atomic Agency for Quantum-AI": A central proposal is the establishment of an international agency modeled after the International Atomic Energy Agency (IAEA). This body would aim to enforce a global acquis, deter a quantum arms race, ensure non-proliferation of dual-use quantum-AI technologies via safeguards implementation (inspired by nuclear governance), and potentially oversee a global UN Quantum Treaty.

9. Need for International Collaboration & Research Platforms: Realizing ambitious goals like fault-tolerant quantum-centric supercomputing, and scalable topological qudits unlocking higher-dimensional quantum systems leveraging multi-level logic, requires collective global expertise and collaborative research platforms akin to CERN or ITER, challenging protectionist measures that stifle necessary cooperation. Immediate global actions should focus on leveraging quantum for the UN Sustainable Development Goals (SDGs), mitigating a 'Quantum Divide,' promoting quantum literacy, and building a skilled quantum workforce.

The research underscores the urgent need for robust global quantum-AI governance structures and calls for a shift from purely competitive dynamics towards pragmatic cooperation and the codification of a harmonized global framework.

Read more
Law, Ethics and Policy of Quantum & AI in Healthcare and Life Sciences published at Harvard, Stanford and European Commission

A collaborative research initiative by scholars from Stanford, Harvard, and MIT, published by the Petrie-Flom Center at Harvard Law School, the Stanford Center for Responsible Quantum Technology, and the European Commission, delves into the complex regulatory and ethical landscape of integrating quantum technologies and artificial intelligence (AI) into the healthcare and life sciences sectors. This series of policy guides and analyses, authored by an interdisciplinary team including Mauritz Kop, Suzan Slijpen, Katie Liu, Jin-Hee Lee, Constanze Albrecht, and I. Glenn Cohen, offers a comprehensive examination of the transformative potential and inherent challenges of this technological convergence.

Regulating Quantum & AI in Healthcare and Medicine: A Brief Policy Guide

This body of research, examining the entangled legal, ethical, and policy dimensions of integrating quantum technologies and AI into healthcare, is articulated across a series of publications in leading academic and policy forums. These works collaboratively build a comprehensive framework for understanding and navigating the future of medicine. A related policy guide was also published on the European Commission's Futurium platform, further disseminating these findings to a key international policymaking audience. The specific publications include:

1. A Brief Quantum Medicine Policy Guide: https://blog.petrieflom.law.harvard.edu/2024/12/06/a-brief-quantum-medicine-policy-guide/

2. How Quantum Technologies May Be Integrated Into Healthcare, What Regulators Should Consider: https://law.stanford.edu/publications/how-quantum-technologies-may-be-integrated-into-healthcare-what-regulators-should-consider/

3. EU and US Regulatory Challenges Facing AI Health Care Innovator Firms: https://blog.petrieflom.law.harvard.edu/2024/04/04/eu-and-us-regulatory-challenges-facing-ai-health-care-innovator-firms/

4. Regulating Quantum & AI in Healthcare: A Brief Policy Guide: https://futurium.ec.europa.eu/en/european-ai-alliance/document/regulating-quantum-ai-healthcare-brief-policy-guide

by Mauritz Kop, Suzan Slijpen, Katie Liu, Jin-Hee Lee, Constanze Albrecht & I. Glenn Cohen

Forging the Future of Medicine: A Scholarly Perspective on the Law, Ethics, and Policy of Quantum and AI in Healthcare

The research posits that the fusion of AI with second-generation quantum technologies (2G QT)—which harness quantum-mechanical phenomena like superposition and entanglement—is poised to revolutionize precision medicine. This synergy of quantum computing, sensing and simulation with artificial intelligence promises hyper-personalized healthcare solutions, capable of tackling intricate medical problems that lie beyond the grasp of classical computing. The potential applications are vast, spanning from accelerated drug discovery and development workflows and enhanced diagnostic imaging to rapid genome sequencing and real-time health monitoring. For instance, quantum simulations could model molecular interactions to create more effective pharmaceuticals, while quantum dots may offer novel platforms for targeted cancer therapies and treatments for neurodegenerative conditions by overcoming the blood-brain barrier.

However, the authors caution that these groundbreaking advancements are accompanied by significant ethical, legal, socio-economic, and policy (ELSPI) implications. The emergence of Quantum Artificial Intelligence (QAI), Quantum Machine Learning (QML), and Quantum Large Language Models (QLLM) is expected to amplify these ELSPI concerns. The dual-use nature of these technologies, such as their potential application in gain-of-function research, necessitates a principled and human-centric governance approach.

Read more
Hoover Institution Invites Mauritz Kop to Speak on Quantum, Democracy and Authoritarianism

Professor Mauritz Kop Addresses Quantum Technology's Role in the Era of Digital Repression at Hoover Institution Workshop

Palo Alto, CA – April 22, 2024 – Professor Mauritz Kop, Founding Director of the Stanford Center for Responsible Quantum Technology (RQT), delivered opening remarks at a breakout session on Quantum Technology as part of the two-day closed-door workshop, "Getting Ahead of Digital Repression: Authoritarian Innovation and Democratic Response." The workshop, held on April 22-23, 2024, at the Hoover Institution, Stanford University, was a collaborative effort by the National Endowment for Democracy’s International Forum for Democratic Studies, Stanford University’s Global Digital Policy Incubator, and the Hoover Institution’s China’s Global Sharp Power Project.

The event convened leading researchers and advocates to map how digital authoritarians are innovating globally and to identify new strategies for ongoing knowledge-sharing and cooperation to confront this deepening challenge. The agenda focused on understanding how autocrats leverage emerging technologies—from AI and digital currencies to quantum technology—for social control, censorship, and to export their governance models.

Guardrails Against Digital Authoritarianism

Professor Kop's address served as a crucial discussion starter for the breakout session, which aimed to brainstorm how advances in quantum technology might alter the dynamics of the struggle against digital authoritarianism and to explore potential guardrails. His remarks underscored the profound societal impact of quantum technologies and the imperative for proactive, principles-based governance to ensure they are developed and deployed responsibly, safeguarding human rights and democratic values on a global scale.

Read more
Princeton University’s Liechtenstein Institute on Self-Determination Visits Stanford Center for Responsible Quantum Technology

On Wednesday, January 22, 2024, the Stanford Center for Responsible Quantum Technology (RQT) had the distinct pleasure of hosting a delegation from Princeton University’s Liechtenstein Institute on Self-Determination (LISD) for a dynamic discussion on transatlantic technology policy at Stanford Law School. The event was marked by great energy and a profound exchange of ideas, reflecting the shared commitment of both institutions to navigating the complex intersection of emerging technology, global governance, and societal values.

Princeton LISD leadership and their talented International Policy Associates at Stanford RQT

As the host of the meeting, Professor Mauritz Kop had the opportunity to welcome the LISD leadership and their talented International Policy Associates. He provided an overview of the mission of the Stanford RQT, explaining our framework for ensuring that quantum technologies are developed and integrated into society in a manner that is safe, ethical, and equitable. This introduction served as a foundation for a rich and meaningful conversation, where the Princeton fellows posed insightful questions. The discussion traversed a wide spectrum of pressing topics, including the challenges of regulating the suite of quantum technologies, the discovery of quantum use cases in healthcare, the complexities of export controls and quantum materials supply chains, and the crucial interface between AI and quantum computing. Furthermore, we explored the application of universal democratic values and culturally sensitive ethics to these new domains, comparing the innovation systems of the US, EU, and China, and considering the potential for technology to drive both scarcity and abundance on a planetary scale.

Freeman Spogli Institute for International Studies (FSI) and Stanford Graduate School of Business (GSB)

The engagement at the RQT was a cornerstone of the LISD delegation's broader visit to Northern California, which included several high-level meetings across Stanford University. The group’s thank-you message noted their "incredibly enriching visit" and the comprehensive nature of their discussions. Their agenda also included conversations with preeminent scholars Professor Francis Fukuyama and Professor Michael McFaul at the Freeman Spogli Institute for International Studies (FSI), and a meeting with Dean Jon Levin of the Stanford Graduate School of Business (GSB). According to the delegation, these dialogues covered a range of vital geopolitical issues, from the war in Ukraine and a possible attack on Taiwan to Stanford's pivotal role in innovation, highlighting the "multifaceted impact of technology on democracy." The visit also provided an opportunity for the delegation to connect with Princeton alumni in the region.

Collaborating on global challenges and opportunities presented by quantum technologies

The esteemed LISD delegation was led by its senior leadership, including Sophie Meunier, the Acting Director of LISD and a Senior Research Scholar at the Princeton School of Public and International Affairs; Nadia Crisan, the Executive Director of LISD; Jonathan Fredman, a Non-Resident Fellow and former senior leader at the Central Intelligence Agency; and Jana-Alessa Peper, the Institute Assistant who coordinates the IPA program. They were accompanied by the LISD’s International Policy Associates, a cohort of students pursuing careers in foreign policy and international business, who brought a diverse array of academic and professional interests to the discussion.

The dialogue reinforced the critical importance of building bridges between institutions and disciplines. The exchange of perspectives between Stanford’s technology and policy experts and Princeton’s specialists in international affairs and self-determination created a unique and valuable synergy. We extend our sincere thanks to the entire LISD delegation for a stimulating and productive session that has undoubtedly planted the seeds for future collaboration in addressing the global governance challenges and opportunities presented by quantum and other advanced technologies.

Read more
Why Quantum Computing Is Even More Dangerous Than Artificial Intelligence (Foreign Policy)

Washington DC, August 21, 2022. Foreign Policy just published an article about regulating quantum technology authored by Vivek Wadhwa and Mauritz Kop. https://foreignpolicy.com/2022/08/21/quantum-computing-artificial-intelligence-ai-technology-regulation/

United States and other democratic nations must prepare for tomorrow's quantum era today

To avoid repeating the ethical failures that accompanied AI and machine learning, democratic nations need to institute controls that both correspond to the predicted power of the emerging suite of second-generation quantum technologies and respect & reinforce democratic values, human rights, and fundamental freedoms. In fact, the quantum community itself has issued a call for action to immediately address these matters. We argue that governments must urgently begin to think about regulation, standards, and responsible use—and learn from the way countries handled or mishandled other revolutionary technologies, including AI, nanotechnology, biotechnology, semiconductors, and nuclear fission. Benefits and increased quantum-driven prosperity should be equitably shared among members of society, and risks equally distributed. The United States and other democratic nations must not make the same mistake they made with AI—they must prepare for tomorrow's quantum era today.

Read more
Intellectual Property in Quantum Computing and Market Power: A Theoretical Discussion and Empirical Analysis (Oxford University Press)

Delighted to see our article ‘Intellectual Property in Quantum Computing and Market Power: A Theoretical Discussion and Empirical Analysis’ (co-authored with my talented friends Prof. Mateo Aboy, PhD, SJD, FIT, and Prof. Timo Minssen) published in the Journal of Intellectual Property Law & Practice (Oxford University Press), the flagship peer-reviewed OUP IP journal, edited by Prof. Eleonora Rosati. Thanks to the JIPLP team for excellent editorial support! Our article: https://academic.oup.com/jiplp/article/17/8/613/6646536

This piece is the sister paper of our Max Planck @ Springer Nature published article titled ‘Mapping the Patent Landscape of Quantum Technologies: Patenting Trends, Innovation and Policy Implications’, which we wrote in parallel. The IIC quantum-patent study can be found here: https://link.springer.com/article/10.1007/s40319-022-01209-3. Our teamwork was absolutely gratifying and we hope it will inform strategic, evidence-based transatlantic policymaking.

IP and Antitrust Law

Please find a short synopsis of our work below:

We are on the verge of a technological revolution associated with quantum technologies, including quantum computing and quantum/artificial intelligence hybrids. Its complexity and global significance are creating potential innovation distortions, which could not have been foreseen when current IP and antitrust systems were developed.

Potential IP Overprotection

Using quantitative methods, we investigated our hypothesis that IP overprotection requires a reform of existing IP regimes for quantum tech, to avoid or repair IP thickets, fragmented exclusionary rights and anticommons concerns, lost opportunity costs, and an unwanted concentration of market power.

Perhaps counter-intuitively, we found that there appear to be no such overprotection problems (at least so far) in the real-world quantum computing field, at least none whose consequences would hinder exponential innovation in this specific branch of applied quantum technology, as more and more quantum patent information enters the public domain.

Patents versus Trade Secrets and State Secrets

However, developments taking place in secrecy, either by trade secrets or state secrets, remains the Achilles heel of our empirical approach, as information about these innovations is not represented by our dataset, and thus cannot be observed, replicated or generalized.

Interplay between IP and Antitrust Law: Open or Closed Innovation Systems

Policymakers should urgently answer questions regarding open versus closed innovation systems, including the interplay between IP and antitrust law, taking into account dilemmas pertaining to equal and equitable access to benefits, risk control, ethics, and overall societal impact. Crucially, intellectual property in quantum technology has a national safety and (cyber)security dimension, often beyond the IP toolkit.

Read more
Montreal World Summit AI 2022 Features Mauritz Kop Keynote on EU AI Act

Montreal, Canada – May 4, 2022 – Today, at the prestigious World Summit AI Americas held at the Palais des congrès, Mauritz Kop, TTLF Fellow at Stanford Law School and Director of AIRecht, provided a concise overview of the proposed EU Artificial Intelligence Act. He was a featured panellist in a critical discussion titled, "Does the proposed EU Artificial Intelligence Act provide a regulatory framework for AI that should be adopted globally?". The summit, themed "AI with impact: for crisis response and business continuity and recovery," brings together leading AI brains and enterprise leaders.

Mr. Kop joined fellow distinguished panellists Professor Gillian Hadfield from the University of Toronto and Dr. José-Marie Griffiths, President of Dakota State University and former NSCAI Commissioner. The session was moderated by Meredith Broadbent, Former Chairman of the U.S. International Trade Commission and Senior Adviser at CSIS.

Novel Legal Framework for AI

During the panel, Mr. Kop outlined the main points of the novel legal framework for AI presented by the European Commission on April 21, 2021. He explained that the EU AI Act sets out horizontal rules applicable to all industries for the development, commodification, and use of AI-driven products, services, and systems within the EU's territory.

A core component of the Act is its sophisticated ‘product safety framework’, which is constructed around four distinct risk categories in a "pyramid of criticality". This risk-based approach dictates that AI applications with unacceptable risks are banned, while lighter legal regimes apply to low-risk applications. As the risk level increases, so does the stringency of the rules, ranging from non-binding self-regulation and impact assessments for lower-risk systems to potentially heavy, externally audited compliance requirements throughout the lifecycle of high-risk AI systems.
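The four-tier logic described above can be sketched, very loosely, as a lookup from risk tier to regulatory burden. The tier names and obligation summaries below are the author's paraphrase for illustration only, not the Act's own wording and not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned AI practices
    HIGH = "high"                  # ex ante conformity regime
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # voluntary measures

# Paraphrased obligations per tier (illustrative only)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited from the EU market",
    RiskTier.HIGH: "pre-market conformity assessment, CE marking, life-cycle auditing",
    RiskTier.LIMITED: "transparency obligations (e.g. disclosing AI interaction)",
    RiskTier.MINIMAL: "no mandatory requirements; voluntary codes of conduct",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the paraphrased regulatory burden for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is simply that regulatory burden is a monotone function of assessed risk, which is what the "pyramid of criticality" captures.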

EU "Trustworthy AI" Paradigm

Mr. Kop emphasized that the Act aims to codify the high standards of the EU’s "trustworthy AI" paradigm, which mandates that AI systems must be legal, ethical, and technically robust, all while respecting democratic values, human rights, and the rule of law. A crucial aspect highlighted was the requirement for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. This pre-market conformity regime also extends to the machine learning training, testing, and validation datasets used by these systems. Only after a declaration of conformity is signed and the CE marking is affixed can these high-risk systems enter and be traded on the European markets.

Enforcement will be managed by a new Union-level body, the European Artificial Intelligence Board (EAIB), supported by national supervisors in each Member State, similar to the GDPR's oversight structure. Mr. Kop noted the seriousness of non-compliance, with potential fines reaching up to 6% of a company's global turnover.

Balancing regulation with innovation, the EU AI Act also introduces legal sandboxes. These are designed to provide AI developers with "breathing room" to test new inventions and foster a flourishing AI ecosystem in Europe.

Read more
Mauritz Kop Lecturer AI Regulation and Intellectual Property Law at CEIPI, University of Strasbourg

Strasbourg, France – We are pleased to feature insights from a lecture on "Intellectual Property and Ownership of AI Input and Output Data" delivered by Professor Mauritz Kop at the Centre for International Intellectual Property Studies (CEIPI), University of Strasbourg. This session was part of the University Diploma in Artificial Intelligence and Intellectual Property.

Rights and responsibilities pertaining to AI and data

Professor Kop, a Fellow at Stanford University and a strategic IP lawyer, shared his expertise on the rights and responsibilities pertaining to AI and data, offering both theoretical perspectives and practical tips reflecting the current state of technological and legal development. The lecture aimed to equip attendees with a bird's-eye view of the intertwined key elements of this multidimensional topic.

AI, data governance, and intellectual property law

Professor Kop's session underscored the dynamic interplay between AI advancement, data governance, and intellectual property law. It highlighted the necessity for legal professionals to be "double or triple educated" to navigate this complex field and for ongoing efforts to create legal frameworks that foster responsible innovation while addressing societal and ethical considerations.

The lecture concluded by stressing that AI literacy and awareness, continuous learning, and proactive legal strategies are essential for all stakeholders in the AI ecosystem.

Read more
EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

Download the article here: Kop_EU AI Act: The European Approach to AI

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, that apply to all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around a set of 4 risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.

Pyramid of criticality

The AI Act draft combines a risk-based approach based on the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with a negligible risk, and that applications with an unacceptable risk are banned. Stricter regulations apply as risk increases.

Enforcement at both Union and Member State level

The draft regulation provides for the installation of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violation of the rules can reach up to 6% of global turnover, or 30 million euros for private entities, whichever is higher.

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is formed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Read more
Cyber Week 2021 Tel Aviv University Israel

AIRecht Director Mauritz Kop will speak at Cyber Week 2021 Tel Aviv University Israel, and participate in the Panel 'Debating Collective Cyber Defense for Democracies'. He will present his Stanford essay ‘Democratic Countries Should Form a Strategic Tech Alliance’ on July 22nd at 20:00 Israel time, see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3814409

Cyber Week 2021 hosts a range of distinguished speakers from across the globe, including the Prime Minister of Israel Naftali Bennett, see: https://cw2021.b2b-wizard.com/expo/speakers

Debating Collective Cyber Defense for Democracies

Line-up and speakers of the ‘Debating Collective Cyber Defense for Democracies’ panel (notice the strong Dutch@Stanford representation):

Keynote: Ambassador Heli Tiirmaa-Klaar, Ambassador-at-Large for Cyber Diplomacy at the Estonian Ministry of Foreign Affairs

Lectures by:

Prof. Chris Demchak, Strategic and Operational Research Department, U.S. Naval War College

Dr. Lior Tabansky (Moderator), Head of Research Development, Blavatnik Interdisciplinary Cyber Research Center, Tel Aviv University

Mauritz Kop, Stanford Law School TTLF Fellow, Founder of MusicaJuridica, and Strategic Intellectual Property Lawyer at AIRecht

Marietje Schaake, International Policy Director at the Cyber Policy Center; International Policy Fellow at the Institute for Human-Centered Artificial Intelligence, Stanford University

See the complete agenda at: https://cw2021.b2b-wizard.com/expo/agenda

Democratic Countries Should Form a Strategic Tech Alliance

Kop’s essay titled ‘Democratic Countries Should Form a Strategic Tech Alliance’ concludes that to prevent authoritarianism from gaining ground, democratic governments should do four things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability & conformity standards for key 4IR technologies such as AI, quantum, 6G and Virtual Reality (VR), (3) win the race for 4IR technology supremacy, and (4) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.

REGISTER for the conference following the link: https://cw2021.b2b-wizard.com/expo/home

Read more