
Key AI-legal developments across US, EU, and India: patent, copyright, investment, regulation, and risk management

Welcome back to the newsletter dedicated to keeping you informed about the evolving relationship between artificial intelligence and the law.

What you need to know this week:

  1. US Supreme Court rejects computer scientist's bid to list AI as inventor on patents

  2. EU proposes comprehensive copyright rules for generative AI in new AI Act draft

  3. Law firms rush to invest in legal AI startup ‘Harvey’ as more lawyers use generative AI tools

  4. India chooses not to regulate AI, citing innovation potential and unique economic position

  5. A comparison of US and EU approaches to AI risk management

Let’s jump in. [5-7 min read]

  • Stephen Thaler, a computer scientist, petitioned the Supreme Court to recognize his artificial intelligence program, called DABUS, as an inventor on patents. Thaler challenged the US Patent and Trademark Office's rejection of his patent applications for a food container and a light beacon; the USPTO had refused the applications on the grounds that an AI system cannot be listed as an inventor.

  • Thaler argued that since DABUS autonomously generated the ideas for the food container and light beacon, it should be recognized as their inventor; the USPTO's position is that inventors must be human beings under current patent law. Thaler also contended that the term "individual" in the Patent Act should be interpreted broadly to promote innovation, but courts in the European Union, the United Kingdom, and Australia have rejected similar arguments.

  • The UK's Supreme Court heard arguments in a parallel case in March but has yet to issue a decision. To date, Thaler has prevailed only in a South African court.

  • The US Court of Appeals for the Federal Circuit ruled in August 2022 that inventors must be human, and the US Supreme Court's rejection of Thaler's petition upholds that decision.

  • Federal Circuit judges expressed skepticism during oral arguments that the plain meaning of "individual" could encompass an AI entity, and that an invention could arise with no human involvement at all.

  • The court ultimately found that there was "no ambiguity" in the Patent Act that inventors must be people, and the term "individual" cannot be stretched to include AI entities. The US Patent and Trademark Office waived its right to respond to Thaler's appeal.

  • See the full case docket here

  • The European Union is proposing new copyright rules for generative AI, which could be the world's first comprehensive laws governing the technology.

  • Under the proposed regulations, companies deploying generative AI tools, such as ChatGPT or image generator Midjourney, will have to disclose any copyrighted material used to develop their systems.

  • The EU is drafting the AI Act to regulate emerging artificial intelligence technology based on perceived risk levels, targeting areas of concern such as biometric surveillance, the spread of misinformation, and discriminatory language.

  • High-risk AI tools will not be banned, but companies deploying them will need to be highly transparent about their operations. The aim is to protect citizens' rights while regulating AI proportionately enough to foster innovation.

  • The EU's proposal is seen as a measured approach to regulating AI, and the bloc has been at the forefront of AI regulation. Notably, the rules could have implications for companies and organizations worldwide that use generative AI tools.

  • Legal AI startup Harvey has raised $21 million in fresh investor cash led by Sequoia Capital, with OpenAI Startup Fund, Conviction, SV Angel, and Elad Gil also participating in the funding round.

  • Harvey, which builds custom large language models for law firms using OpenAI's GPT-4, has more than 15,000 law firms on a waiting list to start using its services.

  • Global law firm Allen & Overy and accounting giant PricewaterhouseCoopers are among the growing number of firms adopting legal AI tools, such as Harvey's platform, to automate document drafting and research.

  • While some firms are developing AI capabilities in-house, others are still evaluating whether and how to use the technology; careful testing and guardrails are needed to protect confidential client data and avoid errors.

"This is an arms race, and you don't want to be the last law firm with these tools. It's very easy to become a dinosaur these days."

Daniel Tobey, chair of DLA Piper's AI practice

  • India is not planning any new regulations for artificial intelligence, setting it apart from governments elsewhere that are calling to keep AI in check.

  • Indian attorneys believe that regulating AI now would stifle meaningful innovation, as the country occupies a unique economic position on the brink of becoming a technology bellwether.

  • Unchecked AI in a polarized democracy like India, just a year away from a contentious 2024 general election, could set the stage for a flurry of litigation in a country not yet as litigious as its Western counterparts.

  • India already has data privacy regulations, and comprehensive federal laws such as the Digital Personal Data Protection Act and the Digital India Act are in the works; these may be enough for now to address the data privacy concerns raised by some European regulators.

  • The U.S. federal government's approach to AI risk management is risk-based, sector-specific, and highly distributed across federal agencies, but the result is neither even nor consistent. Despite the February 2019 executive order and the ensuing Office of Management and Budget (OMB) guidance, most federal agencies have still not developed the required AI regulatory plans.

  • The Biden administration has revisited the topic of AI risks through the Blueprint for an AI Bill of Rights (AIBoR), which includes a detailed exposition of AI harms to economic and civil rights, five principles for mitigating these harms, and an associated list of federal agencies’ actions. However, the AIBoR is nonbinding guidance, and most federal agencies are only able to adapt their pre-existing legal authorities to algorithmic systems, making it difficult to enforce all of the principles expressed by the AIBoR.

  • The EU takes a complex, multifaceted approach to AI risk management, with a tiered system of regulatory obligations for different digital environments, each placing a different degree of emphasis on AI.

  • The GDPR contains two important provisions on algorithmic decision-making: algorithmic systems may not make significant decisions about individuals without human supervision, and individuals have the right to "meaningful information" about the logic of algorithmic systems.

  • The AI Act will be a critical component of the EU's approach to AI risk management, imposing different regulatory obligations on consumer products and on AI used for impactful socioeconomic decisions. Covered systems will need to meet standards of data quality, accuracy, robustness, and non-discrimination, and to implement technical documentation, record-keeping, risk management, and human oversight. Entities that sell or deploy covered high-risk AI systems and fail to comply will face fines as high as 6% of annual global turnover.

  • The US and EU share similar principles for trustworthy AI, but the EU has more comprehensive and centrally coordinated regulatory coverage, with clearer enforcement powers and a focus on transparency and information sharing.

  • The US invests more funding in AI research, which may lead to new technologies that mitigate AI risks, but its agencies' ability to enforce rules remains unclear, and they may need to pursue novel litigation without explicit legal authority.

  • TL;DR: The United States and the European Union take different approaches to managing the risks of artificial intelligence (AI). The US approach is sector-specific and dispersed across agencies, whereas the EU's regulatory coverage is more comprehensive and centrally coordinated. The EU's AI Act will impose enforceable obligations on different AI applications and entities; the US approach lacks clear enforcement powers but channels more funding into AI research.

  • Read a full report here

This week’s AI legal-tech spotlight

  • AI Revise

    • LegalOn's AI Revise is an AI contract-editing tool that combines legal expertise and technology to help legal teams make contract revisions quickly and efficiently. Purpose-built for contract review, it can fix nuanced risks specific to each contract type.

  • PatentPal

    • PatentPal simplifies patent work with AI-driven tools for search, analysis, and monitoring, including report generation and patent comparison. The platform's AI algorithms identify relevant patents and analyze their strengths and weaknesses, and users can visualize and explore patent landscapes and trends.

Thanks for reading. See you next Friday.