European Parliament's approval, Beijing's lead, and managing AI risks with insurance
Welcome back to the newsletter dedicated to keeping you informed on the evolving relationship between artificial intelligence and the law.
What you need to know this week:
- The European Parliament approves a comprehensive AI law
- Beijing pulls ahead of Washington on AI regulation
- Managing generative AI risks with insurance
Let’s jump in. [2-3 min read]
The European Parliament approves a comprehensive AI law
The European Parliament has voted in favor of adopting a comprehensive law on AI, with 84 votes in favor, 7 against, and 12 abstentions.
The proposed law aims to regulate AI technology and protect Europeans from potential risks, making the EU the first major regulator to address AI.
The next step for the AI Act is adoption by the full Parliament, expected during the 12-15 June plenary session.
The law categorizes AI applications into three groups based on their risk: applications posing unacceptable risk are banned outright, high-risk applications are subject to specific legal requirements, and the rest are left largely unregulated.
Governments worldwide are increasingly considering AI regulation, and recent events, such as the release of OpenAI's ChatGPT and discussions at the White House, highlight growing concerns and efforts to address AI's challenges and opportunities.
Beijing pulls ahead of Washington on AI regulation
China is ahead of the US in enacting rules for artificial intelligence (AI), with officials recently closing consultation on a second round of generative AI regulation.
By falling behind on AI regulation, the US risks losing its ability to shape global AI standards, which could limit the growth and competitiveness of US AI companies in global markets.
Beijing's speedy regulation achieves three goals at home: tightening central government control of debate, building up hybrid corporate entities meshed with the Chinese Communist Party, and boosting trust in AI, which drives consumer uptake and spurs growth.
Chinese authorities now have six years of experience building AI regulatory know-how, having launched their Next Generation Artificial Intelligence Development Plan in 2017. They're using regulation as a form of industrial policy, in addition to traditional subsidies.
China's AI regulations govern what businesses in China do with AI, but very few in the West believe they will restrain the Chinese government's absolute power in any real way. U.S. efforts to regulate AI take aim at both business and government.
By moving quickly on regulation, Beijing is creating a foundation for AI exports across the Global South and countries participating in its Belt and Road Initiative.
China's top priority is to minimize social disruption as it deploys AI, and Beijing has also learned from sanctions on Huawei and U.S. chip export controls that China needs to be ready to innovate alone.
Managing generative AI risks with insurance
Generative AI tools used in business can create unique risks of loss that need to be managed effectively. These risks include:
- AI "hallucination," where the AI tool provides plausible-sounding answers that are factually incorrect or misleading.
- Infringing AI training data, where AI models are trained on unlicensed sources, resulting in copyright litigation and other legal issues.
- Sabotage by AI-displaced employees, where highly skilled employees who lose their jobs to AI technology may try to sabotage the technology or the business that adopted it, causing losses to the employer and third parties.
Businesses can manage these risks by investing in commercial insurance policies that cover the unique risks associated with generative AI. Some types of policies to consider include:
- Cyber policies, which can cover risks ranging from first-party digital asset loss to third-party liability for data breaches, including those related to AI-specific risks.
- Property policies, which may "silently" cover damage from AI-related causes, including tangible harm to owned property. These policies may also provide valuable business interruption coverage if a qualifying event disrupts the company's operations.
- Technology errors and omissions policies, which can respond to claims for copyright violations, as well as AI-generated erroneous advice. These policies may also cover media wrongful incidents, including trademark/copyright infringement and plagiarism.
- Crime policies and fidelity bonds, which can respond to conduct by disgruntled employees whose jobs are made redundant by AI, such as sabotage of computer systems or diversion of automated payments.
Insurers are devising new policy wording to cover emerging technology risks, and some are already offering AI specialty policies. However, they may also introduce new exclusions to address unanticipated and unpriced risks. As businesses deploy more generative AI tools, coverage renewals in all lines of insurance will require more careful attention to wording details, so that insurance programs mesh to cover unique AI risks effectively.
This week’s AI legal-tech spotlight
LegalSifter: LegalSifter is an AI-driven legal tech company that helps users understand and sign contracts. It uses natural language processing and machine learning to provide AI-powered advice on possible changes and negotiation potential. The company sees its approach as "combined intelligence," where humans and technology work together to bring affordable legal services to users. LegalSifter plans to expand its staff to up to 100 employees by the end of the year.
Lexis Connect: LexisNexis and Microsoft have created a new tool called Lexis Connect, which uses AI to help legal teams manage cases and documents. It includes a chatbot called Ask Legal, which can answer non-legal staff's questions, freeing up lawyers' time. The tool is currently in a free trial phase, allowing companies to test it and provide feedback. Its aim is to help legal teams find relevant documents and track projects through a visual timeline. It integrates with Microsoft Teams, making it easy for teams to use.
Thanks for reading. See you next Friday.