White House takes action on AI risks, EU set to enforce major AI law, and new bill proposes transparency in political AI ads
Welcome back to the newsletter dedicated to keeping you informed about the evolving relationship between artificial intelligence and the law.
What you need to know this week:
White House announces measures to address AI risks, meets with top tech leaders
EU likely to establish world's first major AI law this year, according to tech regulation chief
Legal departments cautious of generative AI while vendors and law firms are more eager to integrate
Congresswoman introduces bill to mandate disclosure of AI in political advertising
Let’s jump in. [3-5 min read]
1. White House announces measures to address AI risks, meets with top tech leaders
The White House announced a series of measures to address the challenges of artificial intelligence, driven by concerns about the technology's potential risks for misinformation, discrimination, and privacy.
The US government will release draft policy guidance for federal agencies to follow when developing, procuring, and using AI systems.
Leading AI developers will participate in a public evaluation of AI systems at the AI Village at DEFCON 31. DEFCON is an annual cybersecurity conference where hackers, experts, and government officials gather to discuss emerging tech and cybersecurity. The evaluation will assess how AI models align with the principles and practices outlined in the Biden-Harris Administration's AI Bill of Rights and AI Risk Management Framework.
The National Science Foundation will invest $140 million in AI research centers that will apply AI to challenges in areas such as climate change, agriculture, and public health.
The White House is concerned about the possible use of AI-created deepfakes and misinformation to undermine the democratic process, job losses linked to rising automation, biased algorithmic decision-making, physical dangers from autonomous vehicles, and the threat of malicious hackers using AI.
Administration officials met with the CEOs of Google, Microsoft, OpenAI, and Anthropic to emphasize the importance of ethical and responsible AI development. President Joe Biden later underscored that companies have a fundamental responsibility to make sure their products are safe and secure before deployment or public release.
2. EU likely to establish world's first major AI law this year, according to tech regulation chief
The European Union is likely to achieve a political agreement this year to establish the world's first major artificial intelligence law, according to Margrethe Vestager, the bloc's tech regulation chief.
The EU's Artificial Intelligence Act is expected to be voted on by a committee of lawmakers on May 11 after a preliminary deal was reached by members of the European Parliament on Thursday.
The EU AI Act aims to mitigate the risks of societal damage from emerging technologies while remaining "pro-innovation," according to Vestager.
Although the EU AI Act is anticipated to be passed this year, lawyers have suggested that it could take a few years for it to be fully enforced.
The G7 digital ministers agreed to adopt "risk-based" regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.
3. Legal departments cautious of generative AI while vendors and law firms are more eager to integrate
Legal tech vendors are eager to incorporate generative AI, but corporate legal departments are more cautious. While many legal departments are open to using generative AI, they are likely to adopt it more slowly than law firms, given their concerns around risk and privacy.
Vendors and outside counsel serving corporate legal departments need to disclose their use of GPT and be willing to educate legal departments about the technology.
Legal departments tend to do a deeper drill-down on their tech stacks than law firms, so vendors have more hoops to jump through to integrate generative AI into their solutions. The expensive generative AI integration process might not be worth it if vendors fail a conservative legal department's vetting process.
4. Congresswoman introduces bill to mandate disclosure of AI in political advertising
Rep. Yvette Clarke has introduced a bill that would require political groups to disclose the use of AI-generated content in political ads.
The bill would update federal campaign finance laws to include disclosure requirements for the use of AI in political ads.
Clarke expressed concern about the potential for AI-generated content to manipulate and deceive people on a large scale, which could have consequences for national and election security.
The bill comes amid increased attention to the impact of chatbots and AI on misinformation and on the American workforce.
Read the full text of the proposed bill here.
This week’s AI legal-tech spotlight
Luminance is an AI-powered legal process automation tool that streamlines the processing of legal documents. It analyzes large volumes of data for tasks such as contract review and due diligence, identifying key information and flagging potential risks.
LexisNexis launched Lexis+ AI, a platform that combines its legal data with AI to support research and drafting. With conversational search, summarization, and document generation, Lexis+ AI aims to transform legal work. LexisNexis, a leader in legal AI, is partnering with law firms for feedback and says it is developing the platform responsibly, with attention to its impact and to preventing bias.
Thanks for reading. See you next Friday.