The AI Verdict
44% of all legal work in the U.S. replaceable according to Goldman Sachs report, Italy bans ChatGPT, and more
A report released by Goldman Sachs on March 26 found that legal work is the second most susceptible category of work to disruption by AI, after administrative tasks, with an estimated 44% of legal tasks in the U.S. replaceable by AI.
The report provided examples of legal tasks that could be easily replaced by generative AI. These included “review[ing] documents and proposed actions for compliance with legal, regulatory, and corporate standards; provid[ing] arguments and scenarios for and against compliance in unclear cases.”
The report estimated that 45% of clerical support workers, 34% of professionals, 31% of technicians and associate professionals, and 29% of managers in the EU could be exposed to AI automation.
Bennett Borden, the chief data scientist at DLA Piper, recently stated that any lawyers refusing to understand generative AI are “like the dinosaurs the day before the meteor hit: they’re extinct.” In light of the report, Borden encourages lawyers to understand the risks and value of the technology, plan for disruption, and take advantage of it, much as businesses did in the transition from steam to electric power.
The report found that AI would not fully replace the majority of the working population, but could expose 300 million full-time jobs to automation globally.
Read the full report here
Italy has banned ChatGPT temporarily due to privacy concerns. Italy’s data protection authority accused OpenAI of violating GDPR by unlawfully collecting personal data and not having an age-verification system to prevent minors from accessing illicit material.
Italy is the first country to ban ChatGPT outright; the service is also unavailable in China, North Korea, Russia, and Iran, where OpenAI has chosen not to offer it.
OpenAI has 20 days to stop processing the personal data of people in Italy and to comply with the authority’s demands. If it fails to do so, it could face fines of up to €20 million or 4% of global annual revenue, whichever is higher.
Legal action at the EU level is unlikely while lawmakers continue to revise the AI Act proposed by the European Commission two years ago, but national data protection watchdogs may take action on their own.
The Center for Artificial Intelligence and Digital Policy has lodged a complaint with the FTC to block OpenAI's commercially distributed GPT-4 software, calling it "biased, deceptive" and failing to meet the commission's AI standards.
The policy group claims that GPT-4 poses risks of disinformation and cybersecurity threats, and endangers privacy and public safety.
The complaint argues that OpenAI's product satisfies none of the requirements set forth by the FTC for transparent, explainable, fair, and empirically sound AI, and there should be independent oversight of commercial AI products in the US.
OpenAI has made 11 plug-ins for GPT-4 available for routine consumer services, but the center claims the company has failed to take reasonable steps to avert risks.
The issue of regulating AI has also garnered international attention, with the European Union considering a proposed Artificial Intelligence Act to regulate AI applications, products, and services.
Read the full complaint here
More than 1,000 technology leaders and researchers, including Elon Musk and Steve Wozniak, have signed a letter calling for a pause in the development of advanced AI systems, warning that they present profound risks to society and humanity. The letter was released on Wednesday by the non-profit organization Future of Life Institute.
The push to develop more powerful chatbots, such as GPT-4 developed by OpenAI, has led to a race that could determine the next leaders of the tech industry. However, these tools have been criticized for getting details wrong and spreading misinformation.
The open letter called for a pause in the development of AI systems more powerful than GPT-4 to introduce shared safety protocols for AI systems. The development of powerful AI systems should advance "only once we are confident that their effects will be positive and their risks will be manageable."
Sam Altman, the CEO of OpenAI, did not sign the letter. Gary Marcus, an entrepreneur and academic, said that persuading the wider tech community to agree to a moratorium would be difficult. Swift government action is also unlikely, as lawmakers have done little to regulate artificial intelligence.
Experts worry that these systems could be misused to spread disinformation with more speed and efficiency than was possible in the past. Before GPT-4 was released, OpenAI asked outside researchers to test for dangerous uses of the system. They showed that it could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items.
Read the full letter here