Regulation begins to take hold on employers' use of AI
State legislatures, including California's, are taking notice of the rapid proliferation of artificial intelligence across industries and sectors, and several states have introduced bills to create government task forces to study AI and to regulate its use.
California has been steadily building its regulatory framework for over a year and is positioning itself as a key regulator of AI in employment.
The California Civil Rights Council has proposed regulations addressing disparate treatment and disparate impact arising from the use of AI in employment. Under the proposal, it would be unlawful for an employer to use selection criteria (such as a qualification standard, employment test, automated-decision system, or proxy) that have an adverse impact on, or constitute disparate treatment of, applicants or employees under the Fair Employment and Housing Act (FEHA), unless the criteria are shown to be job-related and consistent with business necessity.
The council's proposed regulations define a "proxy" as a technically neutral characteristic or category correlated with one of the classes protected under the FEHA. An "automated-decision system" is defined as a computational process that screens, evaluates, categorizes, recommends, or otherwise makes a decision, or facilitates human decision-making, that impacts applicants or employees.
Assembly Bill 331, proposed in January, would require developers and deployers of automated-decision tools to conduct an "impact assessment" of such tools, including a summary of the data collected and processed by the tool; an adverse impact analysis based on sex, race, or ethnicity; and a description of efforts to address algorithmic discrimination and to evaluate the tool's validity or relevance.
The results of an impact assessment would need to be maintained for two years, and any significant updates to automated-decision tools would likewise warrant an impact assessment.
Employers may provide evidence of anti-bias testing or similar proactive efforts to avoid unlawful discrimination to support a defense available under the council's proposed regulations. However, the regulations do not presently provide any guidance on, or examples of, "anti-bias testing."
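Neither AB 331 nor the council's proposed regulations prescribe a method for the adverse impact analysis or for "anti-bias testing." One common approach in practice is the "four-fifths rule" drawn from federal employment guidelines: compare each group's selection rate to that of the most-selected group and flag any ratio below 0.8. The Python sketch below illustrates that calculation only; the function names, the 0.8 threshold, and the sample data are illustrative assumptions, not anything the bill or regulations specify.

```python
# Illustrative adverse-impact ("four-fifths rule") check. The proposed
# regulations do not define "anti-bias testing"; this sketch assumes an
# employer compares selection rates across demographic groups, one
# common approach in practice -- not a method the regulations mandate.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, where selected is a bool."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    return {group: selected[group] / applied[group] for group in applied}

def four_fifths_check(outcomes, threshold=0.8):
    """Return each group's impact ratio and whether it clears the threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # An impact ratio below the threshold flags potential adverse impact.
    return {group: (rate / top, rate / top >= threshold)
            for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical data: group A selected 40/100, group B selected 25/100.
    sample = ([("A", True)] * 40 + [("A", False)] * 60
              + [("B", True)] * 25 + [("B", False)] * 75)
    print(four_fifths_check(sample))
    # {'A': (1.0, True), 'B': (0.625, False)} -> group B would be flagged
```

In this hypothetical, group B's selection rate is 62.5% of group A's, below the four-fifths threshold, so the tool would warrant further scrutiny under this approach.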
Read the whole article from Gibson Dunn here.
Policy: New York City to enforce AI bias audit law for employment decisions: what employers need to know
The New York City Department of Consumer and Worker Protection (DCWP) will begin enforcing the city's AI bias audit law on July 5, 2023; the law prohibits the use of automated decision tools in employment decisions unless certain requirements are met.
Employers must determine whether they are using AI tools to screen candidates for hiring or promotion in New York City and must engage an independent auditor to conduct a bias audit of any such tool. They must also publish the results of the bias audit on their website and provide notice of the use of AI in hiring and promotion decisions to applicants and employees residing in New York City.
Employers who violate the AI law could face fines between $500 and $1,500 per violation, per day.
Employers using AI tools should familiarize themselves with the tools being used; create policies governing their use; monitor, audit, and conduct risk assessments of the tools; train key personnel on applicable AI laws and internal policies; and be cognizant of existing AI restrictions in other jurisdictions.
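For tools that score candidates rather than make binary selections, the DCWP's rules describe an impact ratio based on "scoring rates": the share of each category scoring above the sample median, divided by the highest category's share. The minimal Python sketch below illustrates that arithmetic; the group labels and scores are hypothetical, and an actual audit must be performed by an independent auditor under the rule's own definitions.

```python
# Sketch of a scoring-rate impact ratio of the kind NYC's bias audit
# rules describe for scoring tools. Hedged illustration only: consult
# the DCWP rules and an independent auditor for the authoritative method.
import statistics

def scoring_impact_ratios(scores_by_group):
    """scores_by_group: dict mapping category -> list of numeric scores."""
    all_scores = [s for scores in scores_by_group.values() for s in scores]
    median = statistics.median(all_scores)
    # Scoring rate: share of a category scoring above the full-sample median.
    rates = {
        group: sum(score > median for score in scores) / len(scores)
        for group, scores in scores_by_group.items()
    }
    top = max(rates.values())
    # Impact ratio: each category's scoring rate relative to the highest.
    return {group: rate / top for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical categories and scores.
    print(scoring_impact_ratios({
        "group_a": [55, 62, 71, 80, 90],
        "group_b": [40, 48, 52, 66, 75],
    }))
    # {'group_a': 1.0, 'group_b': 0.666...}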
Read the full article from WilmerHale here.
Read the full provision here.
Policy: UK government publishes white paper on a "pro-innovation approach" to AI regulation
On March 29, 2023, the UK government published a white paper on AI entitled "A pro-innovation approach to AI regulation."
The white paper sets out a new "flexible" approach to regulating AI that aims to build public trust in AI and make it easier for businesses to grow and create jobs.
The AI industry in the UK is prospering, employing more than 50,000 people and contributing £3.7 billion to the economy last year. The UK has twice as many companies offering AI products and services as any other European country, and hundreds more are founded every year.
The approach is designed to achieve three objectives: drive growth and prosperity, increase public trust in AI, and strengthen the UK's position as a “global leader in AI.”
The UK government will avoid heavy-handed legislation and instead empower existing regulators to prepare tailored, context-specific approaches that suit how AI is used in each specific sector.
Regulators are to consider five principles to facilitate the safe and innovative use of AI in their industries: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Read the full white paper here.
Survey: split in attitudes toward AI and machine learning between law firms and legal departments
A survey by the Berkeley Research Group, Relativity, and ACEDS found a split in attitudes toward AI and machine learning between law firms and legal departments, with a lack of education cited as the biggest obstacle to adoption.
Among the survey's 242 respondents were 35 attorneys, 25 consultants, 119 litigation/practice support professionals, and 61 paralegals. More than 80% of respondents reported using artificial intelligence and machine learning (AIML) technology in the past year, and 86% anticipated that their organization would continue to use AIML technology in the next 12 months. Of the respondents who answered affirmatively about their organization's use of AIML technology, 35% expect to use it a lot, 43% expect to use it some, and 14% expect to use it a little; only 7% did not anticipate using AIML technology in their job in the next year.
60% of respondents cited a lack of education as the biggest obstacle to AI and machine-learning adoption, especially when negotiating ESI protocols while using tools such as technology-assisted review (TAR).
According to the corporate focus group, outside counsel makes decisions about AI and machine-learning adoption and use 40% of the time, while legal departments are the decision-makers 30% of the time, contributing to a divide in AI and machine-learning understanding between legal departments and law firms.
40% of respondents cited costs as an obstacle, listing it as the second-most-reported barrier to adoption.
Despite the hurdles, 82% of respondents said their organizations already use AI and machine-learning technology, and 86% anticipated continued use over the next 12 months, with 14% answering no.
Read the full report here.