
Will Federal Initiatives Stave Off Disparate State-Level Regulation of Artificial Intelligence?

 

Published: February 26, 2024


Earlier this month, Secretary of Commerce Gina Raimondo announced the creation of a consortium intended to bring together artificial intelligence (AI) creators and users, academic institutions, government and industry researchers, and civil society organizations to support the development and deployment of safe and trustworthy AI. The U.S. Artificial Intelligence Safety Institute Consortium (AISIC) includes over 200 members selected from academia, advocacy organizations, private industry, and the public sector.

Noticeably absent from the initial cadre of AISIC members are state governments and state legislative organizations. While this omission might be remedied as AISIC’s membership grows over time, it is important to recognize that state and local governments are not standing still while Washington contemplates our AI future. To the contrary, state and local governments are moving ahead in fits and starts to regulate AI across a spectrum of applications, including labor and employment and consumer finance.

These early forays into AI regulation bear some resemblance to the evolution of data privacy law over the past two decades. In the two decades since California passed its data breach disclosure law, the other 49 states have followed suit; however, each of the fifty state laws has its own thresholds and requirements, a patchwork that is a boon to data privacy lawyers. Since the passage of the California Consumer Privacy Act in 2018, no fewer than 13 U.S. states have enacted data privacy legislation, with nearly two dozen additional bills working their way through the legislative process. All of this has transpired in the absence of comprehensive federal law.

It has been oft-remarked that the U.S. constitutional framework casts the states, in U.S. Supreme Court Justice Louis Brandeis’ formulation, as “laboratories” of democracy, providing them with significant latitude to craft different regulatory and legal regimes to solve public policy challenges. But laboratories can get messy, and while scientists generally want the results of their experiments to remain in the lab, AI, cyber, and data privacy laws are not confined by geography. The evolving patchwork of state data privacy law can be extremely difficult for business enterprises operating in multiple jurisdictions to navigate. AI legislation seems poised to travel down this same path.

The Development of AI State Laws

States are moving quickly to address AI. While some legislators are striving to coordinate their efforts across state lines, other legislative bodies are carving out their own paths. Nevertheless, many of the laws being contemplated at the state level have tended toward the regulation of AI in the labor and employment setting. For example, six states have recently introduced bills to address the use of algorithmic decision-making in hiring and promotion processes. This is in addition to perhaps the most high-profile of these measures: New York City’s Local Law 144, which went into effect last year to regulate the use of automated employment decision tools (AEDTs) to screen applicants for employment or employees for promotional opportunities. Local Law 144 makes it unlawful to use an AEDT to screen candidates or employees for an employment decision unless (1) the AEDT is subject to an annual bias audit by an independent auditor before use and (2) the results of the most recent bias audit and the AEDT’s distribution date are published on the employer’s website.

The spirit, and in some cases the substance, of proposed state legislation resembles Local Law 144. Illinois, Maryland, Massachusetts, New Jersey, New York, and Vermont all have bills in process. These proposals vary in their particulars, but several share features worth noting: requirements that employers notify employees and job candidates of their use of predictive data analytics and other AI-powered tools; “bias audits” conducted by certified third parties; and various data privacy requirements regarding employee data.

More broadly, legislators in several states have introduced a clutch of bills to tackle so-called algorithmic discrimination, a broad category comprising the biases that can emerge within systems that aggregate, process, and interpret data. These biases in turn can allegedly result in inequities in the treatment of individuals or groups based on characteristics such as race, gender, or income. Research performed by Husch Blackwell’s Data Privacy team reveals that no fewer than 20 such bills have been introduced across 13 states.

While most of the proposed legislation is broadly conceived, there are significant points of intersection with specific industries. Most obviously, several bills target those who develop and deploy AI systems, mandating certain audits and disclosures in connection with their products and services. Other bills seek to regulate how the insurance and financial industries use AI. For instance, Oklahoma’s HB3577 would require health insurers to disclose whether AI-based algorithms are used or will be used in the insurer’s utilization review process. Additionally, many of the bills contain language that, while avoiding overt mention of the financial industry, is easily applied to consumer financial services, among other areas.

AISIC to the Rescue?

According to the National Institute of Standards and Technology (NIST), the agency under which AISIC operates, AISIC will initially focus on enabling the development and deployment of safe and trustworthy AI systems by operationalizing NIST’s AI Risk Management Framework (AI RMF) and addressing the challenges identified in the AI RMF roadmap. As with NIST’s Cybersecurity Framework, adoption of the AI RMF by organizations is voluntary; the framework was designed to apply across all industry sectors and to give AI stakeholders and users approaches that increase the trustworthiness of AI systems.

NIST intends for AISIC to leverage the agency’s history of collaborating with the private and public sectors to develop reliable and practical measurement and standards-oriented solutions. Specifically, NIST expects AISIC to:

  • Establish a knowledge and data-sharing space for AI stakeholders
  • Engage in collaborative and interdisciplinary R&D through a Research Plan
  • Prioritize research and evaluation requirements and approaches that allow for a more complete and effective understanding of AI’s impacts on society and the U.S. economy
  • Identify and recommend approaches to facilitate the cooperative development and transfer of technology and data between and among AISIC members
  • Identify mechanisms to streamline input from federal agencies on topics within their direct purviews
  • Enable assessment and evaluation of test systems and prototypes to inform future AI measurement efforts

To the extent that state legislatures and municipal entities are enacting legislation, it would be wise for AISIC to engage with state governments so that AISIC’s data, best practices, and frameworks can inform state legislative processes. And in keeping with the “laboratory” role of the states, that engagement can be a two-way proposition: the experiences of various states can inform AISIC’s efforts, with the goal that a national consensus and approach emerge from the disparate processes underway.

Professional:

Erik Dullea

Partner