January 17, 2024
Key Takeaways: Regulatory Initiatives Concerning Automated Decision Making Technologies and Generative AI in California
From Google and Nvidia to OpenAI and Anthropic, many of the world’s leading technology companies call California home. It’s no surprise, then, that California has established itself as both a national and global leader in information technology innovation.
However, strong leadership and oversight that align fundamental human values and civil rights with the various stages of technological design, development, and deployment are critical to ensuring that the risks of high-impact technologies are mitigated while benefits are maximized. This is especially true for AI and data-driven decision making technologies.
To this end, California has released several important regulatory initiatives that all converge on one point: the development and implementation of robust safeguards and civil rights protections relating to the design, development, and deployment of AI and Automated Decision Making Technologies (ADMTs).
This post will briefly illustrate the key takeaways from the following regulatory initiatives:
- California Privacy Protection Agency’s Automated Decision Making Technology Regulations (CPPA ADMTR)
- California Assembly Bill No. 331 (AB 331)
- California Governor Gavin Newsom’s Executive Order N-12-23 (EO N-12-23)
The CPPA ADMTR targets companies that develop and deploy ADMTs within consequential decision making contexts—decisions that can significantly impact consumers in areas such as access to critical goods and services, employment processes, and financial services. Overall, the initiative promotes transparency and accountability regarding the function, use, and real-world impacts of these technologies, establishing a series of consumer-specific rights, such as the right to opt-out, the right to access, and the right to know.
AB 331 mirrors several core components of the CPPA ADMTR, especially in terms of transparency and accountability through the provision of consumer rights and protections. However, the scope of this initiative is broader in that ADMT developers and deployers include businesses, state and local government agencies, partnerships, and individuals. Moreover, the initiative also emphasizes the importance of regular impact assessments—ADMT risk evaluations—and compliance reporting, particularly when these technologies undergo significant changes, updates, or improvements.
EO N-12-23 differs substantially from the CPPA ADMTR and AB 331, targeting a series of both localized and systemic risks specific to state and local government agencies’ development and deployment of generative AI (GenAI) systems. The proposed requirements range from GenAI impact assessments on vulnerable communities to the provision of AI training and upskilling opportunities for state employees to the evaluation of GenAI risks to critical infrastructure. In a nutshell, EO N-12-23 strives to hold the state of California accountable for the real-world impacts of GenAI on the state as a whole and its citizens.
- Key Definitions (CPPA ADMTR):
- Automated Decision Making Technology (ADMT): “any system, software, or process — including one derived from machine-learning, statistics, or other data-processing or artificial intelligence — that processes personal information and uses computation as a whole or a part of a system to make or execute a decision or facilitate human decision making.” ADMTs also include technologies used for profiling.
- Key actors: businesses should be held accountable for their use of ADMTs whereas consumers should be protected from inequitable decision outcomes.
- The right to opt-out: consumers have the right to opt-out of ADMTs when they are leveraged to drive or supplement consequential decisions. Businesses must also describe this right to consumers and provide a means by which consumers can verify that opt-out requests have been fulfilled. However, to exercise this right, consumers must verify their identity, except in cases where ADMTs are leveraged for behavioral advertising.
- The right to access: consumers have the right to access information on how ADMTs are leveraged to drive or supplement consequential decisions. Businesses must also provide plain language descriptions concerning the intended purpose and function of ADMTs as they relate to consumers.
- The right to know: consumers have the right to know how and why a given ADMT-driven or ADMT-supplemented decision was made. When such decisions result in the denial of goods or services, businesses must clearly communicate the process by which the decision outcome was reached. Consumers can also request that human decision makers replace ADMTs where appropriate.
- Additional information and plain language descriptions: consumers should be able to easily access additional information regarding the use of ADMTs. To this end, businesses must provide plain language explanations of the following: key parameters and logic of ADMTs, the intended output of ADMTs, how the intended output drives decision making and the scope of human involvement in this process, and whether ADMTs have been certified for validity, reliability, and fairness.
- Protections for minors: consumers below the age of 16 are, by default, opted-out of behavioral advertising. For consumers below the age of 13, opt-in consent must be obtained from parents or guardians.
- Submitting complaints: businesses must provide a simple method by which consumers can submit complaints concerning their use of ADMTs.
- When the right to opt-out is not required: businesses need not offer an opt-out when ADMTs are leveraged for security purposes, to preserve physical safety or prevent malicious behavior, or when alternative means of processing are unavailable.
- Key Definitions (AB 331):
- AI: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing a real or virtual environment.”
- ADMT: “a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions.”
- Key actors: developers and deployers of ADMTs include individual persons, partnerships, state or local governments, and businesses.
- Publicly accessible explanations (i.e., the rights to access and know): developers and deployers must provide publicly accessible explanations regarding the function and/or role of ADMTs in decision making and illustrate the steps taken to prevent algorithmic discrimination. In cases where ADMTs are leveraged in high-stakes decision making contexts, such as hiring and termination procedures, housing, and the provision of health and financial services, explanations concerning how and why decision outcomes were reached must be provided.
- Prevention of algorithmic discrimination: ADMTs should never be used for algorithmic discrimination—when decisions are made on the basis of protected characteristics, such as race or gender.
- The right to opt-out: developers and deployers must allow consumers to opt-out of ADMTs and request an alternative decision making process, such as human review.
- For deployers: deployers must state, in simple terms, the intended purpose of ADMTs and notify individuals when such tools are being used.
- For developers: developers must communicate the limitations and potential drawbacks of their ADMTs to deployers. Developers should also describe their training data and demonstrate how their tools have been evaluated for validity and explainability.
- Annual impact assessments: developers and deployers must conduct annual ADMT impact assessments, sharing assessment outcomes with the California Civil Rights Department within 60 days—a failure to do so within the stated period results in a $10,000 fine.
- Governance programs and civil action: developers and deployers should implement governance programs and appoint key personnel to manage and uphold these programs. Violations of the bill’s provisions are also grounds for civil action, brought either by public attorneys or the California Attorney General.
- Changes, updates, or improvements: when ADMTs undergo significant changes, updates, or improvements, developers and deployers must adapt their safety parameters to account for these factors. Developers must also conduct another impact assessment.
- Submitting complaints: employees can submit complaints concerning their employers’ use of ADMTs, and employers should address complaints in a timely manner.
- Key Definitions (EO N-12-23):
- GenAI: AI systems that can generate “novel text, images, and other content.”
- Key Actors: California state and local government agencies developing and deploying GenAI as it affects the welfare of the state and its citizens.
- Comprehensive reporting: relevant government agencies must generate a report, for presentation to the governor’s office, illustrating the most significant risks and benefits of GenAI for the state of California. This report should be regularly reassessed to account for novel AI innovations, and reassessments should involve experts from civil society, academia, government, and industry.
- Threats to critical infrastructure: by 2024, relevant government agencies should conduct a collaborative risk analysis on GenAI threats to California’s critical infrastructure.
- Continuous monitoring: by 2024, GenAI systems must be continually monitored for unintended behaviors and threats to human control.
- Development and deployment guidelines: by 2024, relevant government agencies should establish GenAI development and deployment guidelines, focusing on pre-identified high-risk use-cases. Guidelines should also include metrics measuring GenAI impacts on processes conducted by the state.
- Impact assessments: by 2024, relevant government agencies should assess the impacts of GenAI on vulnerable communities, targeting equitable outcomes and pre-identified high-risk use-cases.
- Inventory of high-risk use-cases: relevant government agencies should create and uphold an inventory of GenAI high-risk use-cases.
- Regulatory sandboxes: by March 2024, relevant government agencies should create safe environments in which novel GenAI technologies can be tested and evaluated before deployment.
- Training opportunities: state employees should have access to AI training opportunities that focus on how to leverage GenAI responsibly and effectively and how to identify GenAI risks.
We at Lumenova AI firmly believe in the tremendous potential of AI while also recognizing the significance of responsible AI practices. From day one, our goal has been to assist enterprises like yours with automating, simplifying, and streamlining the entire AI Governance process.
With Lumenova AI you can:
- Launch governance initiatives.
- Establish policies and frameworks.
- Assess model performance.
- Pinpoint potential risks and vulnerabilities.
- Consistently monitor and report on discoveries.
Our platform follows a structured yet flexible workflow to help enterprises govern the entire AI lifecycle. Find out how it works by requesting a product demo today.