Ensuring a future in which AI’s benefits are equitably and efficiently distributed, fundamental rights, democracy, and national security are protected and preserved, and AI risks are quickly identified and mitigated, all amid rapid advances in AI innovation and safety research, is a monumental yet crucial task. There are several mechanisms through which the US can work toward this future for its people, such as private sector commitments to responsible AI (RAI) development and deployment, continued engagement and collaboration with industry, safety, and academic experts and civil society, and sustained private and public investment in AI research and development. Guaranteeing that these mechanisms come to fruition, however, is another matter entirely, and this is where regulation comes in.
While there are currently over 400 AI bills under development across 41 US states, only a handful of states have, to date, enacted comprehensive, enforceable AI legislation that doesn’t simply target specific kinds of AI systems, impacts, or risks; particular deployment or use contexts; or individual sectors, domains, and industries. On a state-by-state level, AI regulation remains deeply fragmented, which suggests that the regulations that are especially well designed and thought out will more heavily influence the national AI regulatory strategy eventually established by the federal government.
More concretely, the US, in contrast to the EU, is pursuing a vertical regulatory approach whereby, in the absence of federally enforceable standards, AI regulation is largely left up to individual states. Whether this is the right strategy remains to be seen, but one thing is clear: highly successful regulation at the state level will likely produce “trickle-up” effects, informing and influencing the federal government’s AI regulation strategy. Most would expect these effects to come from states with large populations and a significant foothold in the AI innovation ecosystem, such as California and New York, and while those states are certainly influential, others, most notably Colorado and Connecticut, have emerged as surprising AI regulation leaders.
The regulation this post will examine, Connecticut’s Senate Bill 2: An Act Concerning Artificial Intelligence (SB 2), is quite possibly the most comprehensive piece of enforceable US AI legislation enacted to date. Possessing a far broader scope than any of its counterparts, applying to both the private and public sectors, and laying the groundwork for sustained RAI innovation, SB 2, which draws inspiration from many components of the EU AI Act (for a deep dive into the AI Act, see Lumenova’s blog series), is poised to have profound implications for the US regulatory landscape.
This post will begin by breaking down the key actors and technologies targeted by SB 2, after which it will explain the obligations the regulation imposes, followed by its enforcement structure and provisions. Before concluding, we’ll also briefly examine some of SB 2’s additional provisions, mainly in the form of certain targeted requirements, AI-specific oversight bodies, and government-related AI initiatives.
Key Actors and Technologies Targeted
SB 2 primarily targets AI developers and deployers. Developers are understood as any person doing business in the state who develops or makes substantial modifications to an AI system, whereas deployers are defined more simply as any person who deploys an AI system in the state, including third parties contracted by deployers. Under SB 2, AI developers and deployers are held accountable, while consumers, broadly defined as any state resident, are protected.
As risk-centric legislation, SB 2 focuses on high-risk and general-purpose AI systems. High-risk AI (HRAI) systems are defined according to whether they play a significant role in consequential decision-making contexts, such as determining access to critical goods and services, facilitating employment outcomes, or aiding in criminal justice procedures, to name a few examples. General-purpose AI (GPAI) systems, on the other hand, are defined according to their broad capabilities repertoire, versatile performance across different tasks, and ability to integrate with downstream systems or applications.
Importantly, SB 2 also covers generative AI (GenAI) systems, which are foundation models—large AI models that require lots of data and compute—capable of producing synthetic digital content across audio, visual, and text domains.
Core Obligations
In this section, we’ll begin by examining developer obligations, followed by those of deployers. We also note that nested within these obligations are several important consumer rights, such as:
- The right to opt out of data processing.
- The right to correct or update any personal data used for processing purposes.
- The right to request a human decision-maker in lieu of an AI system.
- The right to a pre-use notice stating whether an AI system will play a significant role in consequential decision-making.
At the end of this section, we’ll also separately break down the requirements for AI-generated deceptive synthetic media content.
Beginning on February 1st, 2026, AI developers must:
- Protect consumers from the risks of algorithmic discrimination, adhering to a standard of reasonable care (i.e., acting as a reasonably prudent person would to prevent foreseeable harm in real-world circumstances).
- Provide deployers with the following kinds of documentation:
- A statement of intended use, intended purpose, intended outputs, and intended benefits.
- Summaries of training data, model limitations, and foreseeable risks of algorithmic discrimination.
- Statements outlining what performance evaluation measures were taken before the system was made available for deployment.
- Descriptions of the data governance, data lineage, and bias mitigation measures taken to address potential algorithmic discrimination risks.
- Descriptions of the measures taken to manage foreseeable algorithmic discrimination risks linked to deployment practices.
- Explanations of monitoring and use protocols for when a system is leveraged for consequential decision-making.
- Any additional information the deployer might require to make sense of system outputs and monitor system performance for risks of algorithmic discrimination.
- Provide deployers, or any third parties they contract, with all the information they require to successfully deploy a model, including model and dataset cards and any information necessary to conduct an impact assessment. A developer who also functions as a deployer doesn’t need to provide this documentation unless they plan to make their model available to a non-affiliated deployer.
- Make the following information publicly available via their website or in a public use case inventory, regularly updating it to ensure continued relevance:
- The array of high-risk AI systems the developer makes available for deployment.
- The methods the developer leverages to address potential algorithmic discrimination risks and any modification made to a system to mitigate such risks.
- Within 90 days of an AI system undergoing a substantial modification, update the publicly available information listed above.
- If they’re developing GenAI or GPAI models that generate or manipulate synthetic content, provided that the content isn’t clearly artistic or creative:
- Authenticate and label synthetic content so that it’s easily detectable by consumers at the first point of exposure (e.g., via watermarking; a minimal labeling sketch follows this list).
- Ensure that technical solutions for synthetic content verification and authentication are reliable, robust, effective, and interoperable.
- Describe implementation costs for technical solutions and demonstrate how they correspond with state-of-the-art approaches.
- Exemptions for AI-generated content:
- AI-generated text, provided that it’s created to inform the public on matters of public interest, has been reviewed by a human evaluator, isn’t evidently misleading, or is controlled by an editor who is responsible for publishing it.
- If an AI system is leveraged to edit a written text but doesn’t make any significant changes to the text.
- If an AI system is leveraged to identify or investigate criminal endeavors and actions.
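To make the labeling obligation above more concrete, here is a minimal, hypothetical sketch of how a developer might attach a machine-readable AI-disclosure label to a generated PNG image using the Pillow library. SB 2 doesn’t prescribe a specific technical solution, and the file names and label key below are our own assumptions; a robust, interoperable approach would pair visible disclosures with provenance standards (e.g., cryptographically signed metadata) so that labels are detectable at the first point of exposure and difficult to strip.

```python
# Minimal sketch: embedding and reading an AI-disclosure label in PNG metadata.
# Assumes the Pillow library (pip install Pillow); file names and the label key
# are hypothetical and chosen purely for illustration.
from PIL import Image, PngImagePlugin

DISCLOSURE_KEY = "ai-disclosure"
DISCLOSURE_TEXT = "This image was generated or manipulated by an AI system."

def label_image(src_path: str, dst_path: str) -> None:
    """Copy an image, adding a PNG text chunk that discloses AI generation."""
    image = Image.open(src_path)
    metadata = PngImagePlugin.PngInfo()
    metadata.add_text(DISCLOSURE_KEY, DISCLOSURE_TEXT)
    image.save(dst_path, "PNG", pnginfo=metadata)

def read_label(path: str) -> str | None:
    """Return the disclosure label if present, else None."""
    image = Image.open(path)
    # PNG text chunks are exposed via the .text attribute on PNG images.
    return getattr(image, "text", {}).get(DISCLOSURE_KEY)

if __name__ == "__main__":
    label_image("generated.png", "generated_labeled.png")
    print(read_label("generated_labeled.png"))
```

Plain metadata like this is easy to remove and isn’t consumer-facing on its own, which is why SB 2’s reliability, robustness, and interoperability requirements point toward more durable verification techniques.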
Beginning on February 1st, 2026, AI deployers must:
- Establish, implement, and document a comprehensive risk management system that targets preventable algorithmic discrimination risks in HRAI systems. This risk management system can cover multiple HRAI systems, must be regularly assessed, reviewed, and updated throughout the AI lifecycle, and must adhere to the following standards:
- Align with the NIST AI Risk Management Framework and ISO 42001, or any other nationally or internationally recognized risk management framework.
- Give adequate consideration to the deployer’s size and complexity.
- Describe the intended use cases of the AI system in question, its application scope, and its overall nature.
- Describe the kind of data processed by the system, both in terms of volume and sensitivity.
- Administer and document impact assessments that include the following components (a minimal sketch of such a record appears after this list of deployer obligations):
- Statements describing the intended purpose, use, benefits, and deployment context.
- An analysis of preventable algorithmic discrimination risks, the nature of such risks and their impacts, and the steps taken to mitigate them.
- Descriptions of data categories and system outputs.
- If a system has been fine-tuned, descriptions of data categories leveraged for post-training enhancements.
- Descriptions of the metrics utilized to measure system limitations.
- Descriptions of the measures taken to ensure transparency and consumer disclosure during active use.
- Descriptions of post-deployment monitoring, human oversight, and user protection measures.
- In response to substantial modifications made to an HRAI system, conduct and document another impact assessment and explain the degree to which the system’s use corresponded with the intended use description provided by the developer. If a deployer or third-party contractor administers an impact assessment to comply with another existing regulation, this requirement is fulfilled, provided that the assessment corresponds with the components listed above.
- Maintain a record of their impact assessments for a period of three years following initial system deployment.
- If a system is deployed in a consequential decision-making context:
- Provide consumers with a pre-use notice informing them that the system will be a factor in consequential decision-making.
- If they also function as a controller—determining the purpose for which personal data is processed—inform consumers of their right to opt out of having their data processed by the system, and promptly comply with consumer-submitted requests within 45 days. If the deployer can’t comply for technical reasons, they must explain to the consumer why these reasons prevent them from complying with their request.
- Provide consumers with a statement outlining the intended purpose of the system.
- Provide consumers with their contact information.
- Provide consumers with a simple and clear explanation of the HRAI system they are interacting with.
- If an AI system leveraged for consequential decision-making produces a detrimental decision outcome, provide the affected consumer with:
- A disclosure outlining the reasons for which the decision was made.
- An explanation of the role and contribution of the AI system in facilitating the decision.
- An explanation of the data the system processed to make the decision, including its lineage.
- A means through which to correct or update any personal data the system processes.
- The ability to appeal the decision and request a human decision-maker in lieu of the AI system.
- Annually evaluate and review whether potential algorithmic discrimination risks stemming from deployed HRAI models are managed appropriately or likely to materialize.
- Make the following kinds of information publicly accessible, regularly updating it where appropriate, and providing consumers with clear instructions on how to access it:
- An inventory of currently deployed HRAI systems.
- A description of the measures taken to mitigate algorithmic discrimination risks.
- A detailed description of the data collected by the deployer for processing purposes, including its quantity and lineage.
- Inform each consumer who interacts with an AI system that they’re interacting with an AI system. If it’s obvious that consumers are interacting with an AI system, no such disclosure is required.
- Deployers are exempt from impact assessment requirements if:
- They have fewer than 50 employees.
- They don’t use their own data for training purposes.
- The system is used in line with its intended use and learns from data collected from external sources.
- The impact assessments the developer administered and provided for the system are disclosed, including the statements required to fulfill impact assessment obligations.
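To illustrate how a deployer might organize the impact assessment components listed above, here is a minimal, hypothetical sketch of an assessment record as a Python dataclass. The field names are our own and do not come from SB 2; they simply mirror the components described above, and a real assessment would be far more detailed.

```python
# Minimal sketch of a structured impact assessment record for an HRAI system.
# Field names are illustrative assumptions that mirror the components listed above.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DiscriminationRisk:
    description: str              # nature of the preventable algorithmic discrimination risk
    potential_impact: str         # who could be harmed, and how
    mitigation_steps: list[str]   # steps taken to mitigate the risk


@dataclass
class ImpactAssessment:
    system_name: str
    assessment_date: date
    intended_purpose: str                   # purpose, use, benefits, and deployment context
    deployment_context: str
    discrimination_risks: list[DiscriminationRisk]
    data_categories: list[str]              # categories of data processed by the system
    output_descriptions: list[str]
    fine_tuning_data_categories: list[str] = field(default_factory=list)
    limitation_metrics: list[str] = field(default_factory=list)
    transparency_measures: list[str] = field(default_factory=list)
    monitoring_and_oversight: list[str] = field(default_factory=list)

    def missing_sections(self) -> list[str]:
        """Return required sections that are still empty (a simple completeness check)."""
        required = {
            "intended_purpose": self.intended_purpose,
            "deployment_context": self.deployment_context,
            "discrimination_risks": self.discrimination_risks,
            "data_categories": self.data_categories,
            "output_descriptions": self.output_descriptions,
        }
        return [name for name, value in required.items() if not value]


# SB 2 requires assessments to be retained for three years after initial deployment
# and repeated whenever a system undergoes a substantial modification.
RETENTION_YEARS = 3
```

Keeping assessments in a structured, machine-readable form like this makes the three-year retention requirement and the re-assessment trigger after substantial modifications easier to track and audit.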
As for AI-generated deceptive synthetic media content, relevant requirements are explained below:
- Individual persons can’t distribute any deceptive synthetic content within 90 days preceding an election or primary if:
- They are aware of the fact that the content they’re distributing is deceptive synthetic content.
- The content depicts a public official or figure and is distributed without their consent.
- The distributor exhibits an overt lack of consideration regarding the harm such content could cause to a public official or figure.
- Deceptive synthetic media content can be distributed if:
- It takes the form of an image with a clear written disclaimer stating that the image has been manipulated. If an image is an edited or manipulated version of an existing image, the distributor must provide a citation to the original source material.
- It’s an audio clip with an easily comprehensible spoken disclaimer, at the beginning and end of the clip, stating that the audio has been manipulated. For audio content that is over 60 seconds long, such disclaimers must also be provided at 30-second intervals (see the sketch after this list for an illustration of this rule). If an audio clip is an edited or manipulated version of an existing audio clip, the distributor must provide a citation to the original source material.
- It’s a video that contains a clear written disclaimer, present throughout its duration, stating that it was manipulated. Where a synthetically generated video edits or manipulates an existing video, a citation to the original source material must be provided.
- It’s broadcast as a newscast, interview, news documentary, or live coverage of a news event, with AI-generated content disclaimers included or live statements in the case of live coverage.
- It’s published as an online newspaper, magazine, or periodical with AI-generated content disclaimers.
- Advertisers must file a sworn statement with the State Elections Enforcement Commission, under penalty of perjury, certifying that the content they broadcast or communicate doesn’t contain any deceptive synthetic media; a copy of this statement must be kept for a minimum of four years by the broadcasting entity.
- Any person who unlawfully distributes deceptive synthetic media without an intent to cause harm is subject to a class C misdemeanor. If there’s a clear intent to cause harm, the distributor is guilty of a class A misdemeanor. A distributor who commits another offense within five years of a prior violation is subject to a class D felony.
- Any person who unlawfully disseminates an AI-generated or manipulated intimate image (see SB 2 directly for a definition) or video of another person who is clearly identifiable is subject to a class A misdemeanor. If this person intentionally distributes the intimate image to more than one person, they are subject to a class D felony.
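As a small illustration of the audio disclaimer rule described above, the sketch below computes where disclaimers would need to appear for a clip of a given length: at the beginning and end of every clip, plus at 30-second intervals for clips longer than 60 seconds. This reflects our reading of the interval requirement and is illustrative only, not an official compliance tool.

```python
# Minimal sketch: computing disclaimer placement times (in seconds) for a
# manipulated audio clip, based on the interval rule described above.
# This reflects our own reading of the requirement and is illustrative only.

def disclaimer_timestamps(duration_seconds: float) -> list[float]:
    """Return the times at which a spoken disclaimer should be placed."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    # Every clip needs a disclaimer at the beginning and the end.
    timestamps = {0.0, float(duration_seconds)}
    # Clips longer than 60 seconds also need disclaimers at 30-second intervals.
    if duration_seconds > 60:
        t = 30.0
        while t < duration_seconds:
            timestamps.add(t)
            t += 30.0
    return sorted(timestamps)

if __name__ == "__main__":
    print(disclaimer_timestamps(45))   # [0.0, 45.0]
    print(disclaimer_timestamps(95))   # [0.0, 30.0, 60.0, 90.0, 95.0]
```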
Enforcement
The Attorney General (AG) is exclusively responsible for enforcing the provisions of SB 2. Upon written request, the AG can require that AI developers, deployers, or third-party contractors disclose all relevant documentation, such as impact assessments, risk management protocols, or records of compliance measures taken, if the AG deems it necessary to conduct an investigation.
Moreover, if the AG determines that a developer or deployer can resolve a violation before a civil action is initiated, the AG must notify the developer or deployer of the potential violation, providing them with a 60-day grace period in which to address it. In determining whether to allow a developer or deployer to cure a violation, the AG can consider the number of prior violations, the size and complexity of the organization, the kind of business it conducts, the risk of harm to public safety, and whether the violation occurred due to human error.
Fortunately, developers and deployers can limit the possibility that a violation results in civil actions if:
- The violation is discovered or identified as a consequence of user or deployer feedback.
- The violation is identified via adversarial testing or red-teaming, provided that such procedures adhere to the standards outlined by NIST.
- The violation is cured within 60 days of discovery.
- Notice is provided to the AG that potential harms resulting from the violation have been mitigated appropriately.
- The developer or deployer in question is already adhering to the latest versions of the NIST AI RMF and/or ISO 42001, or some other nationally or internationally recognized risk management standard.
Before initiating civil action, the AG must formally consult with the executive director of the Commission on Human Rights and Opportunities (CHRO) to determine whether any violation complaints have already been submitted to the commission. If a complaint has been submitted and resolved, the AG may initiate civil action, and must also provide, in a publicly accessible manner via their website, instructions for how such complaints can be submitted to the CHRO.
While we discussed these issues in the previous section, it’s also worth reemphasizing that SB 2 designates certain actions associated with the dissemination of deceptive synthetic media or AI-generated intimate imagery as misdemeanors or felonies.
Additional Provisions
Now that we’ve covered core obligations and enforcement provisions, we’ll dive into some of SB 2’s less prominent components, beginning with legislative leaders, who are defined as follows under Title 4 of the CT general statutes:
- Definition: “(1) For members of the majority party of the Senate, the president pro tempore of the Senate; (2) for members of the minority party of the Senate, the minority leader of the Senate; (3) for members of the majority party of the House of Representatives, the speaker of the House of Representatives; (4) for members of the minority party of the House of Representatives, the minority leader of the House of Representatives.”
Legislative leaders, via a formal request submitted to the executive director of the Connecticut Academy of Science and Engineering (CTASE), can designate a member of the academy as a regulatory liaison for the AG’s office and the Department of Economic and Community Development (DECD). This liaison’s responsibilities include:
- Developing a compliance tool through which AI developers and deployers can evaluate their adherence to SB 2’s provisions.
- Developing an impact assessment tool for deployers and third-party contractors that ensures any administered impact assessments correspond with regulatory requirements.
- Collaborating with stakeholders at the University of Connecticut’s Law School to help small businesses and startups comply with SB 2.
- Informing the design and development of controlled and supervised safety testing environments for AI systems.
- Facilitating and supporting the establishment of a human oversight office and consultation program between various state, business, and stakeholder groups.
- Examining and evaluating AI adoption efforts, with a focus on identifying relevant challenges and needs, understanding existing and emerging AI regulations, and learning how businesses source AI talent.
- Developing and implementing a plan for the state government regarding the provision of high-quality compute resources to businesses and researchers.
- Examining the potential benefits of a state-wide unified healthcare research collaborative that fosters RAI best practices, AI awareness and education, and considers salient patient privacy risks.
- Informing the design and development of testing hubs and safeguards for AI misuse scenarios, risk assessment standards, performance evaluations, and funding for state-level AI oversight.
In addition to establishing a CTASE member as a DECD point of contact, SB 2 also supports the development of a legislative AI advisory council, composed of 23 members, whose primary responsibility is to report on and make recommendations to the General Law Committee and the DECD commissioner regarding all matters of interest concerning AI. Moreover, all state agencies must conduct and report the results of a GenAI study and pilot projects focused on improving state agencies’ efficiency and productivity, and the Office of Workforce Strategy is further required to integrate AI training initiatives into workforce training programs.
In the interest of making AI training initiatives accessible to state residents, SB 2 also requires that the Board of Regents for Higher Education (BOR) establish the Connecticut Citizens Academy and certificate programs centered on AI training, awareness, and RAI best practices, offering a variety of accreditations for completed training.
Finally, SB 2 requires the Department of Public Health (DPH) to research and develop AI governance standards for healthcare, whereas the DECD must establish grant programs aimed at reducing potential health inequalities while fostering support initiatives for integrating AI into workforce training protocols.
Conclusion
SB 2, as a standalone piece of AI legislation, represents a major step in the right direction by outlining clear and enforceable developer and deployer obligations, focusing on the most salient AI risks for consumers, and laying the groundwork for AI training, upskilling, and awareness initiatives at both the legislative and citizen levels. However, despite its comprehensiveness, SB 2 still falls short in several areas, including:
- Identifying and addressing all known systemic and localized risks linked to HRAI, GenAI, and GPAI systems, as well as narrow AI systems leveraged for high-risk purposes such as workplace monitoring or biometric identification in public places.
- Defining sufficiently targeted risk management, safety testing, and cybersecurity protocols. For reference, the NIST AI RMF is a framework, whereas ISO 42001 is, in fact, a concrete standard. Depending on which of these resources developers and deployers follow for risk management purposes, interoperability problems could emerge.
- Accounting for malicious threat actors that may leverage GenAI systems to create and orchestrate sophisticated adversarial attacks on private or public entities and/or critical physical and digital infrastructures.
- Accounting for possible high-risk and high-impact dual-use cases linked to rapidly advancing foundation models.
- Establishing targeted guidelines and best practices for AI use cases in scientific, academic, and/or industry research and development. This is distinct from regulating AI R&D, which is a separate issue altogether.
- Implementing robust safety and security protocols through which government-procured AI assets are protected and preserved.
- Balancing its strong focus on the private sector with sufficient attention to transparency, fairness, and accountability measures for government actors that use AI.
It’s probable that most of these shortcomings will be addressed in future AI legislation developed in CT, but for now, they represent serious concerns that shouldn’t be taken lightly. Regardless, they still stress the importance of proactive compliance management, and even if they aren’t eventually covered by state regulation, federal regulation will likely capture them sooner or later.
Going a step further, in the unlikely event that federal regulation also fails to capture these shortcomings, organizations would still be wise to address them through internal AI policies that demonstrate a core commitment to RAI best practices—organizations that adopt an intensive approach to AI risk management and safety will be much more capable of adapting to changes in the AI ecosystem, capitalizing on novel high-value AI use cases, sourcing AI talent, responding to emerging regulations, and maintaining a trustworthy reputation.
For readers interested in exploring additional AI legislation briefs, governance concepts and approaches, and regulatory analyses, we suggest that you visit Lumenova AI’s blog, where you can also explore related content on GenAI and RAI topics.
Alternatively, for readers who are beginning to develop AI risk management and/or governance frameworks, strategies, or protocols, we invite you to check out Lumenova AI’s RAI platform and book a product demo today.