The Gist
- Transparent guidelines. SB205 mandates transparency and accountability in AI systems.
- Risk assessment. Marketers and managers must understand AI risks and compliance.
- Legal influence. SB205 sets a precedent for AI regulation and consumer protection.
The United States has yet to establish any federal legislation regarding artificial intelligence safeguards. With recent AI-related lawsuits garnering national attention, states have stepped forward to develop their own digital laws.
Let's examine Colorado's new AI law.
The legislation most poised to influence national AI safeguards is Colorado's SB205, a consumer-protection measure that requires companies developing high-risk AI systems to establish safeguards against discrimination. Hailed as the first AI legislation of its kind, it is aimed directly at AI-enhanced processes where consumer protection is a priority. Colorado Gov. Jared Polis signed SB205 into law in May.
The combined emphasis on AI fairness and decision transparency makes SB205 a significant step toward responsible AI development and deployment. But is its framework comprehensive enough to merge the interests of consumers and the firms exploring AI capabilities?
Colorado's AI Law: Defining the Risk of Discrimination
SB205 mandates that both developers of high-risk AI systems and entities that deploy such systems demonstrate reasonable processes to avoid algorithmic discrimination. To appreciate the scope and value of the AI law, marketers and managers must understand what defines "high risk" and "algorithmic discrimination."
Algorithmic discrimination means an algorithm-based system that ultimately results in unlawful differential treatment of, or impact on, a stakeholder based on actual or perceived age, color, disability, ethnicity, genetic information, language barriers, national origin, race, religion, reproductive health, sex, veteran status or other classifications.
As for high risk, SB205 applies the term to a system that makes a substantial, consequential decision affecting a given stakeholder, creating the possibility of harm. For example, in a hiring process that includes AI in its decision-making protocol, a job applicant is a stakeholder; the risk is that an AI-based application system unfairly overlooks their qualified skills.
In his post, Gerry McGovern gives many examples of the dire consequences an unchecked high-risk AI decision system can produce. There have been many precedents for adopting a cautious approach to algorithmic decisions, such as the image recognition dilemma I noted in 2020.
Related Article: AI, Privacy & the Law: Unpacking the US Legal Framework
Outlining Steps for AI Safeguards Within the Customer Experience
SB205 establishes the remedies that entities developing AI systems must consider for compliance. These remedies are treated as steps of "reasonable care" — tasks that indicate the developer complied with the key provisions of the act.
In short, this means the provider of a high-risk AI decision system must provide the following when deploying the system into their operations:
Disclosure Statement
AI providers must give a detailed statement to those who will be using their high-risk AI systems. This statement should include important information about how the system works and its potential impacts.
Impact Assessment Information
AI providers must also supply the necessary information and documentation that allows users to assess the impact of the high-risk AI system. This means providing the tools or data needed to understand how the system might affect people or operations.
Public Summary
AI providers must make a public statement that summarizes the types of high-risk AI systems they have developed or significantly modified. This summary should include how they handle potential risks of algorithmic discrimination, which means unfair treatment or bias in the system's decisions.
Risk Disclosure
If an AI provider learns of any known or reasonably foreseeable risks of algorithmic discrimination, they must inform the attorney general and the users or other developers of the high-risk system within 90 days. This includes risks discovered by the provider or credible reports from users.
These guidelines inform companies and government agencies on how they should ensure transparency and accountability in the development and use of high-risk AI systems. They also establish a correction process for a customer or patron who believes the technology has treated them unfairly: a path to have issues investigated, to correct relevant data or to pursue a filed complaint.
Related Article: AI Copyright Infringement Quandary: Generative AI on Trial
Policymakers Work to Create AI Safeguards as AI Lawsuits Emerge
SB205 arrives at an inflection point as more AI-related lawsuits appear in state and federal courts. These cases stem from the debate over using intellectual property to train AI. SB205 is not meant to address deepfakes or fraud, so neither SB205 nor legislation inspired by it will set a precedent for how courts resolve these cases.
However, the rulings from the IP cases, which will take months or even years to reach judgment, are likely to influence how training data, copyright and permission of data usage are viewed legally. These decisions could impact legislation inspired by Colorado’s AI law.
The passage of SB205 also arrives as the world watches how the EU AI Act, a landmark law for regulating artificial intelligence, plays out. Various companies and organizations, including GitHub, Hugging Face, Creative Commons and Open Future, have criticized it, pressing the European Parliament for clearer definitions of processes mentioned in the Act, such as AI components, limited real-world testing for AI projects and proportional requirements for different foundation models. The EU AI Act sets a global precedent in regulating AI risks while encouraging innovation.
That precedent will filter into the Colorado AI law.
Related Article: AI Development & Ethics: AI Is Designed to Lie
Time to Adjust for CX Leaders
SB205 will take effect on Feb. 1, 2026, giving companies time to understand, develop and vet processes to evaluate and use reasonable care to avoid algorithmic discrimination.
Other states are considering legislation regarding AI usage within customer experiences. Oklahoma, Massachusetts, New Jersey, Illinois and California all have pending legislation. As policymakers work to pass these laws, customer experience leaders should develop a work plan on how their operations will need to adjust.
Leaders of affected AI-related projects can work with developer teams to conduct a proper risk assessment. This involves identifying consumer interaction with any AI system as a stakeholder activity and recognizing the risk that activity poses to the consumer. These steps form the basis of the mitigation measures and safeguards that must be established around artificial intelligence.
Marketers should also follow this AI law's evolution. Despite signing it into law, Gov. Polis remains open to amending it if compliance begins to stifle business innovation among Colorado tech companies.
Still, the Colorado AI law is a good start toward a framework that permits innovation while protecting consumers' rights against the outputs of AI-based decisions.