As part of an effort to inspire companies to create and implement ethically responsible AI and protect public rights in the digital age, the US Office of Science and Technology Policy (OSTP) released a set of guidelines Oct. 4. The 73-page document, The Blueprint for an AI Bill of Rights, is organized around five principles and is intended to serve as a guide for the design, use and deployment of automated systems.

Will it affect brands and their ability to create marketing and customer experiences?

Yes, said Mark Levy, former VP of digital experiences at Comcast and now publisher of the DCX Newsletter and Podcast and a CX management consultant for MaxxoMedia. The AI Bill of Rights could have a significant impact on how brands and their customers interact with each other, and he believes it should.

“As AI becomes more prevalent in our everyday lives, we need to ensure that it is being used ethically and responsibly,” Levy said. “By creating a clear set of guidelines for how AI should be used, we can help ensure that this technology is not abused or misused by companies and governments.”

As for what effect the guidelines will have, Levy said brands will simply need to be more transparent about how they collect data and use AI in their products and services. “This also means,” he added, “that brands will need to make sure that their AI-driven systems are able to explain why they took certain actions or made certain recommendations. And customers will be able to hold brands accountable if they feel that their rights have been violated.”


The Five Principles of the AI Bill of Rights

While these guidelines are not mandatory or binding, the White House does hope they will motivate significant organizational changes in how companies apply ethical AI internally.

The blueprint highlights five principles:

  1. Protection from “unsafe or ineffective systems”
  2. Equitably designed algorithms and systems that are not discriminatory
  3. Protection from “abusive data practices via built-in protections” and “agency over how data about you is used.”
  4. Notice and explanation that informs people “an automated system is being used” and provides information related to “how and why it contributes to outcomes that impact you.”
  5. Human alternatives, consideration and fallback providing that people “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

“This blueprint is much needed considering the ways in which companies have utilized technology like AI for their own gains without considering what value — if any — they're actually offering for consumers,” said Raj De Datta, founder and CEO of Bloomreach. “For brands that have used AI in order to drive better experiences at scale, ultimately benefiting consumers, I don't think this blueprint will have a significant or negative impact on their marketing efforts or overall customer experience.”


Why the AI Bill of Rights and Why Now?

In a process that took place over the course of a year and incorporated discussion panels, meetings, listening sessions and a publicly accessible email address, the OSTP sought input from people across sectors and communities nationwide, including industry leaders, developers, policymakers and other experts, on “the issue of algorithmic and data-driven harms and potential remedies.”

The resulting handbook, From Principles to Practice, is now available as a guide for those who wish to put the suggested principles into practice. The White House blueprint follows other AI guidelines issued over the past few years by various corporations, government entities and individual states.

Google’s AI Principles were published in June 2018 and updated in 2020, the same year the Department of Defense (DOD) adopted its own set of Ethical Principles for Artificial Intelligence. In November 2021, the Recommendation on the Ethics of Artificial Intelligence was adopted at UNESCO’s General Conference by 193 member states.

In 2022, the National Conference of State Legislatures noted that at least 17 states introduced AI bills or resolutions. Four states (Colorado, Illinois, Vermont and Washington) enacted AI legislation, while task forces to study AI were commissioned in Illinois and Vermont.


Who’s Responsible for Upholding Organizational AI Ethics?

In a study of 1,200 executives across varied industries worldwide, the IBM Institute for Business Value found that more than half of responding organizations have publicly endorsed common principles of AI ethics and view trustworthy AI as a “strategic differentiator” for their organizations.

But who’s in charge of keeping a company’s AI ethical? According to 80% of respondents, “a non-technical executive,” primarily the CEO, is their organization’s primary advocate for AI ethics, a role only 15% attributed to the CEO in 2018. And of the CEOs surveyed, 79% said they are prepared to embed AI ethics into their AI practices, up from 20% in 2018.

Following the CEO, respondents also pointed to other C-level executives, board members, general counsels, privacy officers and risk and compliance officers as being most accountable for AI ethics.

The study also revealed the following data:

  • More than three-quarters of business leaders surveyed this year agree AI ethics is important to their organizations, up from about 50% in 2018.
  • More than half of respondents say their organizations have taken steps to embed AI ethics into their existing approach to business ethics.
  • More than 45% of respondents say their organizations have created AI-specific ethics mechanisms, such as an AI project risk assessment framework and auditing/review process.
  • Less than a quarter of responding organizations have operationalized AI ethics, and fewer than 20% of respondents strongly agreed that their organization’s practices and actions match (or exceed) their stated principles and values.

“As many companies today use AI algorithms across their business, they potentially face increasing internal and external demands to design these algorithms to be fair, secured and trustworthy; yet there has been little progress across the industry in embedding AI ethics into their practices,” Jesus Mantas, global managing partner at IBM Consulting, said in a statement. “Our IBV study findings demonstrate that building trustworthy AI is a business imperative and a societal expectation, not just a compliance issue. As such, companies can implement a governance model and embed ethical principles across the full AI life cycle.”

How Do You Implement Ethical AI?

The study provides business leaders with a set of recommended actions to enhance AI ethics, including:

  • Adopting a cross-functional, collaborative approach, with a holistic set of skills across all stakeholders involved in the AI ethics process.
  • Establishing both organizational and AI lifecycle governance to operationalize the discipline of AI ethics, with an approach to incentivizing, managing and governing AI solutions across the full AI lifecycle, from establishing the right culture to nurture AI responsibly to practices, policies and products.
  • Reaching beyond the organization for partnership, identifying and engaging key AI-focused technology partners, academics, startups and other ecosystem partners to establish “ethical interoperability.”

“Every brand should be mindful of how even well-intentioned efforts may cross the lines of what's laid out in the details of these guidelines,” De Datta said. “Having a strong internal understanding of where and how you use AI, as well as prioritizing greater transparency with customers as you deploy this technology, is going to be key for brands navigating these new guidelines.”

Andrew Frank, a distinguished VP Analyst in the Gartner Marketing practice, said that, from a marketing perspective, the most impactful aspect of these guidelines will be getting more marketers to focus on the ethical issues AI is raising in their practices.

“The rapid acceleration of AI capabilities like text-to-image generation and automated personalization promises marketers and consumers more rewarding experiences. However, many AI capabilities also have the potential to propagate harmful stereotypes and encourage destructive behaviors,” Frank said. “Understanding and addressing these harmful, unintended side-effects presents a profound new challenge for marketers who are not used to considering things like how bias can affect large data models. Guidelines are obviously not sufficient to solve these kinds of problems, but at least they’re shining a light on them.”