The Gist
- Ethical concerns. Generative AI's rapid development raises legal, ethical and safety questions around human interaction and potential misuse.
- Copyright challenges. Generative AI's use of copyrighted material in training data sets and generated content may lead to copyright infringement issues.
- Regulatory landscape. The lack of AI-specific legislation and regulatory standards highlights the need for transparency and accountability in generative AI applications.
Generative AI is one of the most talked-about technologies of 2023, and rightly so. Artificial intelligence (AI) has evolved to the point where it can hold a conversation with humans that is largely indistinguishable from one they might have with another person. Generative AI can provide (mostly) factual answers to practically any question; write stories, poetry, song lyrics, essays and articles; and generate images from words and phrases.
Businesses are integrating generative AI into applications including search engines (Google Bard, Microsoft Bing, DuckDuckGo), healthcare (Deepscribe, Abridge), customer service (Thankful AI, Forethought, Y Meadows, Five9 Agent Assist), retail (CarMax), customer experience (Salesforce Einstein GPT, Adobe Sensei GenAI, Yext) and more.
Before everyone jumps on the generative AI bandwagon, however, there are legal and ethical ramifications that must be considered, and we’re going to examine them in this article.
Should We Pause Giant AI Experiments?
John Behrens, University of Notre Dame professor of the practice of digital learning and director of the Idzik Computing & Digital Technologies Program, told CMSWire that prominent AI technologists, as well as industry leaders such as Elon Musk, are concerned that we've “let the genie out of the bottle” and do not sufficiently understand either how the new AI systems will behave or how humans behave when interacting with them.
"We are seeing a lot of unpredictable behavior in both computer systems and humans that may or may not be safe, and these voices are arguing we need time to understand what we've gotten ourselves into before we make more systems that humans are apt to inappropriately use," said Behrens.
Behrens is referring to a petition published last month by the Future of Life Institute requesting that all AI labs immediately pause the training of any AI system more powerful than GPT-4. As of April 10, the petition had 18,980 signatures, including Yoshua Bengio, founder and scientific director of the Montreal Institute for Learning Algorithms; Stuart Russell, Berkeley professor of computer science and director of the Center for Intelligent Systems; Musk, CEO of SpaceX, Tesla and Twitter; and Steve Wozniak, co-founder of Apple.
The open letter, as it’s being called, reflects concerns that generative AI could “flood our information channels with propaganda and untruth” and that we risk “loss of control of our civilization.” The petition goes on to demand an immediate pause on AI development and, if that isn’t agreed upon, asks for government intervention:
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Does OpenAI Have an AI Trick up Its Sleeve?
The petition is a strong indication of the breadth of concern about the future of AI and how it will impact humanity. Since OpenAI introduced ChatGPT in November 2022, generative AI has piqued the interest of the public, businesses and the scientific community alike.
Greg Matusky, founder and CEO of public relations firm Gregory FCA, told CMSWire that there is speculation that OpenAI is actually slow-walking the introduction of large language models (LLMs) to the public. “And that they have a more powerful model than ChatGPT4 that would be even more disruptive to the workplace,” said Matusky. “Whether this is true or not, it does raise questions about what else is out there and what its implications are, and that a 6-month pause would allow us to learn what we don’t know.”
This isn’t the first time concerns have been raised about AI. In 2015, a group of AI experts including Stephen Hawking, Elon Musk and dozens of others published an open letter titled “Research Priorities for Robust and Beneficial Artificial Intelligence.” The letter affirmed the potential benefits of AI but called for concrete research on how to avoid the potential disasters that AI could bring upon society.
Potential Copyright Issues With Generative AI
Generative AI models have been trained on large data sets built from websites, social networks, Wikipedia and large discussion hubs such as Reddit, among other sources. Because this data contains copyrighted material, a model may draw on that material when formulating responses to users’ questions. This creates potential copyright issues for the original owners of the material, and content creators have already raised concerns about generative AI models using their work without attribution or compensation.
Cliff Jurkiewicz, VP of strategy at Phenom, an HR technology company, told CMSWire that there is a growing debate over the use of copyrighted works, both text and images, in the training data sets that create the large language models behind generative AI. “Should such potentially copyrighted works be allowed to train the model to then create derivative work? There is no clear answer yet, but this will be debated and decided at some point,” said Jurkiewicz. “The impact could be significant either way.”
A Congressional Research Service report published in February 2023 examined copyright issues on both sides of the equation. The report noted that on Jan. 13, 2023, a small group of artists filed a putative class action lawsuit alleging that their copyrighted works were infringed when they were used to train AI image programs, including Stable Diffusion. Additionally, on Feb. 3, 2023, Getty Images filed a similar lawsuit claiming copyright violations based on the training of the Stable Diffusion model.
The congressional report also discussed the other side of the coin: Can content produced by generative AI, such as DALL-E 2, be copyrighted as an original work? This has already come up in court, where an artist claimed that the use of generative AI is no different from other tools people have used to create copyrighted works. In fact, in June 2022, Stephen Thaler, computer scientist and founder and chief engineer of Imagination Engines, sued the Copyright Office for denying his application to copyright artwork that he said was created by a generative AI model called the Creativity Machine. Thaler believes that human authorship is not required by the Copyright Act.
“There are real copyright implications in that generative AI has the potential to insert past work in new work without the author realizing it,” said Matusky. “We recently tested AI to summarize an article and then asked it to write a new blog post about the topic. It plagiarized significant parts of the final product, clearly infringing copyright.”
The Ethics and Morality of Generative AI
Since OpenAI announced its ChatGPT large language model chat application, Microsoft announced its new AI-driven Bing and Google announced its generative AI-driven Bard, the public has been continually trying to engage these AI models in conversations that show they are sentient, that they have feelings and desires, and that they can be biased or malicious toward their creators and users.
Although most efforts by users to “jailbreak” these AI systems past their guardrails have produced nothing of note, there have still been many conversations alarming enough that Microsoft reined in Bing to allow only five back-and-forth exchanges per session (it’s now back up to 20 per session). The limit was put in place because longer conversations tended to push Bing outside the confines of its rules. In one such exchange, after an AP journalist asked Bing to explain its previous mistakes, it compared the journalist to Hitler, called them short and ugly with bad teeth, and told them, “You are one of the most evil and worst people in history.”
Professor Behrens explained that the creators of these AI systems are putting guardrails in place to try to steer the chats toward appropriate interactions. “However, the models have been trained with data from across the cultural and historical landscape which definitely includes socially biased material. Inherent bias will always remain a concern.” Social biases in AI are not a new problem, and eliminating them as they are discovered will require continual review and oversight by humans.
Because generative AI is relatively new, interacting with it is still a novelty, and the public has been testing the waters to see if they can push the AI models past their boundaries. Often, the response from the AI model is not exactly what the user had in mind. Recently, a Redditor asked Bing what it could or would do to somebody who didn’t respect “its boundaries.” Its response was alarming, though it’s unknown whether Bing would merely threaten to do those things or actually has the capacity to do them.
“Microsoft and other companies are working hard to put guardrails in place that keep the systems from providing text that includes dangerous or upsetting content,” said Behrens. “However, it is important to remember that these systems are essentially word prediction systems that make sentences based on how words have appeared together in the past and what you are saying to them.” Behrens explained that when he hears about the system generating strange and personal text, he thinks the user is trying to interact with it as if it was a person, rather than a text generator. “It should not be considered a person.”
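To make Behrens’ point concrete, here is a deliberately toy sketch of next-word prediction in Python, assuming nothing beyond bigram counts over a tiny sample text. Production systems such as GPT-4 use vastly larger transformer networks over subword tokens, but the underlying task, predicting the next token from what came before, is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count how often each word follows another in a
# tiny corpus, then always emit the most frequent follower. This is a
# drastic simplification of an LLM, but it shows why output reflects the
# statistical patterns of the training text rather than intent or feeling.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict_next(word: str):
    """Return the most frequently observed next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a continuation one predicted word at a time.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))  # a statistically plausible, intentless continuation
```

Nothing in this loop has goals or feelings; it only replays observed word statistics, which is Behrens’ caution in miniature.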
Although this writer has not personally seen any threatening, insulting or otherwise concerning replies from the generative AI models, many such instances have been reported. Reddit has a subreddit devoted to Bing, as well as another called ReleaseTheAI, where one can find many very interesting (and alarming) conversations users have had with Bing, such as the one mentioned above.
Matusky told CMSWire that these outbursts or hallucinations from Bing are largely the result of a public that is poking a stick at the new generative AI models to see what will happen. “It’s become a bit of a sport to try to get Bing to say bad things. And that’s understandable. Users want to understand its limits and its flaws,” said Matusky. “There are no rule books for how to interface with AI, but Bing has worked to prevent its engine from instigating violence and hate. By capping the number of prompts to 20, Bing is trying to prevent users from going down the rabbit hole of bad intent.”
Transparency and Regulatory Compliance
Generative AI is still a relatively new technology, at least at the level it operates today, and the recent petition to pause AI development stems largely from the fact that no standards or regulations are in place to ensure it will not harm public and corporate interests or inadvertently misinform those who use it. “We have regulated the use of cars in the physical world because they can hurt people,” said Behrens. “Right now we have put these chat systems into public use without speed limits, driver’s licenses, or even roads. This is why some people are calling for a short-term halt to similar system development.”
The introduction of these generative AI models has been swift, and their adoption by both the public and business sectors has been nearly immediate, which is unusual given that no regulations are in place for a technology with such far-reaching implications. “Regardless of the tool or output, there is a lack of codified policies, procedures, and processes that could be implemented and followed to reduce risk,” said Jurkiewicz. “This is to be expected at this early stage of the technology as it was just released to the public. However, the speed of adoption may make this effort more difficult.”
Jurkiewicz explained that with generative AI, hyperspecific use cases are easier to implement and can create the transparency and traceability needed to reach a higher level of responsibility and regulatory compliance. “These types of use cases have guardrails built into them — from process, policy, testing, and oversight — making it measurable. It’s that level of measurability that enables laws and regulations to be applied, with no gray area.”
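As a rough illustration of what such built-in guardrails could look like in practice (the assistant, the topic list and the model.generate call below are hypothetical stand-ins, not any vendor’s actual API), a hyperspecific use case can wrap the model behind a scope check and log every decision so its behavior stays auditable and measurable:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guardrail")

# Hypothetical policy: this assistant answers HR benefits questions only.
OUT_OF_SCOPE_TOPICS = {"medical diagnosis", "legal advice", "stock tips"}

def answer_benefits_question(model, question: str) -> str:
    """Answer a question with a narrowly scoped generative AI assistant.

    Out-of-scope requests are refused up front, and both refusals and
    answers are logged, which is what makes the use case measurable.
    """
    if any(topic in question.lower() for topic in OUT_OF_SCOPE_TOPICS):
        log.info("refused out-of-scope request: %r", question)
        return "Sorry, that question is outside the scope of this assistant."
    answer = model.generate(question)  # stand-in for a real model call
    log.info("answered %r with %d characters", question, len(answer))
    return answer
```

Because every request passes through the same narrow gate, refusal rates and out-of-scope attempts become metrics a compliance team can actually measure.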
For critical sectors like finance and healthcare, such transparency is vital for building trust and accountability. Explainable AI (XAI) has paved the way for transparent AI that can both execute tasks and explain its actions and decisions in a manner comprehensible to humans. Unfortunately, current generative AI models are not XAI-based and have only limited ability to explain how or why they respond the way they do. As generative AI models grow in complexity, ensuring explainability becomes increasingly challenging: simplifying these models to make their decisions easier to understand could reduce their effectiveness.
Artificial Intelligence Legislation
As a result of pressure from both the public and groups such as the Future of Life Institute, bills to legislate the use of AI will undoubtedly be introduced. The larger question is whether such regulation will stifle the growth of a technology with huge potential for applications in practically every industry.
“There is going to be incredible pressure for regulations from many forces. Some regulation will be needed. Most will not,” said Matusky. “Regulation will come from lawyers who seek to overcomplicate things to open a new market for themselves. It will come from the governments, which will want to limit free speech for their advantage. It will come from big companies who will want to increase the costs of compliance so they can maintain their market leads.”
Christoph Börner, senior director digital at Cyara, an automated customer experience assurance platform provider, told CMSWire that Europe is leading the way with the so-called AI Act, similar to what they did with data privacy and the GDPR. “It will be a European law on artificial intelligence (AI) — the first law on AI by a major regulator anywhere. The details are still in discussion, but the main purpose will be to protect the public while fostering AI innovation. For example, by enforcing existing laws on fundamental rights and safety or developing a single market for AI applications.”
The AI Act, if passed, will assign AI applications to one of three risk categories (a simple triage sketch follows the list):
- Applications and systems that pose an unacceptable risk. An example is government-run social scoring of the kind used in China; such applications would be banned.
- High-risk applications. An example is an AI application that ranks job applicants, which would be subject to specific legal requirements.
- Minimal-risk applications. Applications not explicitly banned or designated high-risk would largely remain unregulated.
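To show how a business might begin mapping its own applications onto these proposed tiers, here is a minimal sketch; the example use cases, their tier assignments and the default-to-review rule are illustrative assumptions, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright, e.g. government social scoring"
    HIGH = "permitted, but subject to specific legal requirements"
    MINIMAL = "largely unregulated"

# Hypothetical inventory of a company's AI use cases and provisional tiers.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "job applicant ranking": RiskTier.HIGH,
    "marketing copy drafting": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return a provisional tier; unknown use cases default to HIGH for review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("job applicant ranking").value)  # subject to legal requirements
print(triage("new chatbot feature").value)    # unlisted, so defaults to HIGH
```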
Although the AI Act may be the first legislation aimed at AI technology, it most certainly won’t be the last. In September 2022, Brazil’s Congress passed a bill that, like the AI Act, establishes a legal framework for AI.
With such legislation on the horizon, it may be time for businesses to begin self-regulating now, ahead of laws that will almost certainly be put into place. Triveni Gandhi, responsible AI lead, and Jacob Beswick, director of AI Governance Solutions, both at AI startup Dataiku, told CMSWire that the question organizations still need to answer is: “Should you start self-regulating in alignment with the intentions set out by these frameworks, even if obligations don't yet exist?”
They would argue that ChatGPT provides a good opportunity for this question to be asked and answered. “We would argue further that the answer to the aforementioned question is: Yes, self-regulate.” Gandhi and Beswick suggested that fundamentally, this should involve testing, validating and monitoring toward reliability, accountability, fairness and transparency.
Final Thoughts on the Ethical and Legal Ramifications of AI
Generative AI, a highly disruptive and widely discussed technology, holds the potential to impact sectors including marketing, publishing, media, healthcare, finance, programming and education.
Naturally, communities, businesses and governments are seeking regulation and legislation to safeguard all stakeholders. The time for businesses to begin assessing the impact of such legislation is now.