The Gist
- It wasn't me. It was AI. The importance of user accountability in AI interactions and the need to avoid scapegoating technology.
- Just the facts, ma'am. The critical role of fact-checking and corroborating information from AI sources before acting on it.
- Responsibility meets ethics. Users need to educate themselves on AI functionality and limitations, fostering responsible and ethical use of technology.
Sometimes you can tell just from the headline of a news story that whoever wrote the piece, and whoever gave the nod for its publication, either did not bother to do their basic homework or was mainly out to get people to click on the story and pass it along without reading it (often it is both).
Such a story was recently published by The Washington Post, and the headline read: “ChatGPT invented a sexual harassment scandal and named a real law prof as the accused.”
My immediate reaction to that headline, knowing full well what the story was pitching (having read such stories for several months now, ever since the mainstream launch of ChatGPT in November 2022), was to ask: Would such a story have been published if, instead of ChatGPT, the source of the allegation had been a single human being who was widely known to occasionally make things up, some of it wild and obviously invented, some of it half true, and some of it subtly deceptive?
The answer of course is no.
What's the Difference Between AI and Humans?
So, let’s go back to basics and ask: What is the difference between an AI telling you something and some human being telling you that same thing? In both cases, if you are a responsible, self-respecting adult, you would double- and triple-check what you were told before you accept it, especially if it pertains to a consequential matter or to information you are going to act on.
Consider the Credibility of the Source
First: if you are such an adult, you would consider the credibility of the source. In this case, ChatGPT, as I mentioned, is very well known (or maybe not?) to hallucinate, especially when it comes to answering questions about people. It hallucinated marvelously about me, saying things like that I did a stint at Oxford, that I was once a member of SRI, that I founded several companies I personally have never heard of, and that I was a billionaire (in my dreams!).
In what it came back with about me, there was a whole lot that was true, but it also yielded howlers as well as plenty of garden-variety untruths. And yet, I saw none of that as "troubling" or “concerning.” It was what it was: a bundle of text that a piece of software gave me when I fed it a series of words as input. Nothing more than that. Only someone naive about such technology would have taken its output as anything more, let alone as gospel truth.
The Fine Act of Corroborating Information
Second: After considering the credibility of the source, you, the grown-up adult, would then move to the next step of corroborating what you were told. And you would corroborate not with another AI (like Google’s Bard), but with some other type of source: a regular old human being, material you found via good old blue-link search engines from reliable sources, maybe hard-copy books that you own, and so on.
Only after you have done all of that and gotten yourself to feel that you are standing on solid ground (that is why editors exist in journalism) would you act on the information — and in the case of journalism, you would also provide the names of the people you quote, or at the very least describe their provenance.
So, no, these AIs and the companies that have created them are not in any way liable, as many are insisting they are. The people liable are those who take the output and use it without doing some good old-fashioned due diligence. Whether they asserted falsehoods or committed slander out of malice, or did so out of lazy negligence, the liability rests with them.
And if they are merely ignorant about how these AIs work, they need to get curious and educate themselves. Ignorance stopped being an excuse a long time ago. At our easy and free disposal we now have powerful search engines, smartphones we can pick up and use to call or text someone to verify; then there is email, social media, videos, podcasts, live discussion forums such as Clubhouse and Discord, and more.
What's Next With Generative AI: Frivolous Lawsuits
I have no doubt that a lawsuit will happen sooner or later (if several have not already been filed). But these lawsuits will go nowhere, because they will quickly reveal that those filing them labor under the basic illusion that the companies that make these AIs have claimed their AIs are reliable dispensers of truth, when in fact they have never made such claims. Anyone who has used these tools for any stretch of time can easily grasp that reality. More crucially, such lawsuits also reveal an attitude toward authority, or perceived authority, that turns us into passive recipients of what we hear and read (or of what we are kept from hearing or reading).
Today, if, say, a professor tells you a big fat lie and her students accept the lie and act on it and something bad happens, most of us would not hesitate to say that the professor is responsible for any damage caused by the students acting on her lie. Why? Because we operate in an epistemic ecosystem that has us trust people such as professors, experts, gurus and the like, and that treats their lying to us as a betrayal of that trust. And then, when an AI like ChatGPT arrives, we pull it into that epistemic ecosystem and expect it to behave according to that ecosystem’s rules.
It's Not the AI's Fault
But the solution here is not to slap a red warning label on ChatGPT and leave the rest of the ecosystem of expertise-mongering intact. Rather, we should ask: Is trusting anyone, especially professors and other cultivators of the obedience culture, a good thing, and has it ever been? I think not.
Yes, professors have considered opinions, perhaps even knowledge, and much to say that is useful, but that does not render them immune to scrutiny. And if one agrees that we should always do our best to make sure we are not being fooled, and that even when some "authority" tells us something we should not take it as gospel truth, what grounds do we have to point the finger at some software built, God knows how, by a private company that no one believes is an expert in anything?
As my hip hop friends would put it: