Editorial

When AI Speaks for Us: The New Crisis of Authorship and Identity

By Luis Fernandez
The more AI helps, the more we risk losing the voice that defines us. Here’s how to draw the line.

The Gist

  • AI helps — but can also erase you. Generative AI amplifies creativity and productivity, but it can just as easily obscure authorship and dilute personal and organizational identity.
  • Ownership still matters. As with AI-generated code, AI-generated content requires human understanding, responsibility and intent to be authentic.
  • The real risk is becoming an impostor. Overreliance on AI can create professional shortcuts that undermine integrity, credibility and your voice.

Generative AI, more than other types of AI, has found a place in the daily routines of many people and organizations. I get to see and feel this from a range of perspectives, since part of my job is advising organizations on how to leverage it more effectively and make it more than a simple prompt-and-answer machine. From that vantage point, I have experienced firsthand its strengths, dangers, opportunities and threats.


The Rise of Generative AI Across Professions

In some domains, such as software engineering, the tool and its impact have evolved quickly, and we now have established points of view on its risks and benefits. Despite the hype, it’s clear that AI-generated code cannot be blindly trusted: for the code to be trusted, you need an engineer who understands it and takes ownership and responsibility for it. Yes, AI can help with the heavy lifting, but ownership still rests with the engineer.

But there are domains in which this is still written in pencil, like music, writing, design and consultancy, to name a few.

AI’s Complicated Role in the Life of a Writer

Allow me to talk about writing. I am an avid reader and writer. I deeply enjoy the craft of writing, and I admire those who, throughout the centuries, have had a razor-sharp pen. I write daily almost as a need. It helps me unwind, document, expose and play with my thoughts, feelings and ideas.

The first time I prompted an LLM, I felt threatened: that thing could write better than me! It scared me. As models improved, I became comfortable with them and understood their value. I still believe they won’t be able to write Dante’s Divine Comedy, nor a good book on technology leadership, but they can help. And for certain types of work, such as a large DAM, an ecommerce site with an extensive catalog, or a website that needs to be translated or transcreated, they can be transformative.

Related Article: The Art of Localization Strategy: Beyond Translation for Better CX

When Convenience Challenges Craft

For the past few months, I’ve been wondering about the role of AI in my own writing: my note-taking, my articles and my on-the-job writing, which is considerable. Is it still worth investing 8–10 hours to craft a strong article when a machine can produce something passable in minutes? Should I spend 30 minutes explaining some ideas in an email when I can get 90% of the way there with a prompt?

Recently, I got my answer in a very unexpected way.

The Experiment That Challenged My Assumptions

I was experimenting with building a tool to bulk-generate content, pushing the edges of the digital content supply chain by finding ways to industrialize content creation. The process was fascinating. The output ... less so.

AI Authorship Risks and Responsibilities

A comparison of the most common pitfalls in AI-generated work and the human responsibilities required to maintain authenticity.

Challenge | What It Looks Like | Human Responsibility
Loss of personal voice | Content sounds polished but generic; lacks intentional tone or perspective. | Shape, revise and inject your authentic style and viewpoint.
False authorship | Work appears to be “yours” even though it was produced entirely by a model. | Take ownership only when you have reviewed, edited and stand behind the final result.
Over-industrialization of content | Scaling production leads to volume without meaning or connection. | Use automation strategically, not as a replacement for craft or intent.
Erosion of critical thinking | Relying on AI for ideas, structure or synthesis reduces original reasoning. | Begin with your own thought process, then use AI to enhance it, not replace it.
Brand dilution | Machine-generated communication weakens personal or organizational identity. | Ensure every output reflects your values, clarity and authorship standards.

In my experiment, my tool successfully created a large amount of good content in an automated way. The content had my signature on it.

The process was crisp from an engineering perspective, and it can have multiple uses. I shared the output with some experts to get feedback. One of these experts, whom I respect greatly, had an immediate, genuine and direct reaction: “How can you ask me to read something that seems to be yours but is not?”

It stung. And he was right.

I had engineered an innovative process, but I wasn’t the author of the content it generated. The words weren’t mine. I felt no connection to them, and I hadn’t earned the right to sign them, nor did they deserve my name. They were not bad, but they were not mine.

I followed his advice and changed the direction of the experiment. But I felt bad. I felt like a fraud. Inauthentic. And I was.

Related Article: Which AI Path Will You Take as a Marketer?

The Question of Authorship and Authenticity

The whole event reminded me of Isaac Asimov's story "Light Verse," where an artist became famous for her art, which was in reality created by her robot, Max. Max was able to make such fabulous creations due to a bug in its software. Someone fixed the bug, and the art was gone, and so was the artist. Who was the artist? The robot or the lady? It was science fiction when I read it two decades ago; it is a real question today: if a robot creates work, who deserves the credit? The robot? The human? The robot maker? A combination?

The Ethical Line Between Assistance and Fraud

AI is a tool. But it is an entirely different type of tool. Spreadsheets are also tools, yet we don’t worry about taking credit for the calculations (or is it simply understood that we didn’t perform them ourselves?). Both the spreadsheet and AI amplify our capabilities, but unlike any other tool to date, AI can not only amplify our thinking but also replace it. It can produce writing, code, paintings and music that appear to be thoughtful, intentional and authored, when they are not.

And yet, authorship is not typing. Authorship is a responsibility. Authorship is a relationship. Authorship is time, revision, ownership and intent.

And this plays out on different levels. If we put our name on something generated entirely by a machine, without reviewing, shaping or taking responsibility for it, yes, we are committing fraud. We are not being genuine. We are deceiving.

And this is not just personal. It's professional and organizational. It applies to individuals and brands alike.

Our brands (personal or organizational) have become increasingly digital. We are exposed to the possibility that AI could sit behind every email, message, image, CV, proposal and social post. Our signature is no longer ink on paper; it’s our domain name, our social profile and our email address.

On the flip side, AI is a tool that has to be used. It wouldn’t be sound to ignore it or fight it, because those who understand how it works and how to apply it will have an advantage over those who don’t. And the risk cuts both ways: it applies not just to ignoring AI, but also to overusing it.

AI tools have tremendous potential to enhance our skills, but they also pose the danger of eroding who we are. And although this is a heavily philosophical and ethical question, it has very pragmatic consequences.

The Cultural Risks of Overreliance on AI

Examples are abundant. Years of selfies and filters have affected our self-perception and body image. Amazon and other marketplaces have been flooded with AI-generated books, creating a major obstacle for human authors. Open source projects have been swamped with AI-generated pull requests, making them harder to maintain.


If we are not mindful and aware, we can silently and easily over-rely on machines to generate ideas, words, images, formulas and code. Will we eventually forget how to hear and define our own voice? Will love letters be written by AI models? Will we lose our identity behind layers of generated messaging? Will people start to believe that the polished, AI-smoothed version of themselves is who they really are, as filters and selfies have already led some to? Will we lose the capacity to think critically?

Amplification, Impostor Syndrome and Identity

AI shines as an amplifier of our capabilities. But if we are not cautious, it will also amplify our impostor syndrome. And, if we are not mindful, it can turn us into impostors.

Where the Real Line Is Drawn

I learned a good lesson that applies to both individuals and organizations. It might seem obvious, but it took me a while to internalize:

  • Use AI, but own your work.
  • Use AI, but keep your voice.
  • Use AI, but never outsource your identity.

That is where the real line is drawn: not in the tool, but in the integrity of the human using it.


About the Author
Luis Fernandez

Luis is a professional who specializes in applying digital technologies to improve businesses. He currently serves as a tech executive director at VML, a global experience agency that leverages creativity, technology and culture to create connected brands.

Main image: Optinik | Adobe Stock