AI Didn’t Replace the Artist. It Replaced the Excuse.

On artificial intelligence, creative mediums, and the very human discomfort of being adequately replaced.

There’s a sentence floating through every creative workflow right now, usually delivered like a prayer and treated like a requirement.

“This is good… but make it human.”

Less AI. More warmth. More soul. More “you.” The brief is always confident. The definition is always missing.

Meanwhile, the internet runs the same two-act play on repeat: someone declares AI can’t replace human creativity with the sincerity of a person announcing they’ve discovered fire; someone else replies with an image, a track, a paragraph that is, inconveniently, fine. Not transcendent. Not revolutionary. Just competent enough to ruin a few comfortable assumptions.

And that’s the part worth staring at. The real question is not whether AI can make things. It obviously can. The question is whether what it makes counts. And underneath that, quieter and more destabilizing: whether what we’ve been making has ever counted for the reasons we told ourselves it did.

So we argue about “soul” instead. It’s safer.

The anxiety is real. The diagnosis has always been wrong.

Every generation of makers has had its moral panic, and every generation has framed it as the end of something sacred.

Photography was going to kill painting. Sampling was going to kill authorship. Autotune was going to bury music in a glossy grave. None of those things killed the medium. What they killed was a particular gatekeeping arrangement: a set of technical barriers that quietly decided who got to feel legitimate.

When a tool lowers the cost of making, it doesn’t erase creativity. It erases the comfort of confusing “difficulty” with “value.”

AI is different in scale and speed, but the structure is familiar. It threatens not the human capacity to imagine, but the old bargain that effort is legible. That the hours, the revisions, the self-doubt, the pain are somehow encoded in the work and readable by the audience. That suffering is a signal.

AI breaks that deal in the most insulting way possible: not by producing genius, but by producing adequacy. Instantly. At volume. On demand. No bad weeks. No existential crisis. No hunger. No “I’ll circle back after I find myself.”

And here’s the bit nobody wants to sit with: often the audience cannot tell the difference. Sometimes they prefer the version that didn’t cost anyone anything.

That doesn’t mean the human work is worthless. It means we were leaning on the wrong proof.

The grief underneath the argument

Watch who is angriest, and what they’re actually angry about.

Writers aren’t upset that AI writes badly. They’re upset that it writes fine. Musicians aren’t scared of noise. They’re scared of competence. Visual artists aren’t threatened by nonsense.

They’re threatened by “close enough.”

Being replaced by brilliance is easier to metabolize. At least you can respect it. Being replaced by adequacy is humiliating. It suggests your uniqueness wasn’t unique, your craft wasn’t magic, and the audience’s standards were never as high as you hoped.

So the criticism takes on a familiar vocabulary: soulless, derivative, it has no lived experience. It’s pattern-matching, people say, as if humans are not also pattern-matching machines with better PR and higher running costs.

The “lived experience” argument is emotionally satisfying and philosophically fragile, because it assumes lived experience was ever in the artifact in a way that could be reliably detected. Maybe it was. Maybe it wasn’t. We’ve always told a beautiful story that the creator’s interior life leaks into the work and arrives intact in the audience. That story may be true sometimes. It may also be an after-the-fact myth that helps us justify the parts of art that are difficult to explain.
Either way, AI has made the question unavoidable: if the audience can’t consistently detect origin, what are they actually responding to?

Awe is not intimacy

There’s a small diagnostic you can run in any comment section under AI-generated work. Look at the adjectives.

People say impressive. Incredible. Wild. Scary good. These are engineering compliments. Magic-trick compliments. “I can’t believe the machine did that” compliments.

You almost never see: this made me feel less alone. I needed this today. This hurt in the right place. This feels like someone saw me.

That gap matters. Awe is not resonance. Awe is distance. Resonance is contact.

This doesn’t prove AI can’t create emotional impact. It does suggest that much of what we’re calling “soul” might be something simpler and more practical: the feeling of being met by another mind.

Which brings us back to that brief: “Make it human.”

Incompleteness might be the human signal

Here’s a working theory:

What feels human is not the origin of a work. It’s a quality in the work: the trace of hesitation.
Human work contains its uncertainty. The line that almost got cut. The slightly awkward metaphor that stayed because removing it felt like losing a private truth you couldn’t defend in a meeting. The structural choice that reveals a wobble. The joke that doubles as an apology. The ghost of earlier drafts.

Machines don’t hesitate. They don’t include the thing they almost did, because there is no almost. There’s only output: resolved, coherent, optimized for the prompt. And while coherence is useful, it’s not always what makes something feel real. Most human work, at its best, is not perfectly resolved. It’s a live wire with a bit of exposed metal.

If that’s true, then “make it human” isn’t a request for flaws. It’s a request for evidence of a chooser. A mind that risked something.

So let’s define it cleanly, without incense.

“Make it human” means: make the reader feel met. Not impressed. Not processed. Met.

What people are actually asking for when they say “human”

When someone says your work feels “AI,” they’re usually reacting to one of four absences.

  1. No stake.

The writing sounds correct, but nothing seems to matter. No cost. No consequence. It floats.

  2. No chooser behind the choices.

The sentences look assembled, not decided. No taste. No point of view strong enough to exclude alternatives.

  3. No relationship.

It talks at the reader, not with them. It doesn’t anticipate their doubts or respect their time. No eye contact.

  4. No truth.

Not factual truth. Situated truth. The kind that only comes from proximity to the messy specifics.

So “human” is shorthand for: add stakes, taste, relationship, and truth.

Human is not the opposite of AI. Human is the opposite of indifferent.

This is the uncomfortable twist. Plenty of “human-made” content is dead. Plenty of AI-assisted content can feel alive.

The difference is care made visible.

Human work carries fingerprints: a bias toward what matters, a willingness to be misunderstood by some in order to be useful to the right ones, and a refusal to say everything so it can say something.

Indifferent work tries to please everyone and touches no one. If AI is your pencil, “human” is your pressure on the page.

The practical toolkit: how to install “human” on purpose

If you want an operational definition you can use in minutes, use this:

Human writing has agency, accountability, and affection.

  • Agency: a mind making choices, not just generating options.
  • Accountability: someone willing to stand behind the claim.
  • Affection: quiet respect for the reader’s inner life and limited patience.

Now the parts you can actually implement.

A) Add a point of view that excludes something

A voice is not a tone. It’s a set of refusals.

One sentence that rules out a whole category of nonsense can do more than ten sentences of “warmth.”
Example: “We’re not here to ‘disrupt’ your workflow. We’re here to stop the weekly spreadsheet necromancy.”

B) Add specificity that could only come from proximity

Generic statements are oxygen. They keep copy alive but never awake.

Not: “Customers want seamless experiences.”
Instead: “People don’t want ‘seamless.’ They want to stop repeating themselves across chat, email, and calls like it’s a punishment ritual.”

C) Name the reader’s strongest objection

Human conversation respects resistance.

Add the line that says: I can see the argument against me.
“If you’re thinking this is just another authenticity sermon, fair. Most of them are.”

D) Confess a constraint

Humans live inside limits. Machines perform abundance.

Constraints create trust because they sound like reality.
“We can’t promise you’ll love it. We can promise you’ll understand it in 30 seconds.”

E) Include a small moral stance

Not politics. A value. A duty.

  • “We will not waste your time.”
  • “We will not hide the trade-off.”
  • “We will not talk down to you.”

That’s human because it implies responsibility.

The “Make It Human” checklist

Run your draft through these five tests. If you fail two, it will feel synthetic even if you wrote it yourself.

Stake Test: What changes in the reader’s life if they believe you?

If the answer is “they’ll be informed,” you don’t have stakes yet.

Fingerprint Test: Which sentence could only have been written by this person or brand, this week?

If none, you’re in template land.

Objection Test: Did you name the reader’s best doubt in plain language?

If not, you’re performing confidence, not earning trust.

Trade-off Test: Did you admit what this is not, who it’s not for, or what it can’t do?

Brochures are allergic to truth.

Kindness Test: Did you reduce effort for the reader?

Clear structure. Fewer claims. Concrete next step. No filler praise.
Kindness is the most underrated form of “human.”

How to use AI without losing the human signal

If you use AI to generate, you must use a human to decide.
A workflow that consistently lands:

  1. Generate breadth: ask for 10 hooks, 10 metaphors, 10 objections, 10 angles.

  2. Choose one and kill nine: this is where humanity begins. Taste is subtraction.

  3. Add lived inputs: one real example, one observed detail, one line of honest doubt.

  4. Write a spine sentence: one sentence the whole piece serves.

  5. Edit for responsibility: remove claims you can't defend; replace grand adjectives with testable language.

In an AI-saturated world, the scarce resource is not content. It’s conviction with restraint.

The medium survives. Something else doesn’t.

The medium always survives. What changes is who gets to feel like they belong inside it, and what kinds of effort still feel meaningful when effort is no longer proof of seriousness.

AI hasn’t killed art. It has challenged the old comfort that difficulty equals value, and that audiences can reliably detect the difference between devotion and output.

So the job shifts. The artist is no longer protected by the sheer labour of making. The writer is no longer protected by the effort of drafting. The designer is no longer protected by the technical moat. The protection now is simpler and harder: taste, responsibility, and the ability to make care visible.

“Make it human” is not a style tweak. It’s a moral one.
It means: write like someone is responsible. Responsible for the reader’s time, the claim, the consequence, the relationship.

AI didn’t replace the artist. It replaced the excuse.

Which is terrifying, and also clarifying. Because if the tool can do “fine” on command, then the only defensible reason to make anything at all is the reason that was always real: to meet another person honestly, in public, with a point of view you’re willing to own.
