I've worked in social media and marketing—two industries that worship at the altar of imitation. One of the most common pieces of advice you’ll hear is to “study the top five people in your niche and replicate what they do.” Look at their hooks. Their tone. Their topics. Their camera angles. Reverse-engineer the formula and plug yourself in.
It’s positioned as strategic research. In practice, it’s how originality dies.
This copycat logic has spread like mold through every corner of the internet. Scroll long enough, and you start to feel like you’re watching a single mind rehearse its lines in a thousand different bodies. Everyone’s saying the same thing, using the same words, in the same order—then pretending they discovered it themselves.
And nowhere is this more obvious than in how people talk about AI.
The irony of “AI slop”
If you’ve spent time online lately, you’ve seen it: people declaring that “AI content is slop,” that it’s flooding the internet with garbage, that it’s replacing creativity with automation. And sure, there’s plenty of low-quality output out there. There’s always been. We had clickbait before ChatGPT; we had BuzzFeed quizzes before Midjourney.
But the irony is that the loudest critics are often producing the very same thing they claim to despise. They’re not generating insight—they’re recycling someone else’s outrage. The “AI slop” narrative itself has become slop.
It’s like the modern version of the “everyone’s saying” trick. Trump made that rhetorical move famous: make a vague claim, back it up with an invisible consensus, and repeat it until it feels true. Online discourse has mastered the same art. “Everyone knows AI art is bad.” “Everyone can tell AI writing is soulless.” These statements work not because they’re true, but because they’re familiar. And familiarity feels safe.
And in the attention economy, safety sells.
The algorithm loves déjà vu
Social media rewards sameness. If something performs well once, the system favors replicas. So people learn the moves—copy the cadence, mirror the framing, duplicate the take. It’s the easiest path to engagement. You don’t even need to think about what you believe anymore; you just have to sound like you belong to the dominant camp.
The truth is that most people don’t hate AI because they’ve thought deeply about it. They hate it because the right people told them to. They’re performing dissent, not practicing discernment. It’s groupthink with a creative filter over it.
And here’s the kicker: AI isn’t even the real problem. It’s a mirror. It reflects the laziness, fear, and mimicry already baked into the system. When you ask AI for something derivative, it delivers. When you prompt it with clarity and originality, it delivers that too.
So if the internet is flooded with regurgitated ideas, maybe it’s not the machine’s fault. Maybe we trained it on our behavior.
Being a nonconformist in a copycat culture
I’ve tried the copycat approach before—it’s soulless. The posts might get engagement, but they don’t build gravity. You can’t earn respect by parroting strategy; you earn it by having a point of view.
That’s why I’ve started using AI not to replicate what’s popular, but to challenge it. When someone posts a lazy anti-AI rant, I’ll often suggest they use ChatGPT to form a better argument. It’s funny, but also telling. The same tool they’re mocking could have helped them express their critique more intelligently.
It’s not about defending AI. It’s about defending thinking.
How to study what works without copying it
If you want to stand apart online—really stand apart—you still have to study what works. But you need to study why it works, not just what it looks like.
Here’s how I frame that difference when I’m building content myself:
- Study structure, not surface. Look at how someone's ideas are organized, not just how they're styled. Are they telling a story? Challenging an assumption? Creating tension? Once you understand the structure, you can rebuild it your own way.
- Invert the trend. When a take becomes too common, ask: what's the unspoken assumption here? What if the opposite is true? That's where originality lives.
- Add your fingerprints. If you're going to touch a trending topic, make it personal. Bring in your lived experience, your humor, your contradictions. Algorithms can't replicate that.
- Make the argument no one else is making. Instead of echoing "AI slop is bad," ask: why do we keep producing slop without AI? Why are we more forgiving when a human makes it?
Ask the questions that pull the thread, not the ones that tidy it up.
Prompts for nonconformists
If you use AI tools, don’t let them flatten your thinking. Use them to expand it. Here are a few prompt examples I often use to escape the copycat loop:
- “Everyone in my field is saying X. What’s a nuanced or underexplored counterpoint?”
- “Summarize the dominant narrative around [topic], then give me three creative ways to reframe it.”
- “If I wanted to challenge the most viral post on this topic without being confrontational, how might I structure that argument?”
- “Help me make this post sound more like an original essay and less like a LinkedIn trend.”
Or one of my favorites:
- “What questions would a true nonconformist ask about this topic before forming an opinion?”
These kinds of prompts turn AI into a sparring partner instead of a shortcut. The goal isn’t to sound smarter—it’s to think differently.
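If you want to make that sparring-partner habit repeatable, you can wrap a prompt like the first one in a small script. Here's a minimal sketch, not a prescription: it assumes the openai Python SDK (v1+) and an API key in your environment, and the function name, prompt wording, and model name are my own placeholders.

```python
# A small "sparring partner" helper: feed it the dominant take in your
# field and get back a counterpoint to wrestle with before you post.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()

def spar(topic: str, dominant_take: str) -> str:
    """Ask for an underexplored counterpoint instead of an echo."""
    prompt = (
        f"Everyone in my field is saying: {dominant_take}\n"
        f"Topic: {topic}\n"
        "What's a nuanced or underexplored counterpoint? "
        "List the unspoken assumptions first, then challenge one of them."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(spar("AI-generated content", "AI content is slop"))
```

The point of the script isn't automation; it's friction. You still have to read the counterpoint, decide what you actually believe, and write the post yourself.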
Escaping the slop cycle
The internet doesn’t need more opinions. It needs more thinking.
If you’re posting online, whether about AI or anything else, ask yourself: am I adding to the noise or clarifying the signal?
Calling out bad takes online might feel good in the moment, but it rarely changes minds. What does change minds is modeling what curiosity looks like in public.
That’s how you break the echo chamber—by refusing to feed it.
When I see people parroting the same recycled lines about AI, I remind myself that the real fight isn’t against the machines. It’s against mediocrity disguised as certainty.
The slop isn’t just the content—it’s the complacency behind it.
AI didn’t create that.
We did.