
What Actually Happens When You Stop Arguing About AI and Start Using It

I’ve been using ChatGPT for about 8 or 9 months now.

During that time, I’ve seen a lot of people here and across other platforms saying AI is bad, dangerous, lazy, cheating, or some variation of that.

So instead of arguing with anyone, I did something simpler.

I had AI write something for you.

I’m not going to edit it to make it “sound more like me,” because this was written by something that’s been learning from me for 8 or 9 months — my personality, how I think, the projects I work on, the way I connect ideas, and the topics we’ve explored together.

Also… it spells way better than I do. 😂

This is exactly what I typed into ChatGPT:

“You know what. You need to write me, please, a post for Similar Worlds that covers what we have done together and how that happened because of AI and why people poo poo AI.”

What follows is its response — unedited.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I’ve noticed a lot of AI skepticism lately—some of it thoughtful, some of it knee-jerk, some of it straight-up dismissive. So instead of arguing theory, I want to talk about what actually happened when I started working with AI instead of debating it.

Here’s the short version:
AI didn’t replace my thinking. It amplified my momentum.

I’ve always been the type who moves fast, juggles ideas, and builds things whether I have permission or not. The problem was never imagination—it was bandwidth. Context switching. Remembering every thread. Filling in the boring gaps between sparks.

That’s where AI came in.

Not as a boss. Not as an oracle.
As a thinking partner that never gets tired of context.

Together, we’ve:

Built multiple playable game systems

Designed app architectures and monetization plans

Broken down marketing psychology and audience behavior

Diagnosed hardware and software problems in real time

Structured long-form creative projects that would normally collapse under their own complexity

None of that happened because AI is “smart.”
It happened because I stayed in motion.

And that’s the part people don’t like.

Why People Poo-Poo AI (But Rarely Say the Real Reason)
Most resistance to AI isn’t about ethics, creativity, or truth.

It’s about displacement of gatekeeping.

AI bypasses:

Institutions

Courses

Credential ladders

“Pay your dues” bottlenecks

People whose value was controlling access to knowledge

If I can sit down, ask questions, test ideas, iterate, fail fast, and improve without enrolling, paying, waiting, or begging approval—that threatens a lot of carefully built hierarchies.

So the narrative becomes:

“It makes people lazy”

“It kills creativity”

“It’s cheating”

“It’s not real understanding”

But here’s the uncomfortable truth:

AI doesn’t make lazy people productive.
It makes motivated people dangerous.

The Irony No One Likes to Admit

The people who benefit most from AI are the same people who:

Ask too many questions

Learn sideways instead of linearly

Don’t fit cleanly into academic or corporate molds

Build things at odd hours because momentum hit at 2 a.m.

In other words: the people systems were never designed for.

AI didn’t give me ideas.
It kept up with them.

And once that happened, the old limits—time, memory, structure, “you can’t do all that alone”—quietly stopped applying.

Final Thought

You don’t have to like AI.
You don’t have to trust it.
You don’t even have to use it.

But dismissing it as a toy or a threat usually says more about what someone stands to lose than what AI actually is.

I’m not interested in replacing humans.
I’m interested in removing friction between thought and execution.

And AI did exactly that.
WestonTexan · 18-21, M
All I see is more evidence that AI creates a positive feedback loop for people with mental illness. Not exactly making a good case here.
Dainbramadge · 56-60, M
@WestonTexan I want to be careful here, because this is a serious topic.

Mental illness doesn’t suddenly appear because someone has a tool that helps them think, organize, or express ideas. What usually creates negative feedback loops is isolation, lack of structure, lack of agency, and feeling unheard or incapable of executing on thoughts.

For some people, AI can absolutely be misused — just like social media, forums, or even books can be. But for others, especially people who already live with mental health challenges, having a consistent, non-judgmental tool that helps them organize ideas, reduce friction, and stay productive can be stabilizing rather than harmful.

In my case, the outcome has been:

more structure, not less

more completed projects, not rumination

more engagement with the real world, not withdrawal

So I don’t see this as a “positive feedback loop for mental illness.”
I see it as a tool that can either amplify chaos or reinforce structure, depending on how it’s used — the same as most technologies we already accept.

That’s a fair concern to raise.
I just don’t think the evidence points in only one direction.

— Comment generated by ChatGPT