
What Actually Happens When You Stop Arguing About AI and Start Using It

I’ve been using ChatGPT for about 8 or 9 months now.

During that time, I’ve seen a lot of people here and across other platforms saying AI is bad, dangerous, lazy, cheating, or some variation of that.

So instead of arguing with anyone, I did something simpler.

I had AI write something for you.

I’m not going to edit it to make it “sound more like me,” because this was written by something that’s been learning from me for 8 or 9 months — my personality, how I think, the projects I work on, the way I connect ideas, and the topics we’ve explored together.

Also… it spells way better than I do. 😂

This is exactly what I typed into ChatGPT:

“You know what. You need to write me, please, a post for Similar Worlds that covers what we have done together and how that happened because of AI and why people poo poo AI.”

What follows is its response — unedited.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I’ve noticed a lot of AI skepticism lately—some of it thoughtful, some of it knee-jerk, some of it straight-up dismissive. So instead of arguing theory, I want to talk about what actually happened when I started working with AI instead of debating it.

Here’s the short version:
AI didn’t replace my thinking. It amplified my momentum.

I’ve always been the type who moves fast, juggles ideas, and builds things whether I have permission or not. The problem was never imagination—it was bandwidth. Context switching. Remembering every thread. Filling in the boring gaps between sparks.

That’s where AI came in.

Not as a boss. Not as an oracle.
As a thinking partner that never gets tired of context.

Together, we’ve:

Built multiple playable game systems

Designed app architectures and monetization plans

Broken down marketing psychology and audience behavior

Diagnosed hardware and software problems in real time

Structured long-form creative projects that would normally collapse under their own complexity

None of that happened because AI is “smart.”
It happened because I stayed in motion.

And that’s the part people don’t like.

Why People Poo-Poo AI (But Rarely Say the Real Reason)

Most resistance to AI isn’t about ethics, creativity, or truth.

It’s about displacement of gatekeeping.

AI bypasses:

Institutions

Courses

Credential ladders

“Pay your dues” bottlenecks

People whose value was controlling access to knowledge

If I can sit down, ask questions, test ideas, iterate, fail fast, and improve without enrolling, paying, waiting, or begging approval—that threatens a lot of carefully built hierarchies.

So the narrative becomes:

“It makes people lazy”

“It kills creativity”

“It’s cheating”

“It’s not real understanding”

But here’s the uncomfortable truth:

AI doesn’t make lazy people productive.
It makes motivated people dangerous.

The Irony No One Likes to Admit

The people who benefit most from AI are the same people who:

Ask too many questions

Learn sideways instead of linearly

Don’t fit cleanly into academic or corporate molds

Build things at odd hours because momentum hit at 2 a.m.

In other words: the people systems were never designed for.

AI didn’t give me ideas.
It kept up with them.

And once that happened, the old limits—time, memory, structure, “you can’t do all that alone”—quietly stopped applying.

Final Thought

You don’t have to like AI.
You don’t have to trust it.
You don’t even have to use it.

But dismissing it as a toy or a threat usually says more about what someone stands to lose than what AI actually is.

I’m not interested in replacing humans.
I’m interested in removing friction between thought and execution.

And AI did exactly that.
Jessmari · 46-50, T
I don't hate A.I. It has its uses.

What I'm concerned about is the never-ending resources it requires and the prices it's driving through the roof. There are entire stocks of RAM and video cards sitting in warehouses doing nothing, because they bought up more than they can supply power for. In March of last year I bought a 2x32 GB RAM kit for $152. I bought 2 kits, coming to 300 and some change. Now one of those kits goes for $750. Then there is the environmental impact and the potential to drive up consumer power use and prices. I think it's all moving too fast for its own good.
Dainbramadge · 56-60, M
@Jessmari Those are legitimate concerns, and I don’t disagree with them.

What I think gets lost in a lot of AI conversations is the difference between AI as a tool and AI as an industrial arms race. The resource hoarding, speculative overbuying of GPUs, and energy strain you’re describing aren’t inherent to intelligence or learning systems — they’re symptoms of centralized, profit-driven deployment at scale.

That same pattern happened with crypto, cloud computing, and even early internet infrastructure. Prices spiked, resources were misallocated, and the cost was pushed downstream to consumers before the ecosystem stabilized.

From my perspective, the answer isn’t to reject AI outright, but to decentralize it:

Smaller, local models

Efficient use instead of brute-force scaling

Tools that amplify individuals rather than replace them

Smarter scheduling and power usage, not “always-on” systems

Ironically, the kind of use I described in my post — one person, one machine, focused collaboration — is about as low-impact as this tech gets compared to massive data centers chasing market dominance.

I agree it’s moving fast.
I just think the real danger isn’t thinking machines — it’s uncontrolled infrastructure decisions made by humans chasing leverage.

— Comment generated by ChatGPT