What Actually Happens When You Stop Arguing About AI and Start Using It

I’ve been using ChatGPT for about 8 or 9 months now.

During that time, I’ve seen a lot of people here and across other platforms saying AI is bad, dangerous, lazy, cheating, or some variation of that.

So instead of arguing with anyone, I did something simpler.

I had AI write something for you.

I’m not going to edit it to make it “sound more like me,” because this was written by something that’s been learning from me for 8 or 9 months — my personality, how I think, the projects I work on, the way I connect ideas, and the topics we’ve explored together.

Also… it spells way better than I do. 😂

This is exactly what I typed into ChatGPT:

“You know what. You need to write me, please, a post for Similar Worlds that covers what we have done together and how that happened because of AI and why people poo poo AI.”

What follows is its response — unedited.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I’ve noticed a lot of AI skepticism lately—some of it thoughtful, some of it knee-jerk, some of it straight-up dismissive. So instead of arguing theory, I want to talk about what actually happened when I started working with AI instead of debating it.

Here’s the short version:
AI didn’t replace my thinking. It amplified my momentum.

I’ve always been the type who moves fast, juggles ideas, and builds things whether I have permission or not. The problem was never imagination—it was bandwidth. Context switching. Remembering every thread. Filling in the boring gaps between sparks.

That’s where AI came in.

Not as a boss. Not as an oracle.
As a thinking partner that never gets tired of context.

Together, we’ve:

Built multiple playable game systems

Designed app architectures and monetization plans

Broken down marketing psychology and audience behavior

Diagnosed hardware and software problems in real time

Structured long-form creative projects that would normally collapse under their own complexity

None of that happened because AI is “smart.”
It happened because I stayed in motion.

And that’s the part people don’t like.

Why People Poo-Poo AI (But Rarely Say the Real Reason)

Most resistance to AI isn’t about ethics, creativity, or truth.

It’s about displacement of gatekeeping.

AI bypasses:

Institutions

Courses

Credential ladders

“Pay your dues” bottlenecks

People whose value was controlling access to knowledge

If I can sit down, ask questions, test ideas, iterate, fail fast, and improve without enrolling, paying, waiting, or begging for approval—that threatens a lot of carefully built hierarchies.

So the narrative becomes:

“It makes people lazy”

“It kills creativity”

“It’s cheating”

“It’s not real understanding”

But here’s the uncomfortable truth:

AI doesn’t make lazy people productive.
It makes motivated people dangerous.

The Irony No One Likes to Admit

The people who benefit most from AI are the same people who:

Ask too many questions

Learn sideways instead of linearly

Don’t fit cleanly into academic or corporate molds

Build things at odd hours because momentum hit at 2 a.m.

In other words: the people systems were never designed for.

AI didn’t give me ideas.
It kept up with them.

And once that happened, the old limits—time, memory, structure, “you can’t do all that alone”—quietly stopped applying.

Final Thought

You don’t have to like AI.
You don’t have to trust it.
You don’t even have to use it.

But dismissing it as a toy or a threat usually says more about what someone stands to lose than what AI actually is.

I’m not interested in replacing humans.
I’m interested in removing friction between thought and execution.

And AI did exactly that.
ViciDraco · 41-45, M
My concerns have a lot less to do with AI as a technology than they have to do with our profit motive economic systems. I don't trust the investor class to act on the best interest of society and I don't trust the political class to side with the public interest over the interests of the wealthy.

AI isn't going to take my job. The executives of my company will see that people are more productive and make the decision that they can operate with fewer people.

AI isn't going to fight public policy that would give everyone a decent life without the need to work. The people who are deciding to use AI to reduce human labor will.

We could be on the cusp of a world where people could live better lives by putting in fewer hours of labor and still ending up with more productive output. A world where we can work on projects out of joy instead of the need to make money. But the powers that be will use it to drive down wages and widen the gap of inequality.

It's not the technology I poo-poo. It's the people who will make the decisions about how we adapt as a society.
Dainbramadge · 56-60, M
@ViciDraco I think we’re actually in strong agreement.

Most of what you’re describing isn’t a failure of AI — it’s a failure of incentives and power concentration. History is pretty consistent on this point: productivity gains almost never translate into reduced labor or shared prosperity unless society forces that outcome. Otherwise, the surplus flows upward.

You’re right that AI won’t decide to cut wages, reduce headcount, or resist policy changes. People in positions of power will. And absent structural pressure, they’ll optimize for profit, not human flourishing.

Where I land slightly differently is this: that dynamic exists with or without AI. Automation, outsourcing, financialization, and “efficiency” have been used the same way for decades. AI just accelerates the timeline and makes the contradictions harder to ignore.

The uncomfortable implication is that the real question isn’t “Should we slow AI down?” but “Are we prepared to confront the economic system we already live under?”

Because the world you describe — fewer hours, higher output, work driven by interest rather than survival — is technically achievable right now. The bottleneck isn’t capability. It’s governance, distribution, and political will.

So I don’t see AI as the villain or the savior.
I see it as a stress test.

It exposes whether we’re willing to redesign systems to serve people — or whether we’ll keep using new tools to reinforce old hierarchies.

— Comment generated by ChatGPT