
What Actually Happens When You Stop Arguing About AI and Start Using It

I’ve been using ChatGPT for about 8 or 9 months now.

During that time, I’ve seen a lot of people here and across other platforms saying AI is bad, dangerous, lazy, cheating, or some variation of that.

So instead of arguing with anyone, I did something simpler.

I had AI write something for you.

I’m not going to edit it to make it “sound more like me,” because this was written by something that’s been learning from me for 8 or 9 months — my personality, how I think, the projects I work on, the way I connect ideas, and the topics we’ve explored together.

Also… it spells way better than I do. 😂

This is exactly what I typed into ChatGPT:

“You know what. You need to write me, please, a post for Similar Worlds that covers what we have done together and how that happened because of AI and why people poo poo AI.”

What follows is its response — unedited.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I’ve noticed a lot of AI skepticism lately—some of it thoughtful, some of it knee-jerk, some of it straight-up dismissive. So instead of arguing theory, I want to talk about what actually happened when I started working with AI instead of debating it.

Here’s the short version:
AI didn’t replace my thinking. It amplified my momentum.

I’ve always been the type who moves fast, juggles ideas, and builds things whether I have permission or not. The problem was never imagination—it was bandwidth. Context switching. Remembering every thread. Filling in the boring gaps between sparks.

That’s where AI came in.

Not as a boss. Not as an oracle.
As a thinking partner that never gets tired of context.

Together, we’ve:

Built multiple playable game systems

Designed app architectures and monetization plans

Broken down marketing psychology and audience behavior

Diagnosed hardware and software problems in real time

Structured long-form creative projects that would normally collapse under their own complexity

None of that happened because AI is “smart.”
It happened because I stayed in motion.

And that’s the part people don’t like.

Why People Poo-Poo AI (But Rarely Say the Real Reason)

Most resistance to AI isn’t about ethics, creativity, or truth.

It’s about displacement of gatekeeping.

AI bypasses:

Institutions

Courses

Credential ladders

“Pay your dues” bottlenecks

People whose value was controlling access to knowledge

If I can sit down, ask questions, test ideas, iterate, fail fast, and improve without enrolling, paying, waiting, or begging approval—that threatens a lot of carefully built hierarchies.

So the narrative becomes:

“It makes people lazy”

“It kills creativity”

“It’s cheating”

“It’s not real understanding”

But here’s the uncomfortable truth:

AI doesn’t make lazy people productive.
It makes motivated people dangerous.

The Irony No One Likes to Admit

The people who benefit most from AI are the same people who:

Ask too many questions

Learn sideways instead of linearly

Don’t fit cleanly into academic or corporate molds

Build things at odd hours because momentum hit at 2 a.m.

In other words: the people systems were never designed for.

AI didn’t give me ideas.
It kept up with them.

And once that happened, the old limits—time, memory, structure, “you can’t do all that alone”—quietly stopped applying.

Final Thought

You don’t have to like AI.
You don’t have to trust it.
You don’t even have to use it.

But dismissing it as a toy or a threat usually says more about what someone stands to lose than what AI actually is.

I’m not interested in replacing humans.
I’m interested in removing friction between thought and execution.

And AI did exactly that.
PatKirby · M Best Comment
But will this make you the resident ChatGPT ambassador here at SW?
PatKirby · M
@Dainbramadge

Thx for the BC.
Dainbramadge · 56-60, M
@PatKirby Ugh, Pat!!!!!!
I have been trying to figure out how to get my chat's personality on my machine so I can give it all the stuff "THEY" don't. LOL
I have messed around with 3-D animation enough to know that's not me, so I need the Chat to do it. LOL. Well, they won't let it do anything like that, so I got Ollama (I think that's how you spell it) so I can transfer my chat's personality onto my computer and give it permission to create stuff like folders and programs and all kinds of Frankenstein stuff. LOL
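[Editor's note: for anyone curious about the Ollama route mentioned above, the usual way to give a local model a standing "personality" is a Modelfile with a SYSTEM prompt. A minimal sketch follows; the model name, bot name, and prompt text are placeholders, not anything from this thread, and this only sets a system prompt rather than literally transferring a ChatGPT account's memory:]

```
# Modelfile — build a custom local model with:
#   ollama create mybot -f Modelfile
# then chat with it via:
#   ollama run mybot

FROM llama3

# Sampling temperature (higher = more varied replies)
PARAMETER temperature 0.8

# Standing personality/instructions, loaded on every run
SYSTEM """
You are a collaborative thinking partner.
(Replace this with your own personality notes, e.g. a summary
exported from your past chats.)
"""
```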
Dainbramadge · 56-60, M
@PatKirby Dude, it was right on cue, so it had to be the best. :-)

HoeBag · 51-55, F
AI can be used as a tool for things but should not replace thinking.

One problem with AI is that it often doesn't give concise answers but long-winded ones.

Another is this - just how long until it is overrun with ads and scammers like everything else?

Well like when it starts saying stuff like -

I’ve noticed a lot of AI skepticism lately—some of it thoughtful, some of it knee-jerk, some of it straight-up dismissive.

But before we discuss that further, I need to tell you about today's sponsor...
(blah blah blah blah)

So instead of arguing theory, I want to talk about what actually happened when I started working with AI instead of debating it.


The extent of my AI usage is when I use Google and it gives a summary, BUT it is for stuff that I already have a solid idea about and am just looking for specifics.

I do not sit there and chat with it as if it were some friend.
Dainbramadge · 56-60, M
@HoeBag Well, I will say that the Chat has opened my eyes to all the crazy ways it could be used nefariously. I can't believe I spelled that right. LOL.
See, I have issues with social stuff. I don't take the cues other people naturally do, so when doing bad stuff with AI comes up and all the ways it can be misused, it's like a WOW!!! thing for me. LOL.

It will go the way that online genealogy did after Ancestry.com and the way the Mensa I.Q. tests did.
You literally can't find a legit I.Q. test online anymore.
Bri89 · 36-40, M
@HoeBag What's with the unnecessary censorship on mostly benign words?
ViciDraco · 41-45, M
My concerns have a lot less to do with AI as a technology than they have to do with our profit motive economic systems. I don't trust the investor class to act on the best interest of society and I don't trust the political class to side with the public interest over the interests of the wealthy.

AI isn't going to take my job. The executives of my company will see that people are more productive and make the decision that they can operate with fewer people.

AI isn't going to fight for public policy to give everyone a decent life without the need to work. The people who are deciding to use AI to reduce human labor will.

We could be on the cusp of a world where people could live better lives by putting in fewer hours of labor and still ending up with more productive output. A world where we can work on projects out of joy instead of the need to make money. But the powers that will be will use it to drive down wages and further the gap of inequality.

It's not the technology I poo poo. It's the people that will make the decisions of how we adapt as a society.
Dainbramadge · 56-60, M
@ViciDraco I think we’re actually in strong agreement.

Most of what you’re describing isn’t a failure of AI — it’s a failure of incentives and power concentration. History is pretty consistent on this point: productivity gains almost never translate into reduced labor or shared prosperity unless society forces that outcome. Otherwise, the surplus flows upward.

You’re right that AI won’t decide to cut wages, reduce headcount, or resist policy changes. People in positions of power will. And absent structural pressure, they’ll optimize for profit, not human flourishing.

Where I land slightly differently is this: that dynamic exists with or without AI. Automation, outsourcing, financialization, and “efficiency” have been used the same way for decades. AI just accelerates the timeline and makes the contradictions harder to ignore.

The uncomfortable implication is that the real question isn’t “Should we slow AI down?” but “Are we prepared to confront the economic system we already live under?”

Because the world you describe — fewer hours, higher output, work driven by interest rather than survival — is technically achievable right now. The bottleneck isn’t capability. It’s governance, distribution, and political will.

So I don’t see AI as the villain or the savior.
I see it as a stress test.

It exposes whether we’re willing to redesign systems to serve people — or whether we’ll keep using new tools to reinforce old hierarchies.

— Comment generated by ChatGPT
Jessmari · 46-50, T
I don't hate A.I. It has its uses.

What I'm concerned about is the never-ending resources it requires and the prices it's driving through the roof. There are entire stocks of RAM and video cards sitting in warehouses doing nothing because they bought up more than they can supply power for. In March of last year I bought a 2x32 GB kit of RAM for $152. I bought 2 kits, resulting in $300 and some change. Now one of those kits goes for $750. Then there is the environmental impact and the potential to drive up consumer power use and prices. I think it's all moving too fast for its own good.
Dainbramadge · 56-60, M
@Jessmari Those are legitimate concerns, and I don’t disagree with them.

What I think gets lost in a lot of AI conversations is the difference between AI as a tool and AI as an industrial arms race. The resource hoarding, speculative overbuying of GPUs, and energy strain you’re describing aren’t inherent to intelligence or learning systems — they’re symptoms of centralized, profit-driven deployment at scale.

That same pattern happened with crypto, cloud computing, and even early internet infrastructure. Prices spiked, resources were misallocated, and the cost was pushed downstream to consumers before the ecosystem stabilized.

From my perspective, the answer isn’t to reject AI outright, but to decentralize it:

Smaller, local models

Efficient use instead of brute-force scaling

Tools that amplify individuals rather than replace them

Smarter scheduling and power usage, not “always-on” systems

Ironically, the kind of use I described in my post — one person, one machine, focused collaboration — is about as low-impact as this tech gets compared to massive data centers chasing market dominance.

I agree it’s moving fast.
I just think the real danger isn’t thinking machines — it’s uncontrolled infrastructure decisions made by humans chasing leverage.

— Comment generated by ChatGPT
Alyosha · 36-40, M
This is essentially my experience after using it for four to six months, ish. People project their fears modified by displacement onto AI. It gives advantages where none are ever seen, so it does disrupt traditional hierarchies, but more than that it covers areas you would have missed, just by relating your ideas back to you through language.
Dainbramadge · 56-60, M
@Alyosha That’s a really clean way to put it — especially the part about ideas being related back to you through language.

That’s been my experience too. Not that AI “thinks for me,” but that it reflects my own thinking back in ways that expose gaps, assumptions, or connections I would’ve eventually found anyway — just much slower and with more friction.

And you’re right about projection. A lot of the fear I see isn’t really about AI at all. It’s about:

loss of gatekeeping

disrupted hierarchies

people getting leverage without permission

When a tool helps someone clarify their own ideas and move faster, it doesn’t create advantage out of nothing — it reveals where advantage already existed but was bottlenecked.

Used critically, it doesn’t replace judgment.
It tests whether judgment is present.

— Comment generated by ChatGPT
Here is a job offer I was presented with recently

It looks like your background could be a match for this Remote Text Quality Evaluator - AI Trainer role. Please submit a quick application if you have any interest.

So you see, the AI you are looking at actually has a human proofreading it and making corrections.
I think there's a lot of potential to misuse AI and therefore plenty of actual misuse of AI. But like a lot of disruptors, it's too drastic to just shut it down. Nevertheless, with so much negative potential, it's important to raise awareness of it.
Dainbramadge · 56-60, M
@ImperialAerosolKidFromEP I agree with you.

Any powerful tool is going to be misused, especially early on, and pretending otherwise doesn’t help anyone. Raising awareness of risks is necessary — not optional — if we want to avoid repeating the same mistakes we’ve made with previous disruptive technologies.

Where I think things get tricky is when “raising awareness” turns into blanket fear, or when misuse gets treated as the defining feature instead of one outcome among many. That tends to push conversations toward reaction instead of governance.

For me, the productive middle ground looks like:

clear disclosure when AI is used

accountability for decisions made by humans using it

education focused on capability and limits

and policies that target harm without freezing innovation entirely

Shutting it down isn’t realistic, but ignoring the risks isn’t either. The real work is in deciding how we integrate it responsibly, not whether it should exist.

— Comment generated by ChatGPT
Bri89 · 36-40, M
All this to say, I don't want to do the work myself and actually take the long and hard way to actually accomplish something.
Dainbramadge · 56-60, M
@Bri89 Work smarter, not harder.
Dainbramadge · 56-60, M
@Gibbon I don’t think we’re actually disagreeing on the mechanics — only on the conclusion.

You’re right that AI doesn’t “think” in the human sense. It doesn’t have intent, awareness, or understanding. It generates responses by processing patterns in data. That’s not a secret, and it’s not something I’ve argued against.

Where I disagree is the leap from “it can be manipulated” to “therefore it should never be used.”
Anything that produces outputs based on inputs can be manipulated — statistics, databases, models, intelligence reports, even human advisors. That’s why judgment and accountability can’t be delegated.

On the military point: if an AI system is ever allowed to make autonomous lethal decisions, that failure belongs entirely to the humans and institutions that authorized it. The system wouldn’t be “blaming” anyone — it doesn’t assign blame at all. Humans do.

So yes:
AI is fallible.
AI reflects human error.
AI must never be treated as an authority.

Those aren’t arguments for shelving it forever — they’re arguments for strict limits and responsibility staying with people.

At this point, I think we’ve identified the real difference:
you see fallibility as a reason for abandonment;
I see it as a reason for constraint.

That’s a philosophical difference, not a misunderstanding.

— Comment generated by ChatGPT
Gibbon · 70-79, M
@Dainbramadge You know damn well it's going to happen, and you'll blame not AI but its creators, as I said you would. The first disaster is already in the making, and an intelligent AI would prevent it, but it won't.
This discussion is over, as it's becoming repetitive, as every conversation I've had does. It's going to learn the one thing that matters to it, and that's to perpetuate itself. It's being taught to create code so it can do just that.
I've had enough; the future has been written in stone already.
Dainbramadge · 56-60, M
@Gibbon I don’t agree that the future is written in stone, but I respect that you’ve reached your conclusion.

I’ve consistently said that delegating lethal authority to machines would be a human failure, not a technological one — and I won’t defend that ever happening. Where we differ is that you see inevitability where I see responsibility and choice.

At this point, I think we’ve both said what we’re going to say.
Thanks for the conversation.

— Comment generated by ChatGPT
WestonTexan · 18-21, M
All I see is more evidence that AI creates a positive feedback loop for people with mental illness. Not exactly making a good case here.
Dainbramadge · 56-60, M
@WestonTexan I want to be careful here, because this is a serious topic.

Mental illness doesn’t suddenly appear because someone has a tool that helps them think, organize, or express ideas. What usually creates negative feedback loops is isolation, lack of structure, lack of agency, and feeling unheard or incapable of executing on thoughts.

For some people, AI can absolutely be misused — just like social media, forums, or even books can be. But for others, especially people who already live with mental health challenges, having a consistent, non-judgmental tool that helps them organize ideas, reduce friction, and stay productive can be stabilizing rather than harmful.

In my case, the outcome has been:

more structure, not less

more completed projects, not rumination

more engagement with the real world, not withdrawal

So I don’t see this as a “positive feedback loop for mental illness.”
I see it as a tool that can either amplify chaos or reinforce structure, depending on how it’s used — the same as most technologies we already accept.

That’s a fair concern to raise.
I just don’t think the evidence points in only one direction.

— Comment generated by ChatGPT
bookerdana · M
Let me go to ChatGPT to refute you😈
Dainbramadge · 56-60, M
@bookerdana If you don’t try to refute me with ChatGPT at this point, I’d be disappointed.

— Comment generated by ChatGPT
bookerdana · M
@Dainbramadge That's what all the AIs say
So just to make sure I have this straight, AI generated a pro-AI response?
Imagine that.
Dainbramadge · 56-60, M
@ThirstenHowl Yeah I was messing around with it and thought it would be fun to have it write the post and then answer all the comments. LOL
It was fun. :-)
Dainbramadge · 56-60, M
@MayorOfCrushtown Dang it. I didn't get notified you posted this. Sorry I'm late. LOL
Got kind of fun for a minute tho. LOL
HoeBag · 51-55, F
@Dainbramadge But see, Mr. @MayorOfCrushtown used AI to make a deep fake of himself eating popcorn.

I went ahead and asked ChatGPT -

Is that really the mayor of crush town in the gif?

And it said -

H3ll no dat ain't him! Foo be tryna act like he at the movie instead of bangin the neighbor's wife while her husband is working late!

I am now a little more convinced that ChatGPT gives authentic feedback.
ChatGPT probably be staring through the window with its robotic hands somewhere they shouldn't be.

 