I don't particularly trust AI for most things, but I did give it a whirl on some software I was writing, just as a test.
It did a very good job on some Python message parsing stuff -- way cleaner code than I would have written in any reasonable length of time. It made very good use of libraries and the code it generated was pretty much flawless.
Then I asked it to write some real-time communication stuff. The result was horrible. I tried several times to convince it that its architecture was flawed. It would agree, and then dream up some equally bad, or even worse, alternative. It seemed to have no grasp of race conditions, concurrency issues, etc. It didn't appear to understand what happens in an interrupt service routine at all. Sort of like a new grad who had never written real-time code before.
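For anyone who hasn't done real-time work: the usual ISR discipline is to do almost nothing in the handler -- set a flag, get out, and let the main loop do the real work. Here's a minimal sketch of that pattern (my own hypothetical example, using a Unix signal as a stand-in for a hardware interrupt; in real firmware the shared flag would also need to be volatile/atomic):

```python
import os
import signal

# Shared flag between the "ISR" and the main loop. The handler only
# sets it; all the actual work happens in the main loop.
data_ready = False

def isr(signum, frame):
    global data_ready
    data_ready = True  # set a flag and get out -- no I/O, no allocation

signal.signal(signal.SIGUSR1, isr)
os.kill(os.getpid(), signal.SIGUSR1)  # simulate the interrupt firing

while not data_ready:
    pass  # in real firmware: sleep / wait-for-interrupt here

print("interrupt handled in main loop")
```

The kind of code the AI kept producing did the opposite -- heavy work inside the handler and unsynchronized shared state -- which is exactly where the race conditions come from.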
On the other hand, I made a typo in a time formatting string, which resulted in a whole lot of metadata getting logged that shouldn't have been. I looked at it a bunch and didn't see the issue so I asked the AI. After a couple of tries asking it what might be wrong, it asked me to show it my code and the data getting logged, whereupon it immediately said "Well, that should be a capital 'S', not a lower-case 's', you silly bugger". And, of course, it was absolutely right. Problem solved.
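The post doesn't show the actual format string, but assuming C-style strftime directives (as in Python's datetime), case matters a lot: for example, uppercase "%S" is zero-padded seconds, while lowercase "%s" is a glibc extension that expands to the full Unix epoch timestamp -- an easy slip to stare straight past:

```python
from datetime import datetime

stamp = datetime(2024, 3, 5, 14, 7, 9)

# Intended: zero-padded seconds via the uppercase directive.
print(stamp.strftime("%H:%M:%S"))  # 14:07:09

# The lowercase slip "%s" (where supported) yields the epoch timestamp
# instead -- a completely different, much longer string, which is how
# one wrong letter can flood the log with unexpected output.
```

One character, two very different outputs -- exactly the sort of thing a second pair of eyes (human or otherwise) catches faster than the person who typed it.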
So I think it has a place in my world, at least for some engineering tasks. I can see it saving a fair bit of time on the right thing. But I'm not afraid of losing my job to it any time soon.
P.S. It's also decent at answering questions like "How do I do [such and such] in [some program or other]." It's surprisingly good (at least to me) at guiding me through the UI on some development tools to find the setting or tool I need. And it has yet to tell me to "RTFM" -- which is certainly a point in its favor.