AI and Tech Bubbles

Cory Doctorow, the author and blogger, has a new piece well worth reading on the Locus Magazine website. In it, he surveys the progress on AI to date, calls it a tech bubble that will eventually burst, and considers the difference between tech bubbles that leave something useful behind and those that don't.

I've written about AI here before, and it's a hot topic, of course. From my perspective, not that anyone asked, AI is going to create more problems than solutions in the short and medium term. It's not possible to say what the long term looks like yet, but I'm not especially gung-ho on this new technology, at least based on what we've seen so far. For me, the fact that most AI businesses rely on feeding the technology stolen data is a bad thing. The relatively few companies positioning their AI products for use within a corporation's own walls seem OK, but only if they prove useful in the long run, and I've not seen any evidence of that yet. Companies like Microsoft/GitHub with Copilot and OpenAI with ChatGPT, which rely on sucking up all of our data from the internet without permission or compensation, should be shut down.

As Cory writes in the piece, "AI companies are implicitly betting that their customers will buy AI for highly consequential automation, fire workers, and cause physical, mental and economic harm to their own customers as a result, somehow escaping liability for these harms."

If you read this, you're welcome to try to change my mind. Maybe I'm missing out on positive changes already coming out of AI research?

You can leave a comment on this post here.