A couple of technology/economics experts rip into what they see as a growing AI bubble in this Salon article. Their argument centers on the premise that while the various Large Language Models (LLMs) sound impressive, they are really just recombining existing information according to statistical probabilities, not actually analyzing anything or providing thoughtful responses.

LLMs are the latest wave of artificial intelligence hype. IBM’s Dr. Watson was supposed to revolutionize health care. Ten years and $15 billion later, it was sold for parts. Radiologists were supposed to be obsolete by 2021; there are now more radiologists than ever. Fully self-driving cars were supposed to be zipping back and forth between Los Angeles and New York by 2017; we’re still waiting for a car that can drive down a street while reliably avoiding pedestrians, bicyclists, and construction crews.
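To make that "statistical probabilities" point concrete, here's a toy sketch (mine, not the article's): a bigram model that picks each next word purely from frequencies counted in its training text. Real LLMs are enormous neural networks, but the basic loop of predicting a distribution over the next token, sampling, and repeating is the same shape, and nothing in the loop involves understanding. The corpus and seed word below are made up for illustration.

```python
import random
from collections import defaultdict

corpus = ("the model predicts the next word but the model never understands "
          "the word it only counts how often one word follows another").split()

# Count which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(seed="the", length=12):
    """Emit words by sampling from the observed next-word counts."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no observed successor: dead end
            break
        # random.choice over the occurrence list samples in proportion
        # to how often each successor appeared in training.
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate())  # fluent-looking recombination, zero comprehension
```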

One could certainly mount the argument that “AI” (which is really a shitty catch-all term) will get better over time, and proponents of the technology posit that feeding the models more information will only make them stronger. But detractors make the case that as AI-generated crap (disinformation, falsehoods, etc.) accounts for an ever-growing proportion of the information ingested into the models, they will actually get less reliable over time.
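That detractors' worry has a name in the research literature ("model collapse"), and the mechanism is easy to sketch. In the toy below (my illustration with made-up numbers, not anything from the Salon piece), each "model" is just a word-frequency table trained on text sampled from the previous model. Once a rare word fails to be sampled, no later generation can ever produce it again, so diversity only ratchets downward.

```python
import random
from collections import Counter

random.seed(42)

# Generation 0: "human" data with a long tail of rare words.
vocab = [f"word{i}" for i in range(50)]
weights = [1.0 / (i + 1) for i in range(50)]  # Zipf-ish frequencies
data = random.choices(vocab, weights=weights, k=300)

for generation in range(10):
    counts = Counter(data)
    print(f"gen {generation}: {len(counts)} distinct words survive")
    # The next generation trains only on text sampled from the current
    # model; anything the current model never emits is gone for good.
    data = random.choices(list(counts), weights=list(counts.values()), k=300)
```

The one-way ratchet is the point: in this setup the set of surviving words can only shrink from one generation to the next, which is the formal core of the "less reliable over time" claim.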

I suspect the truth, as always, lies somewhere in the middle.