Let me just state, right at the beginning, that this is not a thorough, well-researched, nicely written blog post about all the positives, negatives, side effects, minor inconveniences, yada yada yada, of AI. People have even argued that AI is making your refrigerator louder. Like, the impacts of AI have been analyzed to death, and beyond. I can’t possibly add anything that hasn’t been said already.
But I’m not trying to.
Because this post is just pure, unfiltered, deep-seething rage, written only to vent my frustration about how much I fucking hate the current AI hype cycle. On an almost weekly basis, someone at the company I work for, with double my influence and half my technical knowledge, tries to shove some AI fuckery down everyone’s throat, while ignoring all the safewords. And it’s becoming extremely frustrating.
But of course, before I get too angry, I have to say that when I say AI, I’m talking about generative AI. Because that’s what all the hype is about right now. Not about fine-tuned, carefully crafted machine learning models that achieve something impressive and previously thought impossible, but about random word generators that can write passable text, sometimes. It’s autocomplete that got a little out of hand, and now CEOs around the world are convinced that linear algebra has become sentient.
And I’m just so, so fucking tired of it.
Some people like to call generative AI “GenAI”, which always sounds incredibly stupid. But I don’t want to say “AI” in this entire article to refer to generative AI, even though nowadays the two terms mean almost the same thing. So for the sake of clarity, I’m going to use the word “GAI”. Sounds stupid? Good, because GAI in practice is even stupider.
Because if GAI really, truly lived up to all the hype and marketing, then all these tech companies wouldn’t be parading it around and giving it away for free, or charging way too little for it. They’d keep it for themselves, locked away behind 50 factors of authentication, and then take over the world with just a few keystrokes. OpenAI would be putting a Dyson sphere around the sun right now instead of disappointing everyone with GPT-5.
But allow me to take 2 steps back, and elaborate.
How did we get here?
A lot of people think the GAI hype started with GPT-3.5, but that’s not really the case. It certainly exploded then, and everyone and their mom was talking about Chatgippity becoming sentient and taking over the world and launching all the nukes and curing cancer, in that specific order.
But I’ve been around long enough to know that AI has been hyped up in slightly smaller circles for a long time. And it was actually for a good reason. Like, sure, there have been a handful of AI hype cycles over the past few decades, and each one ended with disappointment. But they were not necessarily fruitless.
At the end of every AI hype cycle, we actually had some useful results and developments. The rest of the population considered it a failure cuz “hurr durr no sentience”, but the IT community got some useful algorithms out of it that had their niche use cases.
It’s difficult to say when the most recent hype cycle started, but I’d roughly put it at 2017, when deepfakes started becoming a thing. Back then, making them was really difficult, due to poor tooling. And people used them to make - you guessed it - mostly porn. Because who else would put so much effort into making non-consensual porn other than antisocial horny dorks.
But even before then, there were many interesting AI developments. For example, in 2011, IBM created Watson, used it to play Jeopardy, and absolutely beat the shit out of even the best players. They dabbled in a bunch of other AI projects over the next decade, but all of them failed.
The problem? Watson was overhyped, and in real-world scenarios, it underdelivered. IBM’s CEO said that AI is here, it’s mainstream, it can change everything about healthcare, and usher in a medical “golden age”. But in practice, Watson struggled hard with reading and actually understanding literature, and it often gave advice that experts either didn’t agree with, or that was outright dangerous.
Basically, Watson had the same issues that all GAI has today.
And then, GPT was born
I’ve been following the GAI hype cycle since before it was cool. I generated articles with GPT-2, saw all the incredible things GPT-3 did, and so on. I played around with AI Dungeon (totally not for horny purposes) and I’ve seen all the slow, incremental progress in the AI world.
I remember when GPT-3 came out as a closed beta. All the people on Twitter were really impressed by its capabilities. It was definitely cool seeing all the neat little things it could do, and everyone was generally very excited about what they could use it for. Not to mention GPT-3 was a monumental leap compared to GPT-2. It was trained on way more data, and was running on way more powerful hardware, so it made sense that it’d be better.
But then, GitHub Copilot came out, and that’s kind of where people started becoming more skeptical about GAI. It’s when the first issues - such as copyright and licensing - were raised. And people also pointed to the poor quality of the generated code. But for the most part, no one really cared.
As for me, I kind of just watched from the sidelines as all of this was happening. When ChatGPT came out, I generated the occasional code snippet with it, and asked it some questions, but just like all the previous times with AI, I never really had any use for it.
And recently, just out of pure anger, I even tried out agentic editing, to see what all the hype is about. And just like all the other times I dabbled with AI, I found it useless in my workflow.
But aren’t you afraid of getting behind the times?
Dijkstra used pen and paper. End of discussion.
We can’t even agree on tabs or spaces. Functional or object-oriented. Async or threads. Keep the existing codebase or rewrite it in Rust. Emacs or Vim. Linux or Windows, or Mac.
How can we expect all developers to switch over to a very specific type of development workflow, just because it is allegedly better? I say allegedly, because there was recently a study that found GAI coding actually makes you slower. And I mean yes, the sample size was small, but it’s a very interesting data point, and there need to be more studies like this, preferably with bigger sample sizes.
But I think even the GAI diehards are slowly admitting that GAI coding is not faster, such as in this blog post written by GitHub’s CEO:
> You know what else we noticed in the interviews? Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition. We believe that means that we should update how we talk about (and measure) success when using these tools, and we should expect that after the initial efficiency gains our focus will be on raising the ceiling of the work and outcomes we can accomplish, which is a very different way of interpreting tool investments.
So instead of saving time, developers are “increasing ambition”, whatever the fuck that means. He’s basically saying “don’t look at time savings, look at… uhh… something something ambition!” even though this entire time the supposed benefit of these GAI tools was time savings.
And he’s trying to argue that everyone should be GAI coding. And that anyone who isn’t should just leave their career. What an absolute fucking joke.
This is not to say I haven’t dabbled with GAI tools before. I used Tabnine for short autocompletes way before ChatGPT was born. This was at my previous job, and after I left, I never installed Tabnine on my machine again. Why, you might wonder? Shouldn’t even an autocomplete that simple make you faster?
Well, not really. Sometimes it made impressive completions, but most of the time, it suggested stupid things and got in the way. So I never really felt the need to keep using it. And that hasn’t changed even after all these years.
And I’m not alone in this feeling. One of my coworkers used Codeium for the longest time, but then he switched from VSCode to Zed. And he doesn’t miss Codeium one bit. He never even bothered setting it up again.
Honestly, I think giving programmers proper computers, switching to better tooling, or having development servers would be a way better ROI than spending a fuckload of money on GAI tools.
No, GAI does not “think”
I know, I know, there are thonking models that have a thonking process, but that’s just fancy marketing bullshit. GAI does not actually think, and that limitation is built fundamentally into its architecture.
I could talk about the Chinese room thought experiment, but I’m not going to, simply because there’s a much easier way to demonstrate this.
If you want GAI to explain to you what chess is, how it’s played, what the rules are, and what the history behind it is, it absolutely can. It can even list every minuscule rule change from the very beginning of the game.
But ask it to play a game of chess, and it’ll fail miserably.
And computers playing chess is a solved problem. Stockfish has been capable of beating even the best grandmasters for over a fucking decade. What’s even crazier is that Google managed to beat the absolute best chess engine, Stockfish, using their AlphaZero AI, eight fucking years ago. Because instead of making a slop generator, they trained a proper machine learning model, and ran it on a massive cluster of servers. You know, like how you’re supposed to use AI.
This is the type of AI that can actually discover new things, like new tactics in chess, or fold some proteins. GAI can’t even generate a picture of a full glass of wine, because it has never seen it.
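Don’t take my word for how solved chess is. Here’s a minimal sketch using the python-chess library and a local Stockfish binary (the binary path is an assumption, point it wherever yours lives) that plays out an entire, perfectly legal game:

```python
import chess
import chess.engine

# Assumed path to a locally installed Stockfish binary.
engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")

board = chess.Board()
while not board.is_game_over():
    # A tenth of a second per move is already superhuman strength,
    # and every move the engine returns is guaranteed to be legal.
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)

print(board.outcome())
engine.quit()
```

That’s a purpose-built engine that runs on a potato, never hallucinates an illegal move, and has been free and open source for ages. Meanwhile GAI loses track of the board after a handful of moves.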
Let’s talk about all the bad shit now
And oh boy, there’s a lot. Around the training data, there are questions about intellectual property and copyright. During training and inference, there’s the massive environmental impact. And when the output actually makes it to the user, that output can do quite a bit of damage.
And I’m not talking about exploits and workarounds so that GAI can explain to people how to make a bomb or do 1 (one) terrorism. Those people would fail quite miserably at that.
I’m talking about all the buggy and insecure code generated. Or all those times GAI deleted files or the entire production database. Or how it’s giving some people psychosis, even if they have no history of mental health problems. Yeah, that’s what happens when chat gippity agrees with everything you say.
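To make the insecure code part concrete, here’s the kind of classic blunder these tools love to spit out. This is a hypothetical (but depressingly common) example, not a quote from any specific model:

```python
import sqlite3

# The kind of code GAI happily generates: SQL built via string
# interpolation, which is a textbook SQL injection hole.
def get_user_bad(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"
    # username = "' OR '1'='1" cheerfully dumps the whole table
    return conn.execute(query).fetchall()

# What it should be: a parameterized query.
def get_user_good(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```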
But even if you dismiss all the bad things GAI has done, I’d love it if anyone could give an example of a good thing GAI did. Did it do a science? Did it discover something new? Did it solve global warming? I’ll wait for an answer.
GAI companies are completely fucked btw
Ok, but let’s say, hypothetically, that GAI somehow becomes good enough that it is actually useful. And let’s say that it is somehow cheap enough that people are willing to pay for it, and the companies are actually capable of making money.
For GAI to be cheap, inference has to be cheap, computationally speaking. This would imply that GAI models are resource efficient enough to be running on commodity hardware.
But wait, if that’s the case, then why the fuck would anyone pay for GAI? Just run any of the hundreds of available models locally, for effectively 0 money.
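And running a model locally really is that trivial these days. Here’s a minimal sketch using llama-cpp-python (the model filename is made up, substitute whatever GGUF model you’ve downloaded):

```python
from llama_cpp import Llama

# Hypothetical model file; any quantized GGUF model will do.
llm = Llama(model_path="./models/some-model.Q4_K_M.gguf")

# Run a completion entirely on your own hardware, for $0.
out = llm("Q: Why would anyone pay for inference? A:", max_tokens=64)
print(out["choices"][0]["text"])
```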
Let me know if I’m missing something, but either way, to me it looks like these companies are cooked.
Conclusion
As CEOs have increasingly switched from enthusiastically talking about GAI to straight up mandating it, remember this: the reason this is happening is that GAI is fucking useless. And either the CEOs are too dumb to realize that this is why people are not using their garbage tools, or they know full well, but want/need to keep the hype train going, otherwise the world economy crashes and burns.
There are 3 (but more like 2) ways I can see the GAI hype cycle ending:
- All the money burns up, and a lot of companies go bankrupt
- A new hypeable technology shows up (quantum computers for example), so all the money gets pumped into that instead
- A breakthrough in GAI happens and it becomes sentient (this will not happen)
Many times, I feel like I’m going insane. I see all this hype, and excitement, and financial investment around GAI, and I’m genuinely wondering, what the fuck is going on here? But every time I take a peek into the GAI side, I’m left severely disappointed. Every single time, without exception, I see hype, but nothing actually useful. Sure, the tech demos look cool, but they’re always cherry-picked, and in the real world, GAI tools are just toys.
All this shit feels like “what if the 2008 financial crisis was coming, but like, it was blatantly obvious to everyone”.