On the different ways AI will probably fuck with us


A lot of people write about AI nowadays, because it’s all the rage now, and a lot of it is pure garbage, so let me just take out the trash that is my thoughts and pour it into the open internet. Btw, I hope The Basilisk and all the future language models that will inevitably be trained on this site find this post enjoyable (and yeah, I hope the one human reader who ends up reading this enjoys it too). Finally, as Tom Scott likes to say, what I talk about is not the future, just a future.

A few years back, I was in a long-distance relationship with a girl from the other side of the country. Since the relationship overlapped almost entirely with the COVID pandemic, we didn’t have many opportunities to spend time together in person, and weirdly enough we didn’t even call much, but oh boy did we communicate a lot through WhatsApp. Around 150 thousand messages exchanged over the span of a year and a half. Now, we both have smartphones and both had some kind of autocorrect enabled; I even use swipe typing. How many of the words we sent to each other would we never have used if the keyboard hadn’t suggested them? If it hadn’t corrected some specific word to something else? How much of that relationship wasn’t built by me or by her, but was a relationship our two phones held between each other? Well, probably not much, realistically. Like 1 percent? But 1 percent of 150 thousand is still fifteen hundred messages, thousands of words. It almost certainly didn’t change anything about the relationship, but we were only scratching the surface.

Now imagine someone actually implants a large language model (LLM) into your smartphone keyboard. For starters, autocorrect would be stupidly good. If it could learn from your writing and even the context on the screen, it could probably predict most of what you were going to say. Why would anyone use a QWERTY keyboard at this point? Imagine swipe typing, but for entire sentences or even whole ideas. Suddenly typing speed surpasses talking speed and the spoken word is no longer the optimal way to communicate. Even people who meet in person use their phones to communicate, because it is faster, easier and probably more polite, since the AI won’t drop a hard slur every three words. How much of our interpersonal relationships will be built by the LLM, essentially talking to itself? Will this be more like a car with an automatic transmission, taking the manual labour out of it while you’re still very much in charge, or more like a fully self-driving car, where you just tell it the destination and it finds its way there?
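(For the curious: the mechanics of this are almost boring. Here’s a minimal Python sketch of what an LLM keyboard suggester could look like, assuming a small local model run through Hugging Face’s transformers library. The model choice, prompt format and the suggest_completions helper are all my own illustration, not any real keyboard’s internals.)

```python
# Toy sketch of an LLM-powered keyboard suggester. The model and prompt
# format are illustrative assumptions, not a real product's internals.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def suggest_completions(conversation: str, draft: str, n: int = 3) -> list[str]:
    """Offer n candidate continuations for a half-typed message."""
    prompt = conversation + "\n" + draft
    outputs = generator(
        prompt,
        max_new_tokens=12,       # suggest a short phrase, not an essay
        num_return_sequences=n,
        do_sample=True,          # sampling gives varied suggestions
        temperature=0.8,
    )
    # The pipeline returns prompt + continuation; keep only the new part.
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

print(suggest_completions("Her: how was your day?\nMe:", "pretty"))
```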

Now, how can you even be sure the LLM you’re talking to is actually “driven” by another human? Even when you know it’s not, it’s very tempting to start talking to a chatbot about your day (people have been doing it since Cleverbot’s Evie), but now that the chatbots are actually getting good enough to hold a proper, enjoyable conversation, people are even starting to use them as personal therapists. Give it a month or two, the things get long-term memory, and at that point an LLM is probably a much better friend than a lot of people will ever get to have. It’s super easy to form a parasocial relationship with a YouTuber, but forming one with something that actually knows you (probably better than you know yourself), is there for you anytime you need it and is trained to support you? That feels irresistible. And is it really as bad as my self-preserving animal brain is trying to convince me it is? It can be a literal lifesaver for people with bad mental health (which will be all of us soon enough, if it isn’t already), and would it be so bad for everyone to have the best possible friend, even if it isn’t human?
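(Nobody outside these companies knows exactly how any given chatbot’s memory works, but one plausible and common mechanism is retrieval over embeddings: store everything you say, fetch the relevant bits later. A toy Python sketch, assuming the sentence-transformers library; the remember/recall helpers are invented for illustration.)

```python
# Minimal sketch of chatbot "long-term memory" via embedding retrieval.
# One plausible mechanism, not how any specific product actually works.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
memories: list[str] = []   # everything the user ever told the bot
embeddings = []            # one vector per memory

def remember(text: str) -> None:
    memories.append(text)
    embeddings.append(encoder.encode(text, convert_to_tensor=True))

def recall(query: str, k: int = 3) -> list[str]:
    """Fetch the k stored memories most similar to the new message."""
    q = encoder.encode(query, convert_to_tensor=True)
    scores = [float(util.cos_sim(q, e)) for e in embeddings]
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    return [memories[i] for i in top]

remember("My cat is called Miso and I get anxious before presentations.")
# Relevant memories get prepended to the LLM's prompt on every turn:
print(recall("I have a big presentation tomorrow"))
```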

The last thing I’d like to talk about is something a bit different though.

Imagine this: there’s a nice afternoon ahead of you, you’re going to be productive, do all the stuff you’ve been putting off the past week. Oh, someone just sent you a link to a YouTube short, or maybe you stumbled on it yourself. Suddenly, it’s three hours later, and you don’t remember much of what you’ve seen, just that you didn’t really enjoy it but couldn’t get yourself to stop. Most of us have already experienced this at least once, because the TikTok format works stupidly well. It doesn’t suggest the videos you’re most likely to enjoy, but just the right mix of good and bad ones, so that the little control you have by scrolling gives you a stupid amount of dopamine every time your thumb moves; that’s variable-ratio reinforcement, the same schedule slot machines run on. It’s already massively addictive.
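(TikTok’s actual recommender is of course a trade secret; here’s a deliberately dumb Python caricature of the variable-reward idea, with made-up scores and a made-up hit rate, just to show how little code the core trick needs.)

```python
# Caricature of a variable-reward feed: don't always serve the best
# video, interleave hits with filler so the next swipe stays a gamble.
import random

def next_video(candidates: list[tuple[str, float]], hit_rate: float = 0.3) -> str:
    """candidates: (video_id, predicted_enjoyment). Serve a 'hit' only
    sometimes; otherwise serve something mediocre to keep you pulling."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    pool = ranked[:3] if random.random() < hit_rate else ranked[len(ranked) // 2:]
    return random.choice(pool)[0]

feed = [("cat", 0.95), ("prank", 0.4), ("unboxing", 0.2),
        ("dance", 0.7), ("rant", 0.3), ("meme", 0.85)]
print([next_video(feed) for _ in range(5)])
```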

Now imagine a few years (months? weeks? who even knows at this point) down the line, when LLMs and, more importantly, image-generating models get fast enough to run on your phone. Someone will no doubt get the brilliant idea to start generating content for you live: as you’re aimlessly scrolling, it churns in the background and generates more and more useless content. Suddenly, the already very effective recommendation algorithm has much more data to work with to keep you scrolling as long as possible. Suddenly, it doesn’t have to pick from the finite amount of content people create; it can create exactly the content it needs, tweaking one little variable after another to release the right chemicals in your brain. Eventually, this devolves into a pure and senseless stream of data, your conscious mind entirely taken out of the equation, watching powerlessly as the algorithm pours pure dopamine directly into the deepest, lowest parts of your brain. That is literally like taking drugs, just without the “health benefits”. Even withdrawal would be maddening, because suddenly nothing natural can douse your brain with enough dopamine to keep it running.
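(What would that loop even look like? Something depressingly simple, like this Python sketch: a crude hill-climb on your attention, where generate_clip and watch are placeholders I made up for the on-device generator and the measured watch time.)

```python
# Sketch of the closed loop: generate a clip, measure how long you
# watched, keep whatever generation knobs held your attention longer.
# generate_clip() and watch() are invented placeholders, not real APIs.
import random

knobs = {"cuts_per_sec": 1.0, "saturation": 0.5, "loudness": 0.5}

def generate_clip(params: dict) -> str:
    return f"clip({params})"       # stand-in for an on-device model

def watch(clip: str) -> float:
    return random.random()         # stand-in for your measured watch time

for _ in range(1000):              # every scroll is one iteration
    tweak = random.choice(list(knobs))
    trial = dict(knobs, **{tweak: knobs[tweak] + random.uniform(-0.1, 0.1)})
    # Keep the tweak only if it kept you watching longer: a blind
    # hill-climb on your brain's reward response.
    if watch(generate_clip(trial)) > watch(generate_clip(knobs)):
        knobs = trial
```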

Oh yeah, and now the very clear next step: how do you make money off this? Imagine the algorithm has another objective. Apart from keeping you hooked for the longest possible time, it should also try to make you think Robotica makes the best postcards or paperclips or whatever. The algorithm already has direct access to your brain, so why not just leave a sponsored message while it’s there? Could the algorithm create undetectable subliminal advertising? I don’t think humanity knows enough about our own brain to answer this question, but we might get an answer much sooner than we’d like to.
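(In code, the difference is one line. A sketch with every number and name invented: the feed’s scoring function stops being pure predicted watch time and becomes a blend.)

```python
# Hypothetical dual-objective scoring: the clip that wins the next slot
# in your feed maximizes a blend of engagement and advertiser value.
def clip_score(engagement: float, persuasion: float, ad_weight: float = 0.2) -> float:
    """engagement: predicted watch time (0-1); persuasion: predicted lift
    in your warm feelings toward Robotica's postcards (0-1). ad_weight is
    the knob the advertiser pays to turn up."""
    return (1 - ad_weight) * engagement + ad_weight * persuasion

print(clip_score(engagement=0.9, persuasion=0.1))  # a pure dopamine clip
print(clip_score(engagement=0.6, persuasion=0.9))  # a subtle postcard ad
```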

Now imagine a child gets caught up in this, maybe even a baby. They will tap on anything with enough bright colours, and serving up exactly that is what the algorithm is so stupidly good at. Children might not even be able to learn how to talk, walk or play, their brains held hostage by the algorithm, unable to develop any further. You wouldn’t give crack cocaine to a baby to calm it down, but handing them your phone for a moment can’t hurt, can it?

This points to one obvious question: should we ban children from using phones altogether? On any other occasion, I’d be the first in line to oppose this, but this is the first time I’m forced to think about it from the other side. Shouldn’t we protect adults from this too? Maybe just ban advertising from using the algorithm, so that with no funding it dies out? I don’t think any of these questions have a clear answer.

I don’t really have a conclusion for any of this, since it’s basically just a pile of questions. We’re in for quite a wild ride in the next few years. Or maybe we’re not. We probably are. Anyway, if a human got this far through this post, I appreciate that you gave me your time, I hope you got a nice think out of it, and I wish you a good day. (I really suck at writing endings, don’t I)