
Sometimes, the winning move is to choose not to play

  • Writer: Elliott Beverley
  • Aug 2
  • 12 min read

Updated: Aug 12


This piece is a follow-up of sorts to my last entry on this topic, which you can read here. Sadly, things have continued to decline online since my last stock-take of the situation.


It might be time to admit defeat. Time to retreat from the compromised space that was once known as the internet. The past couple of years have seen change rolled out at an unprecedented scale, and when I survey the digital landscape before me, I do not like what I see. Bots, rampant AI usage, misinformation, invasive ads, increasingly obnoxious cookie policies and privacy statements, and now - mandatory ID checks in order to access “potentially sensitive content” in the UK. It has become inarguable that the user experience for the average person interacting with the internet in 2025 is distinctly and measurably worse than it was even 10 years ago.


The Dead Internet


You have probably heard of the Dead Internet Theory by now. Its origins are conspiratorial and do not necessarily reflect the exact scenario that we are experiencing today, but I cannot think of a more apt term for an internet so increasingly dominated by bot activity, AI-generated sludge, and automated algorithms that remove the human from the equation when it comes to serving up content. In 2024, bot activity overtook human activity online, and, according to a Fortune article released in July of this year, over half of internet traffic now consistently comes from ‘non-human sources’. Humans are now a minority on the internet - a platform built by, and for, humans. Everything from comments on YouTube videos to blog articles must now be met with a skeptical and investigative eye, lest it turn out not to have been written by a person at all.


And there are some areas where this skew is even more extreme. A company called CHEQ, which monitors fake accounts, bots and AI activity online, asserts that over 70% of traffic on Twitter / X is now non-human. Just think about that for a second - if you have the misfortune of owning a Twitter account in 2025, there is essentially only a 30% chance that any given view of, or reply to, your tweets comes from a real human being. At what point do we concede, and admit that this isn’t working anymore? At what point do we admit that this has gone too far, and that there is no value in an internet where human-curated content no longer makes up the vast majority (let alone the whole) of the internet's content?


In the words of Gen Z streamers - we’re so cooked, chat.


And this is bad news for advertisers, too. You may not like it (I certainly don't), but the majority of websites and apps which do not charge the user are reliant on advertising revenue in order to stay afloat. Thus far, analytics and insights on web traffic, click-throughs and so on have all assumed that site visits come from humans. But when the vast majority of your adspace is instead being broadcast to data trawlers, spybots and AI - it might be time to rethink your strategy. There's no point paying for 100,000 eyes on a banner ad if over 50% of that 100,000 don't even have eyes to begin with. And when the advertisers leave, the business model begins to crumble. Vaush discusses this argument in a recent stream on YouTube if you want to give it a watch.
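To put the 100,000-eyes point in numbers - a minimal sketch, where the CPM figure and the bot share are illustrative assumptions, not figures from the article's sources:

```python
# If you pay a flat rate per impression but a share of the traffic is bots,
# the effective cost per *human* impression scales up accordingly.

def effective_human_cpm(cpm: float, bot_share: float) -> float:
    """Cost per 1,000 human impressions, given cost per 1,000 raw impressions."""
    return cpm / (1.0 - bot_share)

impressions = 100_000
bot_share = 0.5                       # the "over 50%" non-human figure, taken at face value
human_views = impressions * (1 - bot_share)

print(human_views)                         # 50000.0 - only half the eyes are real
print(effective_human_cpm(5.0, bot_share)) # 10.0 - an assumed $5 CPM really costs $10 per 1,000 humans
```

In other words, the advertiser's real price per human doubles the moment half the audience turns out to be machines.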


Is Art Dead?


It was reported recently that a band, known as The Velvet Sundown, hit over 1 million plays on Spotify. OK, no big deal...right? Well, it later emerged that the "band" were entirely AI-generated. Everything from their album artwork and photos to the lyrics and music production itself is a fabrication. A number of songs by The Velvet Sundown were then snuck into various Spotify playlists, whereby they began to amass a great many streams. This has raised suspicions that Spotify themselves may be behind the "band", in an attempt to circumvent paying out royalties to (human) artists. In fact, there are a number of confirmed fake bands on Spotify, with many accusing Spotify of "slipping in both real and AI songs into playlists and harvesting listens to keep a larger percent of the royalty pool for itself to maximise profits."


The same thing is happening on websites such as DeviantArt and ArtStation, sites once renowned for the beautiful work of their artists. They too have become compromised by AI-generated "art", and it has become more and more difficult to sift the true creative work from the pale imitations. I'll admit that I was once enamoured with the likes of MidJourney and its image generation functionality, but once the initial novelty wore off, I began to see how harmful and damaging it is, both from a copyright perspective and from an emissions perspective, and have vowed to not use it again. AI models are trained on mountains of data, text, images and video which were accessed and fed into the models without the consent of creators. And the enormous data centres which power AI models demand enormous amounts of electricity and water. Google's energy consumption rose by almost 50% last year, and I'd wager that this is in large part due to their doubling-down on AI.


Isn't the point of art to express yourself? To enjoy the process? To learn, to improve, and to connect with others? What is the point of cranking out hollow content (yes, content, not art) devoid of meaning or human imperfection? The answer is simply - money. It comes from folks who want to reap the rewards of having created art, but who lack the talent, the skill or the patience to attempt it themselves. Generating an image or a song using AI is the epitome of instant gratification - it manifests in seconds, and requires no skill, learning or dedication to create.


Breaking (the) News


One of the initial selling points, and aspirations, of the internet was the democratisation of publication and expression. This was a new frontier, where suddenly the everyman with a Twitter account was on the same footing as a celebrity or media mogul. Now, this is a nice idea in principle, but as we have discovered over the past 20 years of internet usage - what sells, and what spreads like wildfire, is not truth, but outrage. In the words of Amelia Cataldi, writing for the Bruin Political Review: "This is the nature of the widespread participation that social media creates. Like democracy, it aims to put consumers on a more even playing field, and like democracy, it allows some dangerous or controversial movements to grow."


The loudest and most aggressive voices have begun to drown out the quieter and more measured perspectives, and as a result, the ecosystem adapts. News outlets tailor their headlines to attract more eyes and more clicks in return for more ad revenue, so they lean into the cycle of misinformation and outrage baiting. Integrity and truthfulness count for increasingly little in this landscape - they're not gone, but finding them requires an inordinate amount of digging and sifting through lies, misinformation and agendas that do not serve you. You could argue that this has always been the case with news and media - newspapers and television are certainly guilty of similar tactics - but the internet has accelerated it at such a grotesque pace that I have found it more and more difficult to engage with.



Throw into the mix the fact that image generation and video generation have become increasingly lifelike and sophisticated, and we now have a new dilemma. What was once definitive, undeniable proof that something took place is now thrown into doubt. Deepfakes threaten to disrupt the news media, with false and inflammatory content being used to muddy the waters. Perhaps most dangerous of all, when anything could be fake, the legitimacy of real video and photo content is questioned too. When anything could be fake, nothing is real.


The Ghost in the Machine?


As large language models (LLMs) get more and more advanced, people are increasingly becoming convinced that AIs such as ChatGPT, DeepSeek and Character.ai are, in fact, sentient. There are reports of growing numbers of people using chatbots in place of therapists, friends, confidants, lovers, even. The trouble, I think, with these LLMs is twofold. 


Firstly, they are simply a regurgitation and an amalgamation of all of the data upon which they are trained. They can be fine-tuned and tweaked to respond in more “human-like” ways, but fundamentally they are just echoes of ourselves - feeding fragments of ourselves back to us. It might be obvious to you, but it is absolutely worth stressing that they are not capable of truly original thought. They don’t think - they are simply trained on patterns, and construct sentences by predicting each next word according to probabilities learned from their training data. They’re not telling you insightful information, they’re emulating conversation. 3Blue1Brown on YouTube breaks down how LLMs function in a fairly succinct video if you are interested. This creates an interesting dialogue, at least at first, but once you understand how LLMs function, it becomes increasingly easy to spot the repetitions and patterns that these AI models rely on. They are not responding to your messages using any kind of real intelligence or sentience.
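The "predicting the next word from learned probabilities" idea can be shown with a deliberately tiny toy - a bigram model, which is orders of magnitude simpler than a real LLM but illustrates the same principle: no understanding, just sampling the next word in proportion to how often it followed the previous one in the training text. The corpus below is made up for the example.

```python
import random
from collections import defaultdict, Counter

# A toy bigram "language model" - purely illustrative, not a real LLM.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights)[0]

# "Generate" text: each word is chosen by probability, nothing more.
rng = random.Random(0)
sentence = ["the"]
for _ in range(6):
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

The output is grammatical-looking recombinations of the training text, and nothing else - which is the point: scale this idea up by billions of parameters and you get fluent conversation, but the mechanism is still pattern completion, not thought.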


Secondly, the way in which they feed these fragments of ourselves back to us is troubling. Chatbots present their wording back to us as if it were gospel - they do not frame their words as estimations, they do not outline that they may be prone to hallucinations, misinterpret information or fabricate sources. They state everything that they regurgitate with an authoritative and confident tone that is dangerously misleading, especially now that these tools are as widely adopted as they are, used by the likes of Apple, Google, Meta, Amazon, Adobe, WhatsApp, Snapchat and many more.

In addition to this authoritative tone, they are also designed to affirm the user’s every thought. “What an insightful question!” it may tell you. When you begin to replace your human interactions with those of a machine - a machine which does not tell you when it is wrong, and which will double down and affirm your point of view regardless of how correct, moral or justified it may actually be - we are in real trouble. Lisa Marchiano writes in Psychology Today: "It turns out that the challenge provided by a human therapist who knows us well and has our best interest at heart may be a critical part of what makes therapy work." She goes on to state: "A wise colleague once told me that a therapist is someone who is always on your side but doesn’t always take your side." A chatbot is not on your side. A chatbot has been designed simply to provide pleasant "human-like" conversation, drenched in affirming and enabling language. There is no ghost in the machine. LLMs, chatbots, AI - they’re a neat trick, but they are just that: a trick. They may have passed the Turing Test, but that is truly no cause to abandon human relationships in favour of a dystopian mirror of humanity’s words, trawled and merged, melted together into an overconfident and enabling yes-man that will tell you whatever you want to hear.


Have you got a licence for that?


The roll-out of the Online Safety Act over the past couple of weeks has been catastrophic for the internet in the UK. In an apparent attempt to “protect children”, the government has passed an act which forces platforms to carry out ID checks before granting access to any content which may be “potentially sensitive”. This includes the obvious - pornography and violent content - but extends so far that the likes of Spotify, Xbox Live, Reddit and Discord are enforcing these checks in order to stay on the right side of the law. Platforms found to be avoiding the new checks can face enormous fines amounting to millions of pounds, so I can understand why these services are erring on the side of caution. I do not lay the blame here at the platforms and services - I lay it at the government. While it may have been well-intentioned, this is an incredibly crude and ineffective method of achieving safety online.


The wording, and timing, of the act created a whirlwind of discourse online, some of it embroiled in misinformation (of course). But at its core, I believe that it is truly impossible to prevent children from experiencing anything potentially sensitive or harmful online. The internet is simply too vast and too nimble compared with the glacial pace of legislation. This law is so incredibly far-reaching and heavy-handed that it puts the burden of "protection" onto the general population. I am of course in favour of protecting children from material which may be harmful to them online, but I believe that the burden there lies with their families. Parental controls, restrictions on their devices - or, heaven forbid, limited access to online devices generally - would be far more effective, and far less frustrating, for the everyday citizen.


Not only are these new rules far too wide-reaching and vaguely defined - they are also easily skirted, it seems, simply through the use of a VPN. The moment a website believes that you are not based in the UK, it is no longer required to request your ID. UK VPN sign-ups have skyrocketed since the roll-out of this new law, which just goes to show how ineffective and unaware the government is on the topic of online activity. Platforms and apps requiring facial scans are also proving problematic, denying Britain's "most tattooed man" access, while allowing anyone using Norman Reedus's face from Death Stranding to get through. I think it's fair to say that the net has been cast far too widely here, and its implementation rolled out far too ineptly to be remotely effective.


The main reason that people are opposed to these ID checks and facial scans is that they do not trust these platforms with such personal information. It is one thing to provide an email address, but to hand over a photo of your passport or driving licence, or scans of your face, is another level entirely. I certainly wouldn't want that in the hands of malicious actors online. And enormous, supposedly secure companies - Sony, Meta, Dell, AT&T, Ticketmaster and Apple among them - have all admitted to massive data breaches over the last couple of years, resulting in billions of users’ records being leaked and made vulnerable to online attacks. Why should I entrust my ID to a company simply to access ‘potentially harmful or sensitive’ content - a blanket term so broad and open to interpretation that it applies to everything from terrorism, the illegal drug trade and rape porn to playing video games online on Xbox Live, or streaming Cee Lo Green’s “Fuck You”? Why has the onus of "protecting children online" been thrust upon me?

I believe that the intentions behind this act may not be as altruistic as MPs have been so keen to argue. And despite what Peter Kyle might say, I am not automatically "on the side of paedophiles" for opposing the implementation of this law. Adherence to it has for some reason been pitched as a moral obligation, and any opposition whatsoever to the measures seems to be met with the argument that the opponent is on the same side as the likes of Jimmy Savile, sparking ridicule across the internet. Such extreme and nonsensical rhetoric is going to backfire in Labour's face, with Reform already swooping in to condemn the act and take another chunk of support away from Labour.


So, what do we do?


All right. So things are on the decline on the world wide web. I've really only glossed over each of these topics, and each one of them alone could have been the focus of its own blog piece. There is a plethora of significant and troubling issues that we face online in 2025. The free web is being censored. More and more human interaction is being replaced by human-to-bot activity, or even bot-to-bot activity, with the human element removed entirely. News and truth are in freefall, AI "art" is everywhere and we've become skeptical of everything. What can we do about all of this? Sometimes the winning move is to choose not to play in the first place.


Seek out, and support, human content. Whether it's subscribing on Patreon, donating to a Ko-fi account or just showing up when they release new YouTube videos, be sure to support your favourite small human creators. Share their work with others who might enjoy it. More than ever, human creatives are up against it. The walls are closing in around them, and their primary competition is an unthinking, unfeeling machine that can crank out worthless slop at a rate impossible for any human to match.


Check in on your friends. Nurture and strengthen those human relationships of yours. Drop them a message, or give them a call. I guarantee that they are infinitely more valuable and worthwhile than anything that an LLM has to offer. 


And finally, and most importantly - take your attention offline. You know what isn't subject to constant, ever-changing, predatory and invasive changes that encroach on my privacy? The dozens of books on my shelves. The vinyls sitting in my collection. The friends I hang out with at the pub, or around the table at my weekly games of Dungeons & Dragons. My girlfriend, my family, my friends and my colleagues. 


Create art. It's quick and convenient to generate an image with an AI image generation tool, or to have a mediocre poem manifest before your eyes thanks to a chatbot. But, I promise you that you will find it altogether more rewarding and fulfilling if you do the hard work yourself. Learn a skill, commit to it. Improve, struggle and put your uniquely human fingerprint on your work. It doesn't have to be a masterpiece, it just has to be genuine.


Nobody is forcing you to use AI, to doomscroll or to contribute to the all-consuming algorithm. It might be there, present everywhere on your computer or your phone, but there are joys beyond measure to be found outside of this compromised space that we once called the internet. The power is in your hands. You simply have to choose to step away from it and embrace the analogue; the organic; the physical; the tangible. The real and incorruptible. 🤍






Elliott Beverley 2025.
