The dead Internet theory is an online conspiracy theory that asserts that the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity. (Wikipedia)
People tend to think of conspiracy theories as binary: true or false. But these theories are like rumors from the pre-internet era. Rumors are not facts (what counts as a fact is highly disputable, too), and you can't believe them blindly, but rumors have value too.
Why? Because there is usually a reason these rumors started to circulate. Things rarely come into existence out of nothing.
It's the same with this dead internet theory.
The majority of people still think that what happens on the Internet is organic. Humans created the internet; therefore, what we see has to be human content too. That was true in the early days.
But now, it is estimated that nearly half (49.6%) of all internet traffic came from bots in 2023, a 2% increase over the previous year and the highest level Imperva has reported since it began monitoring automated traffic in 2013 (source: Imperva).
That means treating the internet as a place where what you see on the screen is true and created by humans is, to a large extent, a false assumption.
Half of it is created by machines and the other half by humans, who have plenty of incentives to distort the truth: pushing narratives, influencing the masses, selling stuff, or simply manipulating viewers to get something out of them.
I've noticed this recently in some major mainstream media. They publish highly controversial articles pushing unpopular narratives and allow discussion below the article to appear democratic and pro-free speech.
However, the discussion is immediately and heavily censored, and users who present views contrary to the narrative are banned.
Only moderate posts are allowed, and the posts that support the article's opinion collect the most likes.
For me, these highly liked posts alone are evidence of manipulation, simply because the “winning” posts are boringly generic compared to the others. Genuine controversy produces heated debates with staunch supporters and opponents and almost nothing in between.
But these artificially manipulated discussions create the impression that there is no controversy. You see only moderate voices and the most liked posts supporting the article's narrative.
Voilà! Public opinion is created, and people susceptible to normative or informational conformity (the overwhelming majority of the population) are influenced significantly.
You could see it best during the COVID-19 game on a global scale.
Truth is a quickly diminishing commodity on the internet nowadays. With the rise of AI, the point at which AI content becomes indistinguishable from human-produced output is rapidly approaching.
But what will be the outcome of all of this? Is a nihilistic society mindlessly consuming AI entertainment our future?
I don’t see it so pessimistically. Actually, I think that pushing society into an endless-scrolling lifestyle will have the opposite effect. The digital environment will become so toxic for the human psyche that people will start to leave it. YouTube videos and viral tweets will never become substitutes for authentic life and interaction with real people. You can see the pushback against it even here on Substack.