title: The Great AI Disconnect
desc: The dystopian AI future that is very close.
date_published: 2024-11-09T00:00:00.000Z
published: true
I have an uneasy feeling about the way the digital world is getting cluttered with increasingly realistic AI-generated text, images, audio, and video. In the next few years, AI content will become more convincing as well as cheaper to produce. Social media sites will become saturated with it, and there won't be any reliable way of detecting it. Moreover, people are already primed to believe what they want to believe, so it shouldn't even be that hard to convince them of lies. To make matters worse, it is far easier to spread AI content than it is to correct people's misconceptions. Imagine a photo that gets seen by 10 million people in 1 hour, something that happens every single day on Twitter, Instagram, TikTok, Facebook, etc. It might take a few hours, maybe a few days, to figure out whether that image was actually real. How many of those millions of people would see the correction that the photo was fake? I'd imagine a tiny fraction, which is exactly what makes spreading AI content so powerful.
I am anticipating a critical event where millions of people are misled by AI-generated content, resulting in people losing all trust in anything online. There are already people who don't feel they can trust any news because of the "fake-news" rhetoric. How will people be able to trust anything that they don't see with their own eyes? How will information spread when people are disconnected from the internet? Will there need to be a slow verification process by multiple third parties before anything can be posted? These are a few of the questions I have been thinking about, and it is very hard for me to see how this catastrophe can be avoided.
The one possibility I see is that people will start to rebel and destroy data centers, leading to all GPUs being consolidated within an international organization that has many layers of redundant oversight and transparency. Maybe the GPUs will even be air-gapped so that nothing can get out, meaning that all research and development would have to happen at this one organization. There would undoubtedly be a black market of GPUs that didn't get seized, or GPUs that got "lost" somewhere in the manufacturing process. Even a single DGX machine could be pretty useful for creating a large amount of AI content, and since it is about the size of a desktop PC, it would be pretty easy to conceal and smuggle.
I suppose one silver lining is that people may start developing more in-person relationships after disconnecting from the internet. Still, I think the negative aspects far outweigh the positive ones. I predict that the Great AI Disconnect will happen before 2030.
Why don't I think that AI detectors will be useful? There are already AI detectors being used to check students' essays, with alarming false-positive rates. I've worked on a project related to this task, and it seems like something that is extremely difficult to detect reliably. Moreover, just like cybersecurity, this is a cat-and-mouse game where the attacker is always trying to beat the latest defense, and vice versa. There is always a new way to beat the latest technique, and I have a feeling that it is far easier to break detection than it is to build a robust detector.