Over the last two months, I've increasingly come to realize that we're *right at the brink* of everything changing because of generative AI. In Silicon Valley circles, there's a lot of talk about AGI killing everyone and replacing humans, but I don't think there's quite enough discourse about how we're going to deal with what's right around the corner.
When we look at what we can do today and extrapolate one or two years into the future, we get a clearer idea of what's coming: we'll be able to statistically generate *any form of content*. Text, audio, video, you name it. I thought I'd speculate a bit about what that world will look like.
Whatever can be AI-generated, will be.
An immediate effect is that whatever can be auto-generated, will be. Why pay someone to take 10-100x longer to do something a computer could do in a few seconds? It'll be possible to have a photorealistic AI avatar on Zoom calls, so you won't have to get ready (or even get out of bed) to look professional on video. You'll have an AI assistant that can make routine phone calls for you. VCs will automate banger tweets. You might have fully automated 24/7 news anchors, maybe even customized to the news each user wants to hear.
Malicious use
But this can, and probably will, be used maliciously. It'll be possible to have entire subreddits that are just fake accounts. We might have massive botnets, barely distinguishable from humans, swaying public opinion. People will make fake videos that look 100% real and incriminate celebrities and people in power. Any song or piece of visual art will be immediately copied and remixed (which can be great for creativity, but possibly not so great for the artist).
Societal effects
The effects of this are twofold. First, people will begin to trust digital content much less. Anything that is digital can be statistically modeled and replicated, and so anything you’re seeing on a computer could be made by a computer. Your gut reaction when you see a viral video will be that it’s fake.
Second, in-person interactions will carry a massive premium. In a world where AI assistants are managing everything and you don't know whether you're interacting with a human, taking the time to meet face to face will be a real signal of trust and value.
And even after all of this, the sheer amount of content that can be generated means that the recommendation algorithms used by internet platforms will become even more important. But people will inevitably mistrust them. They'll have an impossible task: filtering a never-ending stream of content (they already do, except a never-ending stream of AI-generated content will make it an order of magnitude worse). Whatever they don't surface might as well not exist (when's the last time you checked Google's second page?). In that sense, we're going to have to trust search algorithms to work well, because they'll be the de facto arbiters of truth.
How will we know what’s true?
Reputable news sources, you say! Well, how much do you trust CNN? Fox News? Maybe a mythical 100% unbiased news channel pops up. Suppose they get a video of a politician saying something egregious. Now, how do they know whether it's real?
What if someone gets entity-DDoSed? Maybe a politician really did say something bad, but now a hundred videos are circulating, each one slightly AI-altered in a different way. Media about an entity can be copied a stupidly large number of times, with each copy fudging the information differently. Agnotology to the extreme. Disinformation by overwhelm.
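To make "disinformation by overwhelm" concrete, here's a toy sketch of how cheaply one real quote can be drowned in near-duplicates. Everything in it is hypothetical illustration (the quote, the word-swap table, all of it); a real attack would use a generative model rather than word swaps, which only makes the economics worse:

```python
# Toy illustration of "disinformation by overwhelm": generate many
# near-duplicate variants of a real quote, each fudged differently.
# Purely hypothetical; a real attack would use a generative model.
import random

real_quote = "I will raise taxes on imports next year"
swaps = {
    "raise": ["lower", "double", "freeze"],
    "taxes": ["tariffs", "fees", "subsidies"],
    "imports": ["exports", "fuel", "housing"],
    "next": ["this", "every"],
}

def fudge(quote: str, rng: random.Random) -> str:
    # Replace one word with a plausible-sounding alternative.
    words = quote.split()
    candidates = [i for i, w in enumerate(words) if w in swaps]
    i = rng.choice(candidates)
    words[i] = rng.choice(swaps[words[i]])
    return " ".join(words)

rng = random.Random(0)
variants = {fudge(real_quote, rng) for _ in range(100)}
print(len(variants), "distinct fudged variants, generated for free")
```

The point isn't the mechanism; it's the asymmetry. Producing a variant costs nothing, while debunking each one costs a human's attention.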
So we're going to need ways of legitimizing certain content while delegitimizing the rest. Somehow, companies will need to bake "this is legitimate" into their content. Twitter, for example, will have to step up its game: bots will need to be blocked, and the UX will need to be improved so that people with similar usernames don't look like the same person. Somehow, we'll need tools that can tell whether online content is real (though any such detector could just be used to train a network that generates even more convincing fakes).
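One plausible building block for baking in "this is legitimate" is cryptographic provenance: a publisher signs what they release, and anyone with the publisher's public key can verify that the content hasn't been altered since. Here's a minimal sketch using the Python `cryptography` package; the keypair handling and the idea of attaching signatures to published content are my assumptions about how this could work, not something any platform actually does today:

```python
# Minimal sketch of content provenance via digital signatures.
# Assumes the `cryptography` package (pip install cryptography);
# a real system would also need key distribution and revocation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: the publisher attaches this signature when releasing content.
content = b"Video released by Example News on 2023-01-01"
signature = private_key.sign(content)

# Verification: anyone with the public key can check that the content
# is byte-for-byte what the publisher signed.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                  # True
print(is_authentic(content + b" (edited)", signature))   # False
```

Note the limit: a signature proves *who* published something and that it wasn't tampered with, not that it's *true*. But it would at least make a hundred slightly altered copies mechanically distinguishable from the original.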
And so in a world where all online content could be fake, how do we trust what we see online? How do we know we’re seeing a real human, or a real utterance, or listening to an original song? Eventually, maybe it won’t matter. Maybe we’ll have no choice. I’m genuinely not sure how we’re going to deal with this.
This COULD be a way out, IF it succeeds - https://underlay.mit.edu/
Alternatively, there may be a future where people live-stream their input flows (OBS-Studio-style) to confirm the authenticity of their inputs
It's interesting that you sensed this trend early on!