None of the people in this photo exist. They’re all deep fakes generated by a site called This Person Does Not Exist. Refreshing this site to see new pictures of made-up people is harmless fun. But what happens when these realistic fakes become moving images with voices and opinions?
What happens when one of these images is … you?
I host a small marketing leadership retreat called The Uprising. At our last online event, we had a mind-bending presentation from Nina Schick, the author of “Deep Fakes,” explaining the implications of this emerging trend.
This discussion had a profound impact on the participants, and I thought the topic was so important that I wanted to share it with a broader audience. You need to know what is coming in our very near future.
Here is my video interview with Nina as well as a transcript of our talk.
Get ready. The world is about to go synthetic …
Mark: Nina, you’ve done an amazing job positioning yourself as an expert in this emerging technology. Is it a threat, an opportunity, or even a nightmare? Tell us a little bit about the premise of your book.
Nina Schick: My background is in information warfare and geopolitics, so I saw this thing emerging which really is going to be a paradigm change in not only the way we communicate but also the way that humans perceive the world and perceive themselves … and I am talking about AI-generated synthetic content, the ability for AI to manipulate or wholly create content that is fake. It can be a video, it can be a picture. It can be a piece of text, it can be a piece of audio. This is a nascent technology which has only been emerging for the last three years, and it really is due to the revolution in deep learning.
AI is now getting to the point where it can actually generate synthetic media. This is going to be immensely valuable for a whole plethora of creative industries.
It’s going to rewrite the future of everything from fashion to film to corporate communications because AI is actually going to democratize the ability for anyone to generate synthetic or fake content with no skill and no money needed.
It is also going to become a very, very powerful weapon of mis- and disinformation. Misinformation and visual manipulations have been around for many decades. The Soviet dictator Joseph Stalin was a great proponent of doctoring photographs for visual disinformation. But what AI can do is far more sophisticated than anything we’ve seen in the past.
It’s also going to be accessible to anyone. So while this creates a paradigm change in the future of all content production and human communication, which is going to be terrifically exciting, this technology is undoubtedly also going to be weaponized by bad actors.
“The future is synthetic”
Mark: I’ve written a lot about that. Where corruption can occur, corruption will occur, of course. One of the things that was so interesting in your book, and I’m going to get this all wrong because I’m not a technologist, is that you said one of the fellows who developed the face-swapping technology created two AI systems that competed against each other, trying to fool each other to get better and better and better. Absolutely fascinating stuff. And the technology is magnificent.
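The setup Mark describes is known as a generative adversarial network (GAN): a generator and a discriminator train against each other until the fakes become indistinguishable from real data. A real GAN uses neural networks and gradient descent; the toy sketch below captures only the adversarial loop, with made-up names and a single number standing in for "data":

```python
# Toy sketch of the adversarial idea behind GANs: a "forger" keeps
# adjusting its output until a "detector" can no longer tell fake from
# real. All names and values here are illustrative, not a real GAN.

REAL_VALUE = 10.0  # stand-in for the real data the forger tries to mimic

def detector(sample, real=REAL_VALUE, tolerance=0.5):
    """Flags a sample as fake if it falls too far from the real data."""
    return abs(sample - real) > tolerance  # True means "looks fake"

def train_forger(steps=100, lr=0.2):
    """The forger nudges its output toward whatever fools the detector."""
    guess = 0.0
    for _ in range(steps):
        if detector(guess):
            guess += lr * (REAL_VALUE - guess)  # move toward the real data
        else:
            break  # detector is fooled; the adversarial game has converged
    return guess

fake = train_forger()
print(detector(fake))  # -> False: the detector can no longer tell
```

In a real GAN both sides improve simultaneously, which is why each refresh of This Person Does Not Exist produces a face the detector side could not flag.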
What are some of the non-obvious things that you’re seeing? Maybe some unintended consequences that people should be aware of.
Nina: For anybody who’s a marketing professional, the future is synthetic. I have no doubt that within the next five to seven years it’s going to become increasingly evident that all the content we engage with is either wholly or partially generated by AI.
This is why I make an important distinction at the beginning of my book. The taxonomy around this field really hasn’t been settled yet, but like all other powerful technologies in the past, this one is merely an amplifier of human intention. So just as there are going to be misuses, there will be many incredible applications.
The first thing to note is the distinction: synthetic media is AI-generated content with viable commercial applications, and deep fakes are its misuse for mis- and disinformation. So when we talk about synthetic media, for example, I think we’re increasingly going to see even a YouTuber getting the same kind of effects that are only accessible now to Hollywood studios with multi-million-dollar budgets and teams of special-effects artists. It’s going to mean that creativity goes through the roof! A lot of the people I speak to on the startup side of AI-generated synthetic media say you can’t even imagine what the future is going to look like.
Virtual Worlds, targeted marketing — it’s going to be a huge boon for creativity.
The second consequence is that it is going to pollute an already corrupt and broken information ecosystem because for the past 10 years we’ve already been dealing with this huge crisis of misinformation, which increasingly has taken the shape of visually manipulated media.
But when AI-generated fake media gets into this polluted information ecosystem, the noise is going to get a lot busier, and our ability as consumers to distinguish what’s authentic from the noise is going to become increasingly difficult, especially because these AI-generated fakes are going to be, from a fidelity perspective, just like the genuine items.
No shared reality
Mark: As you are explaining this Nina, I thought of an article I read that said we are increasingly polarized because we no longer have a shared reality.
And what I mean by that is, 30 or 40 years ago, the shared reality was created by the daily newspaper and a couple of news networks. Everybody was kind of watching the same thing. There were certain standards for content.
Certain commentators and curators and reporters were trusted and at least you had a shared reality for discussion and debate. Today it’s so hard to even recognize what’s true. And I think what you’re pointing out here is, it’s going to get worse by some magnitude.
Nina: Absolutely. This is a trend that has been going on for a long while and has been accelerated in particular by the technology of the information age, the ability for anyone to exist in a silo when it comes to receiving their information and forming their worldview. Even an objective reality becomes a purely subjective experience.
We are already in a culture where people talk about things like “your truth or my truth.” There is an objective reality, but our ability to agree on what that is becomes increasingly partisan. In order to protect ourselves from deep fakes or AI-generated fakes, one of the first steps is inoculating the public so they know this kind of high-fidelity fake content exists before it becomes ubiquitous.
One consequence is a phenomenon perversely known as the “liar’s dividend”: if people believe that anything can be faked, then seeing is no longer believing, everything can be denied, and even authentic media can be decried as fake. The corrosion of reality and objective truth only becomes more profound.
The emergence of deep fakes
Mark: So what’s holding this back right now? You mentioned this could be five to seven years away. One expert in your book said it might be three to five. But some of the examples that you give in your book, which I looked up on the internet, are quite compelling! What’s keeping this from prevailing in the next 12 to 24 months?
Nina: Experts debate on how long that’s going to be. But I think we’ll definitely start seeing changes within the next three to five years.
This technology is so nascent. It’s only been about two and a half years since the first deep fakes started emerging from the cutting edge of AI research. It has grabbed so much public attention that there’s sometimes been a tendency to overstate how good deep fakes already are. There have been a lot of headlines about deep fakes ending democracy and so on, but that’s only because it’s such an interesting topic.
We’re not there yet, because the barriers to entry are still quite high, but they’re coming down rapidly. This technology is already being wrapped up in very accessible interfaces, like apps on smartphones, and I have no doubt that within the next five years these tools will be far more abundant.
There are many startups, and a lot of investment, focused on the generative side of AI, on creating AI-generated content. So I think it’s inevitable that it’s coming fast. I can’t even tell you where we’re going to be in 12 months, let alone in three years.
The positives of synthetic content
Mark: In my book, “Marketing Rebellion,” I emphasize that in the consumer world of today, the customer is the marketer. They’re the ones who are carrying our stories forward. So on the positive side, it’s kind of exciting to think about what happens if our consumers are telling big stories, Hollywood-level stories, about their experiences with our products and our services. So that’s certainly exciting.
You can also think about some of the problems this could create for marketers and business professionals as company content is held hostage by some of the corrupt people out there. So from a marketing perspective, there could be both positives and negatives.
Nina: Well, first let’s tackle the positives.
Some of the earliest applications we’ve seen for synthetic media are in creative advertising. There was a State Farm ad that promoted the Netflix documentary “The Last Dance,” about Michael Jordan and the Chicago Bulls in the ’90s. They took a piece of original ESPN archive footage and used AI to manipulate that clip to make it seem like the commentator was a fortune teller.
This shows the tremendous potential of synthetic media to be used not only in an advertising capacity but also for marketing.
Can you imagine creating personalized video content for every consumer? It sounds like something from the realm of science fiction, but it will probably arrive within this decade.
Another opportunity is the licensing of brand images. So for example, AI is very good at generating the likeness of celebrities or any human. Someone like Michael Jordan could just license his brand and marketers could use his image to deliver personalized content to consumers. The possibilities are endless.
When it comes to the negative side, a corroded information ecosystem filled with disinformation means that every brand and every business is also going to become a potential target of a disinformation campaign. And we’ve already seen how famous brands have been corrupted by being put in a negative context. Every single brand needs to have a crisis plan: not only social monitoring of what’s going on, but a plan for crisis communications in the event of a disinformation attack, which may include synthetic media.
How do we fight back?
Mark: Just as these images can be licensed legally for positive entertainment purposes, they can also be weaponized as a kind of terrorism. One of the things that was quite moving to me in your book is the example you provide of celebrity images being abused in horrible ways. Scarlett Johansson said, “Look, it’s impossible. There’s nothing you can do. I give up.” And my heart just sank thinking how an innocent person doesn’t deserve this sort of abuse … and this could happen to anybody if somebody has a grudge against you.
My favorite part of your book Nina was the last part where you talk about fighting back and some of the positive things that can be happening to protect us. I hope your next book is about blowing up that one chapter because that’s what we really need to start working on. What are some of the things that you’re encouraged by that could help us focus more on the positives? What are some of the things that we can be looking forward to in the next couple of years that are hopeful?
Nina: Well the good news is that there are many groups, organizations, and individuals already working in this space, trying to shore up the information ecosystem.
I think the first step is conceptualizing the problem: how is Russian interference related to fake news, related to AI-generated synthetic media, related to a failed coup in Gabon? All of that exists within the wrapper of this corroding information ecosystem. Once we understand the consequences, it’s easier to talk about how to fix the problem.
Broadly, that falls into two categories. There are a lot of technical solutions that will detect fakes, as well as technology that could be embedded in your devices to prove the provenance of authentic media. It will be increasingly important for brands to somehow watermark content from its inception to show that it’s not a fake.
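The provenance idea Nina describes boils down to binding content to its creator at the moment of publication, so any later alteration is detectable. Real provenance standards (such as C2PA) are far richer, but the core mechanism can be sketched with a keyed hash; the key name and sample content below are made up for illustration:

```python
# Minimal sketch of content provenance: sign content at creation with a
# secret key, verify it later. Illustrative only; real systems use
# public-key signatures and embedded metadata rather than a shared key.
import hashlib
import hmac

CREATOR_KEY = b"brand-signing-key"  # hypothetical secret held by the creator

def sign_content(content: bytes) -> str:
    """Produce an authenticity tag tied to the exact bytes published."""
    return hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content matches the tag issued at creation."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"official press release"
tag = sign_content(original)
print(verify_content(original, tag))             # -> True: authentic
print(verify_content(b"doctored release", tag))  # -> False: altered
```

The design point is that detection tools chase fakes after the fact, while provenance flips the burden: anything without a valid tag is treated as unverified by default.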
Next, you have to talk about building society-wide resilience. That is really a broad discussion because it relates to policy, it relates to regulation, and it relates to a networked approach between synthetic media creators, policymakers, and big tech companies.
It’s not something that one government or one part of society can tackle by itself and I think that because it’s so nascent, we still have this opportunity to formulate how synthetic media can be used positively while mitigating against its worst use cases. This technology is incredibly exciting in many ways, and it’s imperative to not throw out the baby with the bathwater, and say it’s all bad.
You and deep fakes
Mark: What can an individual do to learn more about this, to really support you in your activism and take individual responsibility for what might be happening in our future?
Nina: We need to understand the conceptual threat. I think that is the first step. With knowledge comes power. And I really think this is something that is going to have to be a society-led grassroots effort to try and correct the course when it comes to our corroding information ecosystem.
The second thing I would say to all individuals is to be critical — but not cynical — because I think that if you become cynical and just believe everything is fake, then we’ve lost the plot a little bit.
And of course, be vigilant. Understand that there are new ways in which your identity can be hijacked: your biometrics can be emulated by artificial intelligence and used against you in the most heinous ways, without your consent and without your knowledge, just from the digital footprint that almost everybody has online. And when the solutions come out, detection tools and provenance tools, implement them. It’s just like installing antivirus on your computer. This is the next threat coming down the line, so protect yourself when you can.