Deepfakes Part 3: How Deepfakes Will Impact Society
What the discussions get wrong about the impacts of Deepfakes
Hey, it’s Devansh 👋👋
This article is part of a mini-series on Deepfake Detection. We have 3 articles planned-
Proving that we can leverage artifacts in AI-generated content to classify them (here)
Proposing a complete system that can act as a foundation for this task. (here)
A discussion on Deepfakes, their risks, and how to address them (here)
I put a lot of effort into creating work that is informative, useful, and independent from undue influence. If you’d like to support my writing, please consider becoming a paid subscriber to this newsletter. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.
PS- We follow a “pay what you can” model, which allows you to support within your means, and support my mission of providing high-quality technical education to everyone for less than the price of a cup of coffee. Check out this post for more details and to find a plan that works for you.
Executive Highlights (TL;DR of the article)
Deepfakes have garnered significant attention and sparked widespread concern about their potential misuse. However, in my opinion, the discussions around the risks from Deepfakes are incomplete (or wrong) since they exaggerate some risks while ignoring others. To round out our three-part series, I want to end with a discussion on what I believe are the true risks of deepfakes, and how we can tackle them. This is important because we continue to see policies and discussions by influential people that are often ineffective, useless, or even harmful, stifling innovation in the elitist guise of “safety”.
So…what are the risks associated with Deepfakes?
The most immediate and pervasive impact of deepfakes would be the cognitive overload and information fatigue they create. As the volume of potentially manipulated content increases, individuals face constant pressure to verify information, leading to mental exhaustion and possible decision paralysis. Social Media and Technology have already led to massive overstimulation, which Deepfakes (and click-farm style AI Generated Content) will only make worse.
This is a problem that parents, schools, and society in general should be actively thinking about. Our thought processes, support systems, and education need to be tuned to a world that demands the ability to learn constantly, prioritize what to focus on, and discard most stimuli. The way we see Education needs a rework- the emphasis on courses, books, and degrees (which all have their uses) creates learners who are too static (they learn one set of things and never fully deviate) and passive (they expect everything to be spoon-fed to them).
The best way to combat the information overload created by Deepfakes is to empower people to stand on their own, interact with the world, and take care of themselves. The reason I dislike the discourse around Deepfakes is that it’s too obsessed with gating “dangerous technology” or obsessively building the perfect technological solution while ignoring the personal development needed to empower people to handle themselves. Focusing mostly on the former while ignoring the latter is ineffective and patronizing, and this coddling will only raise wimps.
And that’s mostly my view on Deepfakes (it’s also why I stopped working on Deepfakes during my original stint in detection, ~2021). Technical or regulatory interventions can play a part, but I think the most important step is to help people be better.
To add some more nuance to this discussion, I will spend the rest of this article discussing the implications of Deepfakes on the following:
Political misinformation: The negative outcomes of this are often overstated. The real danger lies in the lack of media literacy and critical thinking skills, exacerbated by political polarization. The ability to create convincing fake political content isn’t as concerning as the public’s refusal to do some due diligence to verify what they’re fed.
Exploitation of public figures: Non-consensual use of deepfakes can dilute personal brands and harm fan relationships. Celebrities, athletes, and influencers may find their likenesses used to endorse products or express views without their permission, potentially damaging their reputations and earnings. This is why I believe that heavily AI-generated content should be labeled, and people featured in AI Ads must have given explicit approval for their appearance.
Entertainment industry power dynamics: Deepfakes might be misused by labels and executives to avoid paying artists, worsening existing imbalances. This could lead to inferior products for consumers and further exploitation of creative talent. However, AI and deepfake technology could also potentially improve working conditions by assisting with certain tasks. In either case, I believe that we need two major things. Firstly, we should have good regulations (and education) to help creatives understand important facets like image rights, copyright, IP ownership, etc. Next, we must use ethically created AI tools that either pay the people whose data was taken or provide some basic kind of attribution. I think Gen-AI would be a lot more ethical if it did either (we covered this in more depth here).
Legal complications: Deepfakes challenge the reliability of digital evidence in court, potentially slowing legal processes. New forensic techniques may be needed to authenticate digital content, complicating legal proceedings and potentially impacting the course of justice. I find this concerning because there’s already a huge backlog of cases, and the last thing we need is a slower judicial system. The slowdown will be especially problematic for underprivileged communities, which often have to spend much longer circling courts to get any kind of justice (this is extremely costly, and many get justice too late).
I’m not sure what we can do about this aside from using AI to speed up various legal processes. This is something I’ve had an interest in for a long time, but I don’t know enough about Law to provide any meaningful suggestions. Since we have a weirdly large number of lawyers in our cult, I’d invite you all to come on here and share your thoughts in guest posts. This is a very important problem, and I think we should work on fixing it together.
Scams targeting vulnerable individuals: Deepfakes provide a new tool for scammers, especially in targeting emotionally vulnerable people. However, the underlying issue is growing societal loneliness. The technology amplifies preexisting social problems rather than creating entirely new ones. Asking social media platforms to flag potentially Deepfake-based accounts/messages (which are used by many scammers) is a good start. Aside from that, fixing the underlying loneliness crisis is a much better investment.
Environmental concerns: The energy-intensive process of generating deepfakes will contribute to climate change. As the creation and distribution of deepfakes become more widespread, the associated increase in energy consumption may have significant environmental impacts.
Aside from direct energy costs, Deepfake generation will also require additional mining/distribution (which itself can cause problems) AND increase the wear and tear on electronic systems, boosting the production of E-Waste. All things that are not good.
This aspect of mainstream Deepfakes is often overlooked since the people worried about Deepfakes are mostly affluent, and the environmental impact will hit the people in the global south the hardest.
There is also the usage of Deepfakes by bullies. We will not touch on that b/c, more often than not, these deepfakes are a problem of malice rather than authenticity; and b/c I couldn’t tell you the first thing about what it would take to stop bullying (unfortunately, I was a massive douche to a lot of people growing up).
If these ideas interest you, keep reading. As always, I’m excited to hear what you have to say, so feel free to reach out through my social media (below) or through email (devansh@svam.com).
I provide various consulting and advisory services. If you‘d like to explore how we can work together, reach out to me through any of my socials over here or reply to this email.
Deepfakes and their Impact on Politics
Say you saw a video of your most hated political candidate talking about the importance of kicking puppies, punching grandmas, and selling naughty children to human traffickers. This video looks realistic enough, and it passes through any advanced Deepfake detector w/o issues. Should you believe it right away? Maybe pat yourself on the back for identifying how evil this politician is and how all of their followers are anti-human sheep with zero critical thinking? Share this with your political groups to listen to the satisfying chorus of likes, “Amen”s, and “That’s Right”s?
What I hope you’d do is ask for a source. Verify the information yourself. Googling takes a minute. Something this major will have lots of coverage, clear indications about when/where things happened, etc. Just as with other forms of social media misinformation, your best course of action is to slow down and not let yourself be rage-baited. Verify a bunch of times with different sources.
In doing so, you make the quality of the deepfake irrelevant. No matter how good the deepfake is, you won’t fall for it. If we make this standard practice, then we can cut off most deepfake-related political misinformation and its negatives right there.
But what about more subtle variations? Maybe there’s a deepfake of a random thinker arguing for some policy/endorsing some product. This is often used as a way of generating social proof for something, with the goal of manipulating you.
In this case, you have two options-
You believe the video as is b/c that’s what people are saying.
You analyze the arguments and make up your mind accordingly.
The former isn’t good, irrespective of whether the video is a Deepfake. And in the latter case, the deepfake-ness of the source doesn’t matter- only some skepticism and a willingness to decide things for yourself do.
Yes, this requires some effort to educate yourself. Yes, we will be wrong or have incomplete information on various issues. But this has always been the case. Deepfakes don’t change that. If you approach them with the same degree of skepticism as normal internet misinformation, you won’t be that much more susceptible. The answer to Deepfake-backed misinformation is the same as for other social media (SoMe) misinformation- teach people to learn for themselves, think critically, and engage with a broad set of sources.
We fail with Deepfakes b/c we fail with SoMe, and we are resorting to the same ineffective fixes for both- censorship, overreliance on platforms to make us feel good, and an abdication of personal responsibility.
The best regulation will, therefore, focus on equipping us with the skills needed to navigate this. It’s not easy, but every generation has a struggle, and this is ours (and tbh, I’d take this over others). Once we have trained people, the regulatory interventions and technical solutions become nice-to-haves. Without that foundation, they’re about as effective as Man United’s transfer policy (I would have said Chelsea, but I’m the conductor of the Cold Palmer hype train).
With that covered, let’s talk about one group that might be affected by Deepfakes.
Deepfakes and Public Figures
Given how important a public figure’s “brand” or persona is, this is definitely something we should look out for. The non-consensual use of deepfakes can have several damaging consequences:
Brand Erosion: Deepfakes can erode carefully cultivated personal brands. Imagine a beloved celebrity seemingly endorsing a controversial product or political stance against their will. This misrepresentation can alienate fans, damage partnerships with existing brands, and hurt their long-term earning potential.
Damaged Fan Relationships: For celebrities, athletes, and influencers, the trust and connection they build with their fan base is invaluable. Deepfakes can shatter that bond, creating confusion and distrust. This erosion of trust can be permanently damaging.
Financial Harm: Deepfakes can directly impact an individual’s financial well-being. If a likeness is used to endorse a product without their knowledge, they not only miss out on potential earnings from a legitimate endorsement but also risk being associated with a product or brand they wouldn’t normally choose to support.
In the face of these risks, it is crucial to advocate for measures that protect individuals and ensure the transparency of AI-generated content:
Mandatory Labeling: Requiring clear and prominent labeling of AI-generated content empowers viewers to make informed decisions. When people know they are watching a deepfake, they can evaluate the content with a critical eye, understanding that it may not reflect the views or actions of the person depicted.
Explicit Consent: Require explicit consent from individuals before their likeness can be used in deepfakes, particularly in commercial settings like advertisements. This ensures that people have control over how their image is used and can prevent unwanted associations.
Technological Solutions: Combine labeling and consent requirements with good-quality Deepfake detectors that can flag undisclosed synthetic content (a toy sketch of how these pieces fit together follows below).
This works b/c of two things-
In adverts- the people using the celeb likeness will probably not go out of their way to break the law b/c the ROI is too low.
In political cases- you’re hopefully making up your own mind and not just following your fav influencer.
Now, there are some bad actors who will try to work around these rules and operate illegally. This can’t be helped entirely. Increasing awareness of Deepfakes and encouraging people to always do some research before accepting important claims is very important here.
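To make the “labeling + consent + detection” combination concrete, here is a minimal sketch of how a platform might triage an uploaded ad. Everything in it is a hypothetical placeholder- the metadata fields, the moderation flow, and the stub detector are illustrative assumptions, not any real platform’s API (a real detector would be a trained classifier, like the artifact-based one from Parts 1 and 2 of this series):

```python
# A minimal sketch of the "label first, detect second" flow described above.
# All names here are hypothetical placeholders, not a real platform's API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModerationResult:
    label: str                     # "labeled_ai", "likely_deepfake", or "passed"
    score: Optional[float] = None  # detector confidence, if the detector ran

def moderate_ad(
    metadata: dict,
    pixels: bytes,
    detector: Callable[[bytes], float],  # returns P(synthetic); stub below
    threshold: float = 0.9,
) -> ModerationResult:
    # Step 1: honor explicit labels/consent records if the uploader provided them.
    if metadata.get("ai_generated"):
        if metadata.get("subject_consent_on_file"):
            return ModerationResult("labeled_ai")       # disclosed + consented: show with label
        return ModerationResult("likely_deepfake")      # disclosed, no consent: block/review
    # Step 2: no disclosure, so fall back to a detector as a safety net.
    score = detector(pixels)
    if score >= threshold:
        return ModerationResult("likely_deepfake", score)
    return ModerationResult("passed", score)

# Stub detector so the sketch runs end to end.
fake_detector = lambda pixels: 0.97 if b"synthetic" in pixels else 0.05

print(moderate_ad({"ai_generated": False}, b"synthetic frame data", fake_detector))
```

The point of the ordering is the one made above: explicit labeling and consent handle the honest majority cheaply, and the detector is only a backstop for the bad actors who skip disclosure.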
Speaking of public figures, the impact of Deepfakes on the entertainment industry is worth studying.
The Entertainment Industry and Deepfakes
The entertainment industry is no stranger to power imbalances, with artists, musicians, and athletes often struggling for fair compensation and control over their work. Deepfakes could exacerbate this problem:
Bypassing Entertainers’ Rights: Record labels, sports teams, and other industry entities might exploit deepfakes to create “virtual performances,” “endorsements,” or other content without paying entertainers or seeking their permission. This strips them of their agency and the ability to negotiate fair compensation for their talent and contributions. This happens b/c many musicians and athletes sign away their image rights early in their careers, often without fully understanding the long-term implications.
Exploitation of Talent: Deepfakes could be used to manipulate and control entertainers, potentially coercing them into unfavorable contracts or unwanted projects. This could create a hostile environment where talent is further exploited and silenced. We’ve seen this in other fields, where management will often put money into mediocre AI projects to cut costs- leading to worse services for the customer and a bad working environment for employees.
However, what I find interesting about the industry is that it’s not all doom and gloom. Deepfake technology might also have some serious benefits:
Empowering Entertainers: Ethically developed AI tools could empower artists, musicians, and athletes by automating tedious tasks like social media content creation, freeing them to focus on their creative or athletic pursuits. For example, AI could assist with audio editing and visual effects, serving as a valuable tool for improvement. I’ve already seen some very creative examples, such as the YouTuber Ejiogu Dennis, who uses AI to make hilarious skits featuring pop-culture characters. If AI can enable fantastic creators like Gravemind (seriously, one of the greatest channels on YouTube) to make more videos/put less effort into tedious tasks, then that is a huge contribution.
Leveling the Playing Field: AI-powered platforms could enable independent entertainers to create and distribute their work without relying on traditional gatekeepers. This could democratize the industry, giving more individuals a chance to reach audiences and earn a living from their talents. Examples given above.
Clearly, Deepfakes (and Deepfake-related tech) can be both good and bad for the entertainers in the industry. To minimize the negatives and maximize the positives, we’d need the following-
Education and Awareness: Entertainers need to be educated about their rights, including image rights, copyright, and intellectual property ownership. This knowledge empowers them to negotiate fair contracts and protect themselves from exploitation.
Fair Compensation Models: If an entertainer’s data is used to train an AI model, they should be compensated for their contribution. This could involve royalty payments, an outright payment to buy the rights to that work, attribution, or other fair compensation mechanisms that recognize the value of their work in creating the AI’s capabilities. I don’t like how Generative AI benefits everyone except the people whose work is used to create it. We’ve established how important high-quality data is, and the creators of the data should be given some benefit.
Regulation and Oversight: Governments and industry organizations need to develop regulations and ethical guidelines for the use of AI in the entertainment industry. These guidelines should prioritize transparency, fairness, and the protection of entertainers’ rights, particularly for those who have signed away their image rights.
I’m going to skip the following-
Deepfakes and the legal system- As stated, I don’t have too many intelligent things to say, both on the problem and its solutions. I invite you to share your thoughts.
Deepfakes and lonely people- The main issue here is combining awareness with taking active steps to ensure that we can develop strong emotional support systems and communities that will stop people from being vulnerable to these scammers.
Let’s end on a discussion of the environmental impact of deepfakes.
Deepfakes and the Environment
The creation of deepfakes is not a simple process. It involves complex algorithms, vast amounts of data, and powerful computing resources. This translates to a significant energy demand, contributing to several environmental concerns:
Increased Carbon Footprint: Deepfake generation is very costly b/c it involves complex processing + video generation. This can add substantially to the environmental load (some toy arithmetic after this list makes the scale concrete). This holds true even if you move the grid to something “green” or renewable-based (as many data centers claim to be). As we covered in our investigation into the business of climate change, the production of important components of green energy has major sustainability issues-
However, there are critical sustainability issues connected to the production of wind turbines, solar photovoltaic modules, electric vehicles and lithium-ion batteries. These include the use of conflict minerals, toxicity, and finite availability or supply chain governance risks of rare earth elements, cobalt, and lithium, that need to be taken into consideration. “Conflict minerals” refer to tantalum, tin, tungsten and gold, and their current mining is frequently linked to human rights violations and the financing of violent conflicts.
This is to say nothing about offline/local systems, which might be used by certain malicious actors to avoid detection. In this case, your energy comes with a hefty carbon bill.
Resource Depletion: The hardware required for deepfake creation, such as GPUs (Graphics Processing Units), is resource-intensive to manufacture and requires rare earth elements. The extraction and processing of these materials can have detrimental environmental impacts, including habitat destruction and pollution.
E-Waste Generation: As technology advances rapidly, older hardware used for deepfake generation quickly becomes obsolete (or runs down due to wear and tear). This contributes to the growing problem of electronic waste (e-waste), which poses a significant environmental hazard due to the toxic materials it contains.
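To give a rough sense of scale for the carbon-footprint point above, here is some back-of-envelope arithmetic. Every number in it is an assumption I’m making up purely for illustration- GPU power draw, generation time, grid intensity, and daily volume all vary enormously in reality:

```python
# Back-of-envelope arithmetic for the "increased carbon footprint" point.
# Every number is a placeholder assumption, not a measurement.
gpu_power_kw = 0.35          # assumed draw of one consumer GPU under load, in kW
hours_per_video = 0.5        # assumed generation time for one short deepfake clip
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity (kg CO2 per kWh)
videos_per_day = 1_000_000   # assumed global daily volume of generated clips

energy_kwh = gpu_power_kw * hours_per_video * videos_per_day
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"{energy_kwh:,.0f} kWh/day -> {emissions_tonnes:,.0f} tonnes CO2/day")
# 175,000 kWh/day -> 70 tonnes CO2/day under these toy assumptions
```

Even under these deliberately modest assumptions, casual mass generation adds up quickly- and none of this counts the manufacturing and e-waste costs in the list above.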
As deepfake technology becomes more accessible and widespread, the environmental impact is likely to escalate:
Increased Demand: The demand for deepfakes, both for legitimate purposes and malicious use, is expected to grow. This will drive the need for more computing power and energy, exacerbating the environmental impact.
Higher Resolution and Complexity: Deepfakes are becoming increasingly sophisticated, with higher resolutions and more complex algorithms. This requires even more processing power and energy, further straining resources and increasing emissions.
Unregulated Usage: The lack of comprehensive regulations and ethical guidelines regarding the creation and use of deepfakes could lead to unchecked energy consumption, with little regard for the environmental consequences.
Soo…how do we deal with this? To address the environmental impact of deepfakes, several strategies can be considered:
Education- By making people more robust to Deepfakes, you reduce the payoff for malicious actors, which in turn cuts the environmental load. Promoting responsible and ethical use of deepfake technology can help curb unnecessary energy consumption. This includes limiting the creation of deepfakes to legitimate purposes. These are probably the best solutions to keep impacts down.
Energy Efficiency: Investing in more energy-efficient hardware and software for deepfake creation can significantly reduce energy consumption and emissions. One promising example is the Matrix Multiplication Free LLM, an approach that might be extended beyond language models to cut costs significantly (a minimal sketch of the core idea follows after this list).
Renewable Energy Sources: Transitioning data centers and computing facilities to renewable energy sources, such as solar or wind power, can help decarbonize the process. While they have their problems, these are still better than the alternatives. And while I have your attention, go nuclear. The fear-mongering around nuclear energy is ridiculous and needs to be stopped.
Regulation and Standards: Developing regulations and standards for the energy efficiency of deepfake generation/Deep Learning can incentivize the industry to adopt more sustainable practices.
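For intuition on why the matmul-free direction saves energy, here is a toy sketch of its core idea- constraining weights to {-1, 0, +1} so a linear layer needs only additions and subtractions instead of floating-point multiplies. This is my illustrative reconstruction of the principle, not the actual MatMul-free LLM code:

```python
import numpy as np

# Toy illustration of "matmul-free" ternary weights: with weights in
# {-1, 0, +1}, a linear layer reduces to adds and subtracts.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))   # ternary weight matrix, values in {-1, 0, 1}
x = rng.standard_normal(8)             # input activations

# Standard path: a full matrix multiply (one multiply per weight).
y_matmul = W @ x

# "Multiplication-free" path: add where w=+1, subtract where w=-1, skip w=0.
y_addsub = np.array([x[w_row == 1].sum() - x[w_row == -1].sum() for w_row in W])

assert np.allclose(y_matmul, y_addsub)  # identical outputs, cheaper arithmetic
print(y_addsub)
```

Additions are far cheaper than multiplies in both silicon area and energy, which is why this family of techniques is interesting for shrinking the footprint of generative workloads.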
That is my overall assessment of deepfakes and their true risks to society. What do you think? How scared are you of deepfakes, and how do you think they should be handled? I would love to hear your thoughts.
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
Check out my other articles on Medium. : https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819