Deepfakes And The Human Psyche

Deepfakes are drastically changing how humans perceive reality. What is unreal becomes real.

What is impossible becomes possible – such as the widely shared video in which Facebook founder Mark Zuckerberg appears to brag about having total control over billions of people’s stolen data. You may have seen that clip, but we’re here to tell you it never happened. If this is your first time hearing about deepfake technology, the term refers to photos or videos created using artificial intelligence, or AI, to make things appear real, or appear to have happened, when in fact they aren’t and didn’t.

Alarming? According to a blog post by ExpressVPN, one of the most concerning attributes of deepfakes is their potential for unethical and malicious use, like spreading fake news or impersonating someone to commit crimes.

However, despite these fears, many experts say deepfakes could also work in our favor and offer real advantages. For businesses, for instance, synthetic video can cut production costs and help campaigns reach wider audiences.

So, in this piece, we will take a deeper look at this technology and how it interacts with the workings of the human mind. This time, we’re dealing with science. Read on!

The Human Brain Can Subconsciously Detect Deepfakes

Research conducted at the University of Sydney suggests our brains can subconsciously spot deepfakes, even when the conscious mind is deceived.

Neuroscientists found that the human brain can detect fake faces generated using AI, even though people could not reliably say which faces were real and which were fake.

Monitoring subjects’ brains with electroencephalography, or EEG – a test that records electrical activity on the surface layer of the brain – the researchers found that human brains encode and interpret artificially generated images, such as deepfakes, differently from images of real faces.

The researchers suggest EEG could eventually be built into other equipment, such as EEG-enabled helmets. In use, these could help prevent crimes like bank heists and corporate fraud, in which deepfakes are often deployed for personal gain.

In the study, EEG signals detected deepfake content 54 percent of the time. Yet when people were asked to identify the same deepfakes verbally, the success rate was only 37 percent. Interesting, right? Deepfakes may be created by computer technology, but somehow they leave “fingerprints” that the brain can pick up.
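A rate like 54 percent only means something relative to the 50 percent you would expect from pure guessing, and how convincing that gap is depends on how many trials were run – a number not given here. As a rough illustration (the trial counts below are hypothetical, not the study’s actual design), here is an exact one-sided binomial calculation in Python showing how the same 54 percent rate looks more or less like chance depending on sample size:

```python
from math import comb

def binom_p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Exact probability of k or more successes in n trials under chance rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical trial counts: what a 54% detection rate would imply at each size.
for n in (50, 200, 1000):
    k = round(0.54 * n)  # number of correct detections at a 54% rate
    print(f"n={n}: P(at least {k} correct by chance) = {binom_p_at_least(k, n):.4f}")
```

The smaller the (hypothetical) number of trials, the easier it is for chance alone to produce a 54 percent hit rate; with many trials, the same rate becomes strong evidence of genuine detection.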

The researchers ran two experiments: one behavioral, the other using neuroimaging. In the behavioral experiment, participants viewed 50 images – a mix of real faces and computer-generated fakes – and were asked to spot which were which.

Next, the researchers brought in a new group of participants who viewed the same set of images, this time while being monitored with EEG neuroimaging.

Comparing the results of the two experiments, the researchers found that the human brain is better at spotting deepfakes than the person’s eyes. The research suggests that some deepfakes fail to fully deceive us despite their sophistication. It also suggests that tools like EEG-enabled helmets could reliably distinguish real from fake, helping society curb deepfakes used for unscrupulous purposes.

Deepfake Technology And The Human Psyche

The University of California, Irvine’s Elizabeth Loftus, who pioneered research on false-memory formation in the 1990s, has warned about the technology’s capacity for abuse.

Loftus said, “The potential for abuse is so severe. Once you expose people to such a powerful visual presentation, how do they get it out of their minds?”

Here’s the catch. Despite studies suggesting our brains can fight back, and despite tools like EEG that can help detect what’s real and what’s not, the human mind remains vulnerable to forming false memories. Falsified content spreads like a virus, which means AI-generated media like deepfakes will only make planting false memories easier.

Take, for example, a fake image of former United States President Barack Obama shaking hands with the former president of Iran, Mahmoud Ahmadinejad. The scene is intriguing given the tensions between the United States and Iran. But it never happened – the image is entirely fake.

In 2010, the online magazine Slate ran a project around this image, asking about 1,000 of its readers whether they remembered seeing it. Surprisingly, about 21 percent said they did. More disturbing still, another 25 percent said they remembered the event actually happening, though they could not recall seeing the photograph itself.

False Memories

Here’s what scientists say: human memory does not work like a videotape or a digital recording. Though it can remember, it does not simply rewind to the moment something happened and relive it. Instead, human memory is constructed.

Remembering is a tricky process. When a person tries to recall something, they have to piece it together from disparate details in the mind. Some of what ends up in the recollection is true, but there is a kind of “laziness” at work: when people reminisce, the brain grabs only the most accessible information it can find, and information learned since the event gets drafted in to fill the memory gaps.

Rather than a videotape, human memory is more like a video editor. On a whim, this editor splices bits of truth with whatever is handy, and what’s handy may be biased or an entirely new piece of information. It also boils down to familiarity: the more familiar a person is with an idea, the more readily it settles into memory as absolute truth.

In Conclusion

Our minds are powerful. The subconscious is powerful. It can identify what’s genuine and what’s not. However, deepfake technology threatens this capability – and with it, how we perceive the world.

Still, as we’ve seen, the human mind is susceptible to being tricked by deepfakes. That makes them a ready vehicle for spreading false information, manipulating public opinion, and harming people’s reputations and privacy.

Therefore, there is a dire need to raise awareness and educate people on identifying and mitigating the risks deepfakes pose, and to develop more technologies designed to spot and counteract their adverse effects. The beneficial uses of deepfakes are worth preserving, but their harms must be curbed.

Overall, individuals, organizations, businesses, and governments must collaborate to address these complex issues and ensure the responsible use of deepfake technology.