Utah Valley University students study how deepfakes affect viewers and voters

UTAH VALLEY UNIVERSITY, Utah — Three student organizations at Utah Valley University set out to investigate how deepfakes affect viewers, whether viewers can identify them and how viewers interact with them.

This comes after several pieces of misinformation related to political candidates and campaigns circulated online ahead of Election Day, including one involving Gov. Spencer Cox that surfaced earlier this year.

A deepfake is a video in which a person’s face or body has been digitally altered to appear as someone else using artificial intelligence.

The school's neuromarketing SMARTLab tracked the microexpressions of 40 participants using a technology platform called iMotions.

These participants were tested in front of a computer equipped with hardware and software for eye tracking and facial emotion analysis.

Lab results showed that participants exposed to fake content displayed higher levels of engagement and confusion in their microexpressions, but they did not report those feelings in post-test interviews.

Real video and audio evoked more typical emotional responses.

Another 200 participants took part in an online study that assessed their ability to recognize deepfakes in video and audio formats.

Participants were divided into groups at the start of the test, unaware that some of the content they watched was generated by artificial intelligence.

Subjects then rated the speaker in the video or audio on factors such as trustworthiness.

They were then informed that the aim of the study was to measure the impact of deepfakes and that some of the content may have been generated by artificial intelligence.

Next, they were asked to say whether they believed the media was real or fake and to rate their confidence in that judgment.

However, even after being told they may have encountered a deepfake, they had difficulty consistently identifying the fake content.

Across both real and fake video and audio, at least 50 percent of participants rated the media as “probably real.”

At least 57 percent were confident in their assessments, even though their odds of detecting a deepfake were roughly a coin flip.

“If you just need their image, if you just need their voice… each person’s image and voice is available on the Internet and available for free,” says Hope Fager, a national security studies student who helped lead the study. “It’s kind of scary how easy it is to find the information you need to pretend to be someone else.”

As a result, some campaigns are taking extra steps to limit voters' exposure to fake media.

Michael Kaiser is president and CEO of Defending Digital Campaigns, based in Washington, D.C.

“We talk to campaigns a lot about the issue of deepfakes and fraud, and this is another thing that happens in the space where someone deceives a campaign or a candidate and we work with them,” Kaiser says. “We have some tools that we provide to federal campaigns to help them monitor what’s happening online, identify deepfakes, identify places where people are trying to pretend to be an actual campaign, and help them take them down.”

As deepfake technology becomes more and more sophisticated, is there a way to distinguish fact from fakery?

“I think it’s really difficult,” Kaiser says. “I think there might be some technical things that people might see blurry here and there, maybe the background doesn’t look right, but it’s really hard to train people to see things like that, so I think you kind of have to trust your own instincts.”

Some social media sites know this and flag content that may seem suspicious.

“Verification checks to see whether something might be disinformation, and things like that can be very significant,” Fager says. “The mere idea that something could be a deepfake is enough to make people pause and start thinking critically. No one really thinks about this on their own, but if we can bring it to more people and say, ‘Hey, just be careful what you look at. Trust your gut.’”

Until more safeguards are created to weed out deepfakes, the responsibility begins and ends with the voters viewing the content between now and Election Day.

Stay on guard and be wary of what may not be true.

“The problem is that it’s happening today. This is no longer a far-fetched idea and we need to prepare,” says Fager. “We have to be aware of what we’re watching and think critically, which is a little more difficult when we’re scrolling through social media and everyone’s brains are turned off.”