As Artificial Intelligence (AI) continues to evolve, deepfakes are becoming more and more common.
A deepfake is a digital photo, video or sound file of a real person that has been edited to create an extremely realistic but false depiction of that person doing or saying something that they did not actually do or say.
Deepfakes are created using artificial intelligence software that draws on a large number of photos and video recordings of a person to model and generate deepfake content.
When an individual creates a social media account and uploads pictures and videos, they are putting content out that can then be harvested and used to create deepfakes.
These deepfakes have the potential to cause significant damage.
“Deepfakes can be used as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation and harassment,” stated an article from eSafety.
While online apps such as Deepfakes Web, FaceApp and DeepFaceLab are gaining popularity in the tech and online worlds, there are still many people who are unfamiliar with deepfakes and unaware of the potential risks they pose.
“I have no idea what a deepfake even is,” sophomore Amy Goodman said.
Beyond the general lack of knowledge about deepfake technology, there is the problem of actually detecting a deepfake when a person sees one.
A new study conducted by Dr. Klaire Somoray and Dr. Dan J. Miller of James Cook University and published in Computers in Human Behavior found that even when participants were given specific training on how to spot deepfake videos, they correctly identified only twelve out of twenty deepfakes. According to the authors, this finding “cast[s] doubt on whether simply providing the public with strategies for detecting deepfakes can meaningfully improve detection.”
To address the emerging threats that deepfake technology poses, tech companies such as Optic and Intel, with its FakeCatcher tool, are continuing to develop and improve deepfake detection technologies. These efforts focus on determining whether audio, video or chatbot-generated text was produced by AI.
“I think detection technology will probably catch up as AI advances, but this is an area that requires more investment and more exploration,” said former Google Trust and Safety Lead Arjun Narayan.