Artificial Intelligence Can Combat Deepfakes, Cybercrimes and Snooping (UF Explore)
When you can’t trust your own eyes and ears to detect deepfakes, who can you trust?

Perhaps, a machine.

University of Florida researcher Damon Woodard is using artificial intelligence methods to develop algorithms that can detect deepfakes — images, text, video and audio that purport to be real but aren’t. These algorithms, Woodard says, are better at detecting deepfakes than humans are.

“If you’ve ever played poker, everyone has a tell,” says Woodard, an associate professor in the Department of Electrical and Computer Engineering, who studies biometrics, artificial intelligence, applied machine learning, computer vision and natural language processing.

“The same is true when it comes to deepfakes. There are things I can tell a computer to look for in an image that will tell you right away ‘this is fake.’”

The issue is critical. Deepfakes are a destructive social force that can crash financial markets, disrupt foreign relations and cause unrest and violence in cities. A video, for example, that appears to show a member of Congress or even the president saying something outrageous and untrue can destabilize foreign and domestic affairs. The potential harm is great, Woodard says.