Defenders show up to the war on deepfakes


Digitally altered and synthetic media are becoming more of a problem.  Openly available tools, including AI deep-learning models, make it easy to modify pictures and videos for distribution on the Internet.  Most edits are benign: clearing up acne, improving image lighting, creating a funny meme, or perhaps narrowing a waistline for aesthetic reasons.  More disturbing is the generation of videos of known personalities, making them appear to make caustic statements or take part in inappropriate activities.  These fakes have appeared in political posts, social satire, news media, and pornographic material, motivated variously by humor, vanity, vindictiveness, or the desire to sway public opinion.

The most malicious uses are just around the corner.  Cybercriminals, who innately understand the value of impersonation and counterfeit identities, are eager to use this technology to open entirely new and lucrative branches of scams, phishing, and identity theft.  Every day the technology for creating synthetic digital representations becomes more believable and more accessible, bringing it that much closer to the hands of criminals.

The societal problems are only beginning, as the tools to create fakes are far outpacing the capability to detect them.  Several organizations are working toward the goal of confidently identifying digital modifications in pictures, audio, and video.

Microsoft recently announced one such tool for analyzing videos, Video Authenticator, purposely released in advance of the U.S. elections to help media sites and social watchdogs detect misleading political deepfakes.  Microsoft Research knows its technology will be undermined soon, but having some tools to help identify the truth as the election cycle begins is better than nothing.
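
To make the idea of video analysis more concrete, here is a minimal, hypothetical sketch of how a per-frame "manipulation confidence" scan might be wired together.  This is not Microsoft's implementation or any real detector; the file name is made up and the scoring function is a crude stand-in heuristic where a production tool would run a trained classifier looking for blending boundaries and other artifacts.

```python
# Hypothetical sketch: sample frames from a video and assign each a 0-1
# "manipulation confidence" score. The scoring logic below is a placeholder
# heuristic, NOT a real deepfake detector.

import cv2          # OpenCV for frame extraction (pip install opencv-python)
import numpy as np


def score_frame(frame: np.ndarray) -> float:
    """Placeholder scorer: a real system would run a trained model here."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Laplacian variance is a crude proxy for unnatural smoothing/blending.
    noise = cv2.Laplacian(grey, cv2.CV_64F).var()
    return float(np.clip(1.0 - noise / 1000.0, 0.0, 1.0))


def score_video(path: str, sample_every: int = 30) -> list:
    """Read the video, score one frame out of every `sample_every`."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return scores


if __name__ == "__main__":
    per_frame = score_video("suspect_clip.mp4")   # hypothetical file name
    if per_frame:
        print(f"Highest manipulation confidence: {max(per_frame):.2f}")
    else:
        print("No frames read; check the video path.")
```

The point of the sketch is the workflow, not the math: real detection tools report a confidence score rather than a yes/no verdict, precisely because the generators keep improving and the detectors have to hedge.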

The war on deepfakes is just starting.  Technology innovation is advancing on both sides: to create realistic synthetic content and to detect such creations before they are accepted as truth.  Society will be caught in the crossfire, as we all must consider whether what we see and hear is actually real.

Interested in more? Follow me on LinkedIn, Medium, and Twitter (@Matt_Rosenquist) to hear insights, rants, and what is going on in cybersecurity.

Image Source: https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/


