New Research Undermines AI Image-scraping Classification


Privacy advocates, rejoice! And, oddly enough, social media sites should be excited too (both groups being happy at the same time does not happen often!). New privacy-enhancing tech is designed to undermine unauthorized image scraping and harvesting by AI systems.

It is perfect for social sites, where unscrupulous data harvesters are vacuuming up personal data and images for AI training and inference. The technology outlined in the Fawkes academic paper perturbs images just enough to throw off AI systems so they misclassify what they see. Instead of recognizing and identifying you, they either fail to decipher the image or conclude you are someone else entirely. A rough sketch of the idea follows.
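To make that concrete, here is a toy sketch of the cloaking idea (this is not the actual Fawkes implementation; the random linear "feature extractor", the sizes, and the epsilon below are purely hypothetical stand-ins): make tiny, bounded pixel changes that push an image's machine-readable features toward a different identity, while keeping the picture visually unchanged.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a face-recognition feature extractor: a fixed random
# linear projection from pixel space to an embedding space.
# (Fawkes targets real deep feature extractors; this is just a toy.)
D_PIXELS, D_EMBED = 1024, 32
W = rng.standard_normal((D_EMBED, D_PIXELS)) / np.sqrt(D_PIXELS)

def embed(image):
    return W @ image

# "Your" photo and a decoy identity the cloak steers the features toward.
your_photo = rng.random(D_PIXELS)
decoy_photo = rng.random(D_PIXELS)

# Cloaking step: nudge pixels so the embedding moves toward the decoy,
# clipping the perturbation so the photo still looks unchanged to a human
# (epsilon bounds the per-pixel change).
epsilon = 0.03
direction = W.T @ (embed(decoy_photo) - embed(your_photo))
cloak = np.clip(direction, -epsilon, epsilon)
cloaked_photo = np.clip(your_photo + cloak, 0.0, 1.0)

print("max pixel change :", np.max(np.abs(cloaked_photo - your_photo)))
print("embedding shift  :", np.linalg.norm(embed(cloaked_photo) - embed(your_photo)))

The printed numbers make the point: the per-pixel change is capped at a few percent of the pixel range, yet the features a recognizer would compare shift noticeably.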

This is also good news for social media sites like Facebook, Twitter, Instagram, and LinkedIn, since they want to protect their user data from outside systems that harvest information from their online properties. It protects the social sites' competitive data at the same time as it protects individuals' privacy. A double win for everyone, except those who are indirectly victimizing everyone else.


Check out the article: https://www.theregister.com/2020/07/22/defeat_facial_recognition/



9 comments

This should be incorporated into every social site! Protect users' images!!!


It is quite interesting that the faces are still 100% recognizable as the people to my eye, yet are claimed to be altered so that facial recognition (FR) cannot do the same.

Thanks!


Yes, it comes down to how deep learning models are trained: even the slightest coordinated changes to the pixels can make a big difference to what the model sees! A toy illustration is below.
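To illustrate what "the slightest coordinated changes" means, here is a toy numpy sketch (the linear "classifier" and its made-up weights are hypothetical stand-ins for a real face-recognition network's decision layer): applying the same tiny nudge to every pixel, with only the direction of each nudge coordinated, is enough to flip the prediction.

import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained model's final decision layer: a linear
# classifier over 1024 "pixel" features. Real networks are far larger,
# but the sensitivity to coordinated input changes is the same.
w = rng.standard_normal(1024)

def predict(x):
    return "person A" if x @ w > 0 else "person B"

photo = rng.random(1024)
score = photo @ w
print("original prediction :", predict(photo))

# Coordinated nudge: every pixel moves by the same tiny amount; only the
# sign of each move is chosen to push the score across the decision
# boundary. A real attack would also keep pixel values within [0, 1].
step = 1.01 * abs(score) / np.abs(w).sum()
nudged = photo - step * np.sign(w) * np.sign(score)

print("per-pixel change    : %.4f" % step)
print("perturbed prediction:", predict(nudged))

The per-pixel change works out to a tiny fraction of the pixel range, yet the label flips, which is the kind of sensitivity the Fawkes cloaking relies on.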
