The website, called exposing.ai, searches through public datasets to determine whether your Flickr photos were used for AI research. Software developers often use publicly available images to train their facial recognition systems. The practice may be legal, but some experts argue it's unethical.

“The fact that these photos are used without people’s knowledge is a significant privacy violation,” Thierry Tremblay, the CEO and founder of the database software company Kohezion, said in an email interview. “That’s a particular concern for minorities who could be profiled and targeted. Furthermore, users don’t necessarily consent to get scanned every time they go out in public.”

Flickr May Reveal More Than You Know

The exposing.ai website works by checking whether your photos appear in publicly available datasets, matching on Flickr usernames and photo IDs. All you have to do is enter your Flickr username, photo URL, or hashtag in the site’s search bar.

The site launched last month and is based on years of research into public image datasets, Exposing.ai’s creators wrote on the website. “Telling the complex story of how yesterday’s photographs became today’s training data is part of the goal of this ongoing project,” they said.

The site searches millions of records, but “countless more face recognition training datasets exist and are continuously being scraped from social media, news, and entertainment sites,” they wrote.
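Conceptually, that lookup is a simple matching problem. Exposing.ai’s actual index and code aren’t public, so the following Python sketch is purely illustrative: it assumes a hypothetical local file, dataset_records.csv, listing the Flickr URLs a dataset was built from, and extracts the username and photo ID each URL embeds.

```python
import csv
import re

# Flickr photo URLs embed the uploader's name and a numeric photo ID, e.g.
# https://www.flickr.com/photos/<user>/<photo_id>/
FLICKR_URL = re.compile(r"flickr\.com/photos/(?P<user>[^/]+)/(?P<photo_id>\d+)")

def load_index(path):
    """Build a set of (user, photo_id) pairs from a dataset's URL list."""
    index = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumes the CSV has a 'url' column
            match = FLICKR_URL.search(row["url"])
            if match:
                index.add((match["user"], match["photo_id"]))
    return index

def find_user(index, username):
    """Return every photo ID in the dataset credited to `username`."""
    return sorted(pid for user, pid in index if user == username)

index = load_index("dataset_records.csv")  # hypothetical metadata file
print(find_user(index, "example_flickr_user"))
```

Any hits mean photos credited to that username were included in the dataset's source list, which is the same signal exposing.ai reports.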

An Arms Race for Photos

The scraping of photos is part of an arms race among companies to develop better facial recognition. For example, Clearview AI scraped 3 billion images and took the practice a step further by building an AI app, Maple noted. The app acts as a search engine: a user can take a photo of someone, upload it, and see a list of public pictures of that person, along with links to where they came from.

“Interestingly enough, we see the most hesitation for this software at the government/law enforcement level, due to legalities and profiling concerns,” Laura Hoffner, a crisis manager at the risk consultancy firm Concentric Advisors, said in an email interview. “But that means the private industry is superseding the government in experience and access.”

Users who want to keep the photos they have already posted online private have limited options. “There isn’t much you can do other than take the nuclear option, that is, hire a lawyer and sue the company in question,” Maple said. “But of course, you’ve got to be dedicated and moneyed.”
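Clearview's internals are proprietary, but face search engines generally work by mapping each face to a numeric embedding and ranking indexed photos by similarity to the query. A minimal sketch of that ranking step, using made-up embeddings in place of a real face recognition model:

```python
import numpy as np

def cosine_top_k(query, gallery, k=3):
    """Rank gallery embeddings by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q
    top = np.argsort(scores)[::-1][:k]
    return list(zip(top.tolist(), scores[top].tolist()))

# Hypothetical 128-dimensional embeddings for 1,000 indexed photos.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 128))
# A "new photo" of the person in photo 42: their embedding plus small noise.
query = gallery[42] + rng.normal(scale=0.05, size=128)

print(cosine_top_k(query, gallery))  # photo 42 should rank first
```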

Protect Your Face

If you want to keep the photos you haven’t yet posted from being used for research projects, there are software tools that disguise them by making changes at the pixel level to confuse facial recognition systems, Maple said.

For instance, researchers at the University of Chicago have developed software called Fawkes to reduce the accuracy of the photo datasets that facial recognition tools gather from the web (a crude illustration of such a pixel-level change appears at the end of this section). However, Microsoft recently made changes to its Azure facial recognition platform “apparently designed to undermine the efficacy of the current version of Fawkes,” Maple said.

The best way to keep your photos private is to make sure they don’t get into circulation online, Sean O’Brien, principal researcher at ExpressVPN Digital Security Lab, said in an email interview. He suggested locking down your social media accounts by setting your profile to private, or even deleting social media altogether.

“We only have one face and need to treat it with more caution than a password,” O’Brien said. “It’s critical that consumers hold tech companies and governments accountable for implementing tech policy that protects us and resolves facial recognition’s privacy-related shortcomings.”
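To make the idea of a pixel-level change concrete: Fawkes itself computes targeted, model-aware "cloaks," but even a toy example shows what altering pixels means in practice. This sketch, which is not the Fawkes method, nudges every pixel by a few intensity levels using Pillow and NumPy; the file names are examples.

```python
import numpy as np
from PIL import Image

def add_pixel_noise(in_path, out_path, strength=4, seed=0):
    """Nudge each pixel by at most `strength` intensity levels (out of 255).

    A toy illustration of a pixel-level change only; real cloaking tools
    like Fawkes compute targeted perturbations against the feature
    extractors that recognition systems use, not random noise.
    """
    rng = np.random.default_rng(seed)
    pixels = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-strength, strength + 1, size=pixels.shape)
    cloaked = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(out_path)

add_pixel_noise("portrait.jpg", "portrait_cloaked.jpg")  # example file names
```

At this strength the change is hard for a person to notice, which is the point of such tools: the photo still looks like you, while the pixel data a scraper collects is subtly altered.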