UChicago graduate students develop software to evade facial recognition technology

An open-source program called “Fawkes,” developed by a UChicago research group, alters images in ways largely imperceptible to the human eye while rendering the faces they contain unrecognizable to facial recognition systems.

Facial recognition software is often trained by matching names to faces in images pulled from websites and social media. The goal is to develop software capable of correctly identifying images of the faces of people it has never seen before. This allows people to be easily identified whenever an image of their face is captured in public spaces, such as during a political protest.
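To make that concrete, here is a minimal sketch, in Python, of the kind of pipeline the article describes: scraped photos are labeled with names, one feature “template” is learned per person, and new faces are matched against the nearest template. Everything here is illustrative; `embed` is a stand-in for a real deep face encoder, not any vendor's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
ENCODER = rng.normal(size=(128, 64))  # stand-in for a pretrained face-embedding network

def embed(image):
    # A real recognizer runs the photo through a deep network; a fixed
    # random projection keeps this sketch self-contained and runnable.
    return image @ ENCODER

def train_recognizer(labeled_images):
    """labeled_images: {name: [flattened face photos scraped from the web]}."""
    # One "template" per person: the average embedding of their photos.
    return {name: np.mean([embed(img) for img in imgs], axis=0)
            for name, imgs in labeled_images.items()}

def identify(image, templates):
    # Identify a new face by the nearest learned template.
    feats = embed(image)
    return min(templates, key=lambda name: np.linalg.norm(feats - templates[name]))
```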

By modifying some of your facial features to resemble someone else’s, Fawkes’s “mask” prevents facial recognition software from training an accurate model of you. A facial recognition model is trained successfully when it associates your name with a distinct set of features and can then recognize you in future images. The Fawkes mask shifts your set of facial features toward someone else’s, depriving the software of a consistent set of features to learn. The mask is largely imperceptible to the human eye but misleading to machine learning models.
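A rough sketch of that masking idea, continuing the toy recognizer above: nudge the image so its features move toward a different person’s features, while capping how far any pixel may change. This is a simplification offered for intuition only; the actual Fawkes software bounds its perturbation with a perceptual (DSSIM) budget and optimizes against real deep feature extractors, neither of which is modeled here.

```python
import numpy as np

def cloak(image, target_features, encoder, budget=0.05, steps=200, lr=1e-3):
    """Perturb `image` so its features move toward `target_features`.

    `budget` caps the per-pixel change, a crude stand-in for the
    perceptual constraint that keeps the mask hard for humans to see.
    """
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||(image + delta) @ encoder - target_features||^2 w.r.t. delta.
        err = (image + delta) @ encoder - target_features
        delta -= lr * (2 * err @ encoder.T)
        # Project back inside the imperceptibility budget after each step.
        delta = np.clip(delta, -budget, budget)
    return image + delta
```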

The Fawkes project is led by two computer science Ph.D. students in UChicago’s Security, Algorithms, Networking and Data (SAND) Lab, Emily Wenger and Shawn Shan, who work with fellow UChicago Ph.D. student Huiying Li and UC San Diego Ph.D. student Jiayun Zhang. They are advised by SAND Lab co-directors Ben Zhao and Heather Zheng, professors in the Department of Computer Science.

Fawkes took inspiration from the concept of model poisoning, a type of attack in which a machine learning algorithm is intentionally fed deceptive data to prevent it from making accurate predictions. Poisoning attacks are usually malicious tools wielded by hackers. Shan asked, “What if we could use poisoning attacks for good?”
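As a toy illustration of poisoning “for good,” the sketches above can be combined: if the photos you post are masked, the template a scraper learns for “you” sits near a decoy identity’s features, so a clean, candid photo of you no longer matches it. All names and magnitudes below are purely illustrative.

```python
# Continues the embed/train_recognizer/cloak sketches above.
base = rng.normal(size=128)                              # "your" face
photos = [base + rng.normal(scale=0.05, size=128) for _ in range(5)]
decoy = 10 * rng.normal(size=64)                         # another identity's features
uploads = [cloak(p, decoy, ENCODER, budget=0.1) for p in photos]  # what you post

templates = train_recognizer({"you": uploads})           # what a scraper learns
candid = base + rng.normal(scale=0.05, size=128)         # a photo taken of you later

honest = np.linalg.norm(embed(candid) - np.mean([embed(p) for p in photos], axis=0))
poisoned = np.linalg.norm(embed(candid) - templates["you"])
print(poisoned > honest)  # True: the poisoned template no longer matches you well
```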

Developing an algorithm that modifies photos enough to confuse recognition systems while the changes go unnoticed by humans requires striking a delicate balance. “It’s always a compromise between what the computer can detect and what bothers the human eye.”

Wenger and Shan hope that, in the future, people will no longer be identifiable by governments or private actors based solely on images captured of them.

Since the lab published a paper on the program in the Proceedings of the 2020 USENIX Security Symposium, its work has been widely publicized. Wenger says some of the coverage has made Fawkes out to be a more powerful shield against facial recognition software than it actually is. “A lot of the media attention inflates people’s expectations [of Fawkes], which leads people to email us… ‘Why doesn’t this solve all of our problems?’” Wenger said.

Florian Tramèr, a fifth-year computer science Ph.D. student at Stanford University, wrote that data poisoning software like Fawkes gives users a “false sense of security.”

Tramèr has two main concerns: Fawkes and similar algorithms can do nothing about the unmodified images people have already posted to the internet, and facial recognition software developed after Fawkes could be trained to recognize faces in images with the distortions applied.

In their paper, Wenger and Shan address the first problem by suggesting that users create a social media account, under a different name, populated with masked images. These profiles, called “Sybil accounts” in computer security, trick a training algorithm into associating a face with more than one name.

But Tramèr said that flooding the internet with images masked under a different name is not going to help. “If Clearview [a facial recognition system] has access to the (Fawkes) attack, then [it] can easily train a model immune to the Fawkes attack.”

Tramèr is also not convinced that Fawkes can provide users with sufficiently strong protection against recognition software developed in the future. There is “no guarantee of how strong this perturbation will be in a year,” he said. Attempts to make one’s face unrecognizable in pictures could be thwarted by training next year’s algorithm on a set of photos masked by an older version of Fawkes.

Tramèr does believe, however, that masking could help someone escape detection at a known checkpoint, since the advantage in that setting always goes to the defending party. “If there’s facial recognition at the airport and you know it’s there, then every year you show up at the airport, you can come up with a new mask that’s better than the year before.”

Ultimately, though, Tramèr believes the use of facial recognition software can only be limited by policy changes. He sounded moderately optimistic, citing companies like Microsoft, Amazon, and IBM that have said they will not sell facial recognition software to law enforcement. One concern for these companies is that their models have proven less accurate at recognizing faces with darker skin than faces with lighter skin, which could enable police violence against Black people. Yet other companies, such as the doorbell camera maker Ring, continue to collaborate with law enforcement.

Wenger and Shan acknowledged there will always be a new facial recognition model that could defeat their latest attempt at masking. Still, they think Fawkes and other software that makes facial recognition harder are valuable. “We are increasing the costs for an attacker. If no one comes up with the idea, even if it is imperfect, no one ever moves forward.”

