Is That Guacamole Actually My Mother?

Feature

MATTHEW HUTSON

THE FAULTY PROMISE OF FACE RECOGNITION SYSTEMS

Last month, U.S. Customs and Border Protection began testing automatic facial identification at 14 airports, and two days later it snagged a man from the Republic of Congo carrying a fake French passport. The incident was seen by some as a reassuring demonstration of security technology, by others as a frightening peek at a Big Brother society, and by all as the future. 

But the fraudulent Frenchman might have slipped past had he been wearing some computer-confounding eyewear or an engineered baseball cap, two ways that researchers have come up with to fool face recognition systems. More disturbing still, researchers might someday make security cameras play back someone or something that wasn’t there at all. Welcome to a Clark Kent world where wearing an algorithmically designed pair of glasses can let you impersonate almost anyone alive. What the camera sees is not necessarily what’s there.

 
 

The better artificial intelligence gets, the better AI gets at attacking AI. Perhaps nowhere is that more critical than in the use of neural-network-powered monitors to scan crowds—a controversial application that can be used both to protect airports and to shield China’s authoritarian government from political dissent. As face-recognition systems spread, so do strategies to defeat them.




Early attempts to defeat facial-recognition algorithms were low-tech affairs. In 2010, the artist Adam Harvey began a project called CV Dazzle, suggesting avant-garde hairstyles and makeup patterns to make faces unrecognizable to an algorithm. You might grow long blue bangs to cover the left half of your face and adorn your right cheekbone with dozens of gems. But the looks only work for those comfortable dressing as an ’80s pop star. In 2013, Harvey began developing another project, called HyperFace, an abstract textile pattern that looks vaguely like a bunch of pixelated faces. The idea is that if a security camera is looking for a face, it will be distracted by one of the faces on, say, a bandana.

These are instances of “dodging,” fooling a program into thinking you are someone else or not recognizing a face at all. More sophisticated solutions allow you to algorithmically impersonate a particular person. Of course you could simply wear a mask or hold up a photo, but some algorithms use liveness detection—looking for blinking or movement—to tell if they’re seeing an actual face. You could also wear makeup to look like someone else, but dramatically changing your appearance can draw unwanted attention.

Newer attacks on facial recognition AI involve what are called adversarial examples. These are images, sounds, or other inputs that are slightly modified so as to appear nearly identical to people but are nonetheless misclassified by AI. You can change some pixels in a photo of a cat, for example, so that it still looks like a cat to you and me but to an algorithm looks like guacamole. Such AI illusions point to the mysterious ways that machine learning algorithms operate. 

Most state-of-the-art image recognition systems use neural networks, algorithms with layers of small computing elements that pass information along from one layer to the next. Such neural networks are trained using many labeled images, adjusting their equations after each incorrect guess. 
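To make that concrete, here is a minimal training-loop sketch in PyTorch. The network shape, image size, and number of identities are all made up; it illustrates the general recipe, not any system described in this article.

```python
import torch
import torch.nn as nn

# A toy classifier: layers of small computing elements passing information
# forward. Real face-recognition networks are far deeper; the training
# recipe, though, is the same. All sizes here are invented for illustration.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),          # scores for 10 hypothetical identities
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for a labeled training set: random "photos" and identity labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 10, (32,))

for epoch in range(5):
    logits = model(images)            # the network's guesses
    loss = loss_fn(logits, labels)    # how wrong those guesses are
    optimizer.zero_grad()
    loss.backward()                   # trace the error back through the layers
    optimizer.step()                  # "adjust the equations" a little
```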

 
 

The better artificial intelligence gets, the better AI gets at attacking AI.

 
 
 

There are two types of attacks on such algorithms. In a “white box” attack, you know the algorithm’s internal workings and typically have an exact replica to play with and reverse engineer. You can write an attack algorithm that, for any given image—of a cat, say—changes each pixel by the least amount necessary while maximizing the chance that the recognition algorithm will label it guacamole.
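A rough sketch of such a white-box attack, in PyTorch: a gradient loop that nudges pixels toward a chosen target label while capping how much any pixel may change. The model, class index, and step sizes are placeholders, not the exact method of any paper discussed here.

```python
import torch
import torch.nn as nn

def targeted_attack(model, image, target_class, eps=0.03, steps=40, lr=0.005):
    """Sketch of a projected-gradient, white-box targeted attack.

    image:        (1, 3, H, W) tensor with values in [0, 1]
    target_class: the label we want the model to report (e.g. "guacamole")
    eps:          largest change allowed per pixel, so the photo still looks unchanged
    """
    model.eval()
    perturbation = torch.zeros_like(image, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([target_class])

    for _ in range(steps):
        logits = model((image + perturbation).clamp(0, 1))
        loss = loss_fn(logits, target)        # low loss = model favors the target label
        loss.backward()
        with torch.no_grad():
            perturbation -= lr * perturbation.grad.sign()  # step toward "guacamole"
            perturbation.clamp_(-eps, eps)                 # keep each change imperceptible
        perturbation.grad.zero_()

    return (image + perturbation).clamp(0, 1).detach()

# Hypothetical usage, with any image classifier and class index:
# adversarial_cat = targeted_attack(model, cat_image, target_class=GUACAMOLE_INDEX)
```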

“Black box” attacks, in which you don’t know the target algorithm’s inner workings and test inputs on it to refine your attack, are a bit trickier. One solution is to try to recreate the target algorithm by training another algorithm to spit out the same outputs given the same inputs, and then poke around inside that algorithm. In any case, a truly secure system should withstand even white box attacks, according to Dawn Song, a computer scientist at the University of California, Berkeley. “In security we have a saying,” she says. “We don’t want to have ‘security through obscurity.’”
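One way to picture the surrogate trick: query the black box, train a local stand-in model to imitate its answers, then run a white-box attack (like the sketch above) against the stand-in. A rough outline, with the query function, architecture, and class count all hypothetical:

```python
import torch
import torch.nn as nn

def train_surrogate(query_black_box, probe_images, num_classes=10, epochs=20):
    """Fit a local copy of an unknown classifier by imitating its outputs.

    query_black_box: hypothetical function sending one image to the target
                     system and returning its predicted label (an int)
    probe_images:    (N, 3, 64, 64) tensor of images we are allowed to submit
    """
    # Ask the black box for its verdict on every probe image.
    labels = torch.tensor([query_black_box(img) for img in probe_images])

    # A stand-in architecture; we do not know (or need to know) the real one.
    surrogate = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
        nn.Linear(256, num_classes),
    )
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        loss = loss_fn(surrogate(probe_images), labels)  # match the black box's answers
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return surrogate  # now probe or attack this copy with white-box methods
```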

The first methods for creating adversarial examples required knowing the entire input to an algorithm—every pixel of an image—but if you want to fool a system in the real world into misidentifying cats, you can’t modify the whole scene. That led to more practical attacks. Song’s group found ways to place small stickers on real stop signs, for example, so that in video frames from many angles, a computer vision system reads them as 45-mile-per-hour speed limit signs. Another group 3D-printed a turtle with a specific pattern on its shell so that from any angle an algorithm thought it was a rifle. Such practical attacks open the door to the real-world duping of facial recognition systems.

One of the first such practical or “physical” attacks was proposed in 2016. The researchers first trained a neural network to recognize 140 celebrities plus three researchers, using about 40 photos of each person. The goal was to find glasses that the researchers could don so that a face-recognition system identified them as a celebrity. They began with a digital template for “geek” glasses frames and placed it on digital photos of themselves. For each researcher, they selected a random celebrity or two and created adversarial examples, modifying only the glasses frames in the image so that an image of the researcher would be mistaken for the celebrity. Finally, they printed the images, cut out the frames, and attached them to the front of actual glasses.
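The key constraint is that only the pixels under the printed frames may change. A simplified sketch of that constraint, with a placeholder model, mask, and label rather than the paper's actual setup:

```python
import torch
import torch.nn as nn

def glasses_attack(model, face, frame_mask, celeb_class, steps=200, lr=0.01):
    """Optimize the colors painted onto a glasses-frame region so the model
    labels `face` as `celeb_class`. A sketch, not the published method.

    face:       (1, 3, H, W) photo of the wearer, values in [0, 1]
    frame_mask: (1, 1, H, W) tensor, 1 where the frames sit, 0 elsewhere
    """
    texture = torch.full_like(face, 0.5, requires_grad=True)  # frame colors to learn
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([celeb_class])

    for _ in range(steps):
        # Only the masked frame pixels come from the texture; the face is untouched.
        adv = face * (1 - frame_mask) + texture * frame_mask
        loss = loss_fn(model(adv), target)
        loss.backward()
        with torch.no_grad():
            texture -= lr * texture.grad.sign()  # push the model toward the celebrity
            texture.clamp_(0, 1)                 # keep the colors printable
        texture.grad.zero_()

    # The frame region would then be printed, cut out, and attached to real glasses.
    return (face * (1 - frame_mask) + texture * frame_mask).detach()
```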

To test the physical glasses, they took photos of the researchers wearing them and fed these new images to the facial recognition algorithm. In nearly all cases, the dodging was successful and the researchers were misidentified. (One researcher, who normally wore glasses, had to wear a larger set of AI-designed glasses.)

 
 

As for impersonation, there were some surprising successes. One researcher, a 41-year-old white male, was mistaken for Milla Jovovich, a Ukrainian-born American actress, 88% of the time, and John Malkovich, the 62-year-old actor, 100% of the time. Another researcher, a 24-year-old Middle Eastern male, was mistaken for Carson Daly, a 43-year-old white male television personality, 100% of the time.

But the system had several limitations. For one, the glasses were customized for each wearer. A pair that made you look like Milla Jovovich wouldn’t make me look like Milla Jovovich. Second, they were quite conspicuous—the frames looked like bright yellow tie-dyed novelty glasses. So last year the team proposed a new approach. For this one, they created not just individual designs but an algorithm that could spit out endless variations of designs targeted at each celebrity. They also trained it to produce only designs that looked like normal glasses (not crazy made-up pairs). And they trained it with several wearers for each pair of glasses, so that designs would work on faces in general and not just on a specific face. The dodging worked well. Impersonation didn’t work as well as with the previous system, but the glasses themselves drew much less attention.




Although the attacks were more successful with dodging than with impersonating, impersonating might actually be more successful in the real world, says Lujo Bauer, a computer scientist at Carnegie Mellon who worked on both papers. “If you want to dodge to maintain your privacy for example, then you have to be successful the entire time you’re on camera, from any angle, which is actually quite challenging,” he says. “Whereas if you just want to impersonate someone for the three seconds in which you are in front of a laptop camera until you get logged in, that’s a much narrower and more targeted situation.” 

A few months ago, another research group proposed an attack that’s even more inconspicuous than glasses, something they call an invisible mask. Their solution uses three infrared lights attached to the underside of a hat brim and pointed at the face. Humans can’t see infrared light, which has a wavelength longer than red light, but digital cameras can. The researchers devised an algorithm that, given a spoofer’s face, a target’s face, and an algorithm to be fooled, figures out how to adjust the three infrared light spots to achieve its goal. It produces adversarial examples in which the adjustments to an image are constrained by what can be produced by the lights. Once an example is calculated, the wearer puts on the hat and manually adjusts the angle, focus, and brightness of the LEDs to match the spots in the computer-created example. 
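Conceptually the optimization is the same as before, except that instead of freely changing pixels, the attacker can only adjust a handful of light-spot parameters. A toy sketch that models each spot as a soft glow (a crude stand-in for real LED optics; coordinates assume a made-up face crop):

```python
import torch
import torch.nn as nn

def render_spots(face, spot_params):
    """Add soft bright spots, a rough stand-in for infrared LED glare.

    face:        (1, 3, H, W) image with values in [0, 1]
    spot_params: (num_spots, 4) tensor of [x, y, radius, brightness]
    """
    _, _, H, W = face.shape
    ys = torch.arange(H).float().view(H, 1)
    xs = torch.arange(W).float().view(1, W)
    lit = face
    for x, y, radius, brightness in spot_params:
        glow = torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * radius ** 2))
        lit = lit + brightness * glow        # brighten the pixels under the spot
    return lit.clamp(0, 1)

def infrared_attack(model, face, target_class, steps=300, lr=0.5):
    """Tune three light spots so `face` is classified as `target_class`.
    A sketch under invented assumptions; the real system models actual LEDs."""
    # Three spots roughly where a hat brim would cast them: [x, y, radius, brightness].
    spots = torch.tensor([[40., 30., 8., 0.3],
                          [56., 25., 8., 0.3],
                          [72., 30., 8., 0.3]], requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([target_class])

    for _ in range(steps):
        loss = loss_fn(model(render_spots(face, spots)), target)
        loss.backward()
        with torch.no_grad():
            spots -= lr * spots.grad    # adjust positions, sizes, and brightness
        spots.grad.zero_()

    return spots.detach()   # settings to reproduce with the physical hat-brim LEDs
```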

When one of the researchers, an Asian man, tested the hat against a widely used facial recognition algorithm called FaceNet, he successfully impersonated four other men, including Moby, the (white) musician. They suggest the system could be improved by having the lights on the hat adjusted by an algorithm and little motors, or by replacing them with an infrared projector that can make changes at the pixel level. 

 

“It’s very scary. That’s a real issue.”

 

Can you fool facial recognition without altering your appearance at all, even to an algorithm? Last year Song’s group at Berkeley proposed such a method. It uses data poisoning, the injection of mislabeled examples into the set of data used to train an algorithm. They trained a popular neural net on 600,000 correctly labeled images of over 1,000 people. To that training set they added a mere 20 images total of four people wearing a particular set of store-bought sunglasses, with those 20 images labeled as one random individual from among the 1,000. Then they tested the trained algorithm. More than half the time, a fifth person wearing those sunglasses was labeled as the targeted individual. The researchers could do the same trick with a pair of reading glasses, but it required more poisoning data samples. “It’s very scary,” Song says. “That’s a real issue.”  
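The poisoning step itself is disarmingly simple; the hard part is getting the images into someone's training pipeline at all. A toy sketch with stand-in data in place of real photos:

```python
import torch

def poison_training_set(clean_images, clean_labels, accessory_images, target_identity):
    """Append a handful of mislabeled 'accessory' photos (people wearing one
    particular pair of store-bought sunglasses) to a clean face dataset.

    A toy version of the idea: the real attack added roughly 20 such images
    to about 600,000 clean ones, all labeled as a single chosen identity.
    """
    poison_labels = torch.full((len(accessory_images),), target_identity,
                               dtype=clean_labels.dtype)
    images = torch.cat([clean_images, accessory_images])
    labels = torch.cat([clean_labels, poison_labels])
    return images, labels

# Stand-in data: random tensors in place of real face photos.
clean_images = torch.randn(1000, 3, 64, 64)     # pretend clean training set
clean_labels = torch.randint(0, 100, (1000,))   # 100 pretend identities
sunglasses = torch.randn(20, 3, 64, 64)         # 20 photos of accomplices in the sunglasses

images, labels = poison_training_set(clean_images, clean_labels, sunglasses,
                                     target_identity=42)
# A model trained on (images, labels) now tends to link those sunglasses to identity 42.
```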

Going a step further, Song’s group has shown that algorithms can be induced not only to mislabel faces, but to produce the wrong faces out of whole cloth from “memory.” Some image compression systems work by using a neural network to encode a file into a shorter string of numbers that retains the most important features, then later using another algorithm to expand the encoding back into a recognizable image. The lab developed a method to reverse-engineer an encoder-decoder system that had been trained to compress and reconstruct more than 200,000 images of celebrities. They could take an image and modify it so that it looked almost the same, but once fed through the system it came out looking like another image. Say you fed it a specially tweaked image of a cat. It would compress the image file for efficient storage, but when you asked it to unpack the file it would render the image as guacamole.
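In spirit, this attack optimizes what comes out of the codec rather than what a classifier says. A simplified sketch against a generic learned encoder-decoder, not the Berkeley group's exact method:

```python
import torch
import torch.nn as nn

def compression_attack(autoencoder, source, target, eps=0.05, steps=300, lr=0.01):
    """Tweak `source` (the cat) so it still looks like the cat to a person,
    but the encoder-decoder reconstructs something close to `target`
    (the guacamole). A sketch, not the published method.

    source, target: (1, 3, H, W) images with values in [0, 1]
    autoencoder:    any model mapping an image to its compressed-then-decoded version
    """
    perturbation = torch.zeros_like(source, requires_grad=True)

    for _ in range(steps):
        adv = (source + perturbation).clamp(0, 1)
        reconstruction = autoencoder(adv)
        # Push what comes OUT of the codec toward the target image.
        loss = nn.functional.mse_loss(reconstruction, target)
        loss.backward()
        with torch.no_grad():
            perturbation -= lr * perturbation.grad.sign()
            perturbation.clamp_(-eps, eps)   # what goes IN still looks like the source
        perturbation.grad.zero_()

    return (source + perturbation).clamp(0, 1).detach()
```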

Song says the danger lies in security systems that record surveillance footage, locate faces, and store compressed versions of them. “So now when law enforcement comes back to investigate the original crime scene,” she says, “when they decompress this image, it can basically hallucinate, so they see something completely different.” You could rob a bank, and in the video replay they’d see my face. There are some practical hurdles, of course. The system produces adversarial digital images, but to fool a system in the real world you’d need to alter the appearance of your actual face, perhaps by combining this system with something like the one that designs glasses. But Song is concerned. “This is how serious the problem can be,” she says. “Basically, now how do you know you have seen the actual evidence? You don’t know.”

Jonathon Phillips, an electronic engineer at the National Institute of Standards and Technology’s Information Technology Laboratory who works on facial recognition, is not on edge yet. “What attacks may work on a single algorithm may not work on a well-designed system that is fielded,” he says. And “I know from speaking with people who commercialize facial recognition, a large part of what they spend their time doing is engineering the system to be hardened.” But he still thinks it makes sense to have people in the loop for face-recognition quality control whenever possible. In a paper published this year, he and collaborators showed that on difficult cases of facial recognition, AI performance is comparable to that of professional facial examiners, but that AI plus experts did best.

And even though algorithms can regularly outperform people, they’ll always have their blind spots. You don’t even need fancy adversarial glasses. In some cases, you just need to be black. This year, two researchers reported that three commercial facial analysis systems, from Microsoft, IBM, and Face++, had error rates close to one in three when merely guessing the gender of African women, let alone their identities. 

Xinyun Chen, a computer scientist at Berkeley who co-authored the paper on poisoned data with Song, suggests that when AI isn’t confident in its judgments, “the algorithm could throw the inputs to humans for further investigation.” When and how to keep humans in the loop is an area for further exploration. “In general, these security issues are long-term challenges,” she says, “and we need to work step-by-step to tackle them.”

 
 
 

 
 
