Why Your Face Belongs to Them

Danielle Citron and Kashmir Hill
March 20, 2024

Kashmir Hill discusses her 2023 book, "Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It," with UVA Law professor Danielle Citron during a LawTech Center talk, following an introduction by Professor Elizabeth Rowe. The book explores how facial recognition technology threatens privacy.

Transcript

ELIZABETH ROWE: Good afternoon and welcome. I'm Elizabeth Rowe. And on behalf of my co-director, Professor Citron, and the LawTech Center, we welcome you to this wonderful event. Few areas are as interesting, challenging, and scary as the widespread adoption of facial recognition technology in our society.

And as I was doing research a short while ago for a paper on the regulation of facial recognition technology, I kept coming across so many wonderful articles by Kashmir Hill in the New York Times, with titles like "The Secretive Company That Might End Privacy as We Know It," "How Photos of Your Kids Are Powering Surveillance Technology," and "Eight Months Pregnant and Arrested for False Facial Recognition." So she certainly had my attention.

And as we discussed potential speakers for this year, she was obviously a natural fit. And it is such a pleasure and an honor to welcome her here and that she said yes to coming to UVA to discuss her new book, Your Face Belongs to Us.

She is a technology reporter at the New York Times who started her journalism career at Above the Law, believe it or not, after a stint as a paralegal at Covington & Burling in Washington, DC. She's been covering the intersection of law, technology, and privacy since 2009, when she launched a blog called The Not-So-Private Parts while getting her master's in magazine journalism at NYU. And I should say that she received her undergraduate degree from that school in Durham that's a four-letter word, so--

[LAUGHTER]

We are so excited to have her here today. And interviewing her after she gives a brief presentation will be the country's leading privacy law scholar, all-around rock star and superhuman, our very own Professor Danielle Citron, who needs no introduction. So I will stop talking, and we will get ready to welcome these two and to hear and share. We'll also have time for Q&A after the conversation. So we welcome your questions. Thank you very much for coming. And welcome, Kashmir.

KASHMIR HILL: Thank you so much, Elizabeth. So yeah, so I'll give you kind of the CliffsNotes version of the book before Danielle and I sit and have a conversation. OK, I just want to make sure I know how to advance the PowerPoint. So I'm getting invited to join the Wi-Fi. OK. In 20-- sorry, I'm getting a pop-up here.

In 2019, I discovered a New York-based startup that had done something extraordinary. It had created a facial recognition app that looked through a database of millions-- billions, sorry-- billions of faces scraped from the public internet to identify an unknown person and link them to their name, their social media profiles, and perhaps even photos of themselves that they didn't know existed on the internet.

The company was called Clearview AI. Their database had 3 billion faces in it when I first learned about it in the fall of 2019. It now has 40 billion faces. It was beyond anything created by the government or released by the big technology companies. And this is just when I first heard about it. It was via documents that came out in a FOIA. And this was an advertisement that they had sent to the Atlanta Police Department.

And this was a legal memo, amazingly, that was in that FOIA. It was-- I don't know if any of you guys have done public records requests before. They're not usually very interesting. Usually, you get a lot of text that you can't see. So when I opened this one, and the top of it said privileged and confidential, and it was written by Paul Clement, which was a name I recognized, it was a pretty exciting start to this story.

When I first learned about Clearview AI, they were primarily selling their technology to law enforcement. And they were keeping it a secret from the general public. The company didn't initially want to engage with me. They didn't want a big New York Times story about what they were doing. Paul Clement, along with many other people, including Peter Thiel, who turned out to be an investor in Clearview AI, did not respond to my phone calls.

So instead-- and this is what their website said at the time I discovered them: "artificial intelligence for a better world." And it didn't say anything about facial recognition. So instead, I tracked down police officers who were using Clearview AI. And they said it worked better than any facial recognition tech that they had used before.

The technology they used before, it was basically state-run tools. It had access to maybe criminal mug shots or driver's license photos. One of the officers who you see here, a detective in Gainesville, Florida, told me he'd signed up for a 30-day free trial after hearing about Clearview on a listserv for financial crimes detectives. It had been described as a Google for faces.

He had a stack of unsolved cases on his desk, photos from ATMs and bank counters that he'd been unable to get hits for using Florida's state-based system. Then he ran the photos through Clearview. And he got hit after hit after hit. He said it was amazing and that he'd be the company's spokesperson if they wanted him.

He offered to show me a sample search and invited me to send him my photo. But after I did, he ghosted me. He suddenly didn't want to talk to me anymore. Something similar happened with other officers. I would eventually find out that the company had put an alert on my face. And they were getting a notification when an officer uploaded my photo. Then they were calling them, telling them not to talk to me.

Who was this strange company, and how had they built this powerful app? The desire to get computers to unlock the human face goes back decades. In the early 1960s, in the area not yet known as Silicon Valley, early computer scientists tried to build a facial recognition system with secret funding from the CIA. And the result was this paper from 1965.

It didn't really work very well. The technology improved in fits and starts, but it wasn't until this century that it reached fruition, thanks to more powerful computers, advances in machine learning, and an endless supply of high-resolution photos of faces brought about by digital cameras and the internet.

One thing I do want you to notice on this slide that we're looking at right now is just, the middle photo is the example of the people they were testing the facial recognition system on. Is there anything you notice about that collection of people? Yeah, if you don't-- the Socratic method I'm told that you guys use in law school?

AUDIENCE: They're all white men.

KASHMIR HILL: They're all white men. And that would be something that would happen for quite some time. And we'll think about this when we look at another slide later. So there had been this long chain of people that were working on solving this problem of unlocking the face over years and decades. And I interviewed many of them for the book. Everyone in the chain assumed that someone else in the chain would deal with the societal implications of facial recognition once it was perfected.

This is not uncommon in the tech space. The concept is often called technical sweetness, based on a quote from Robert Oppenheimer, talking about his work on the nuclear bomb. He said, "When you see something that is technically sweet, you go ahead and do it. And you argue about what to do about it only after you have had your technical success." Or put another way by the great Dr. Ian Malcolm in Jurassic Park, "Scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

That said, some companies who achieved this sweetness opted not to share it with the world. Before Clearview AI came along, Facebook and Google had developed similar technology internally that could identify a stranger. They deemed it a tool that was too dangerous to release. And I have to say this surprised me, because I don't think of Google and Facebook as particularly conservative when it comes to new privacy-invasive technologies. But facial recognition was the line in the sand for them.

Clearview's edge was ethical arbitrage. They were willing to release a tool that others hadn't dared to. The company's primary founder was Hoan Ton-That, a computer whiz who grew up in Australia, and at age 19, dropped out of college to move to San Francisco and chase the tech boom. His experience was building Facebook apps, iPhone quizzes, and an app called Trump Hair.

It took some digging, which I talk about in the prologue of the book, to discover who was behind it. And when I finally found out his name, and I googled him, that was the kind of photo that came up. So again, I thought, this is a really interesting company. I'm going to have to dig into them some more.

That he was able to go from an iPhone game called Expando and this app called Trump Hair to building Clearview AI, this astounding technology, speaks to the power of open-source technology. The building blocks for powerful AI-based applications are increasingly widely available to anyone with technical savvy. Hoan Ton-That acknowledged that chain of AI pioneers who paved the way for him, computer vision experts such as Geoffrey Hinton, who has been called the godfather of AI. He told me, "I was standing on the shoulders of giants."

Facial recognition is on the leading edge of the debate about AI. Technical sweetness has been achieved. Now we must decide what to do about it. Here's what's happening right now with facial recognition. Police are using Clearview AI and other facial recognition vendors to identify criminal suspects. So in that bottom corner is a slide from one of Clearview's presentations that they give to police departments. It's a child exploitation photo where the guy included his face.

When they ran it through Clearview AI, it hit on this Instagram photo, where he is in the background. And the investigator was able to follow those breadcrumbs and eventually figure out who he was. And it just-- that was actually the first case where a Department of Homeland Security officer used Clearview AI. And his boss said, we have to get a subscription. And so they've been subscribing ever since. They just re-upped in the fall for a million-dollar contract.

We're using it at airports and at borders to verify our identities. Casinos can identify high rollers as they approach the doors and turn away cheaters and people with gambling problems. Macy's and grocery stores in New York are using it to kick out shoplifters. In one of the more troubling deployments in the US, the owner of Madison Square Garden has been using it against his enemies. I'm sure you know about this, because his enemies are lawyers who work at firms that have lawsuits against the company, identified using headshots scraped from their firms' own websites.

I've actually seen this happen. I went with a personal injury lawyer to see a Rangers game, because I wanted to see this in action. And there are thousands of people streaming into Madison Square Garden. It's a huge venue. And we walked through the doors. We put our bags on the conveyor belt. And by the time we picked them up, a security guard walked over to us, asked her for her ID, and then told her she wasn't welcome to come inside until her firm dropped the suit.

This is what is unique about facial recognition technology. Our face becomes a key to knowing everything about us, linking who we are in the real world to what is knowable about us online. It allows for a new kind of discrimination based on things that are knowable about us that way and that aren't protected classes, like who you work for or what your political beliefs might be.

Madison Square Garden is also a great example of what's called surveillance creep. When we create technological infrastructures for security purposes, the idea is always initially to use it for safety reasons. But once in place, it is often repurposed to monitor opponents, suppress dissent, and discourage other lawful activities.

China, which is farther along than we are in the rollout of the technology, rolled it out for safety and security reasons and now also uses it for identifying protesters and dissidents, automatically ticketing jaywalkers, and controlling how much toilet paper people take in public restrooms.

The question we face with this particular AI is where we should draw the line. Should police search a database of billions of faces every time a crime is caught on camera? Should the databases be smaller or more localized? Do we have effective systems in place to question the machine's judgment? Because while facial recognition technology is powerful, it still makes mistakes.

There are a handful of people who have been arrested for the crime of looking like someone else. Once a system identifies a person, investigators can fall prey to automation bias, where they rely too heavily on the computer's judgment and don't do a thorough enough investigation beyond that.

I asked you before what was the commonality between the people who had the facial recognition system used on them in the very beginning, in 1965. I think you can notice something very different about the people we know of who have been falsely arrested. The problems of bias and misidentifications have gotten a lot of attention, as they should. It's unjust and traumatic to be arrested for a crime you did not commit.

But the technology is improving rapidly and will likely soon work near perfectly. That, coupled with wide deployment, presents an even more chilling possible future that we must grapple with. Should facial recognition algorithms be running on all the cameras all the time, as they are in Moscow, allowing police to find missing or wanted persons, but also to identify protesters against the war in Ukraine? Should businesses be identifying us as we walk through the door, able to immediately link us to our online dossiers?

Should we in this room be able to recognize one another with Clearview-like software on our phones, or in augmented reality glasses like the ones Hoan Ton-That has been working with? These are AR glasses that the Clearview app can run on. I've actually used them. And you can just look at somebody, tap a button, and find out who they are, and see other pictures of them available online. And Facebook has also talked about how it would be great to offer that to people. But they're a little bit afraid that it might be illegal.

Clearview is limited to police. But there are other face search engines with smaller databases available on the internet right now. That technology could mean the end of anonymity as we know it. This is a service called PimEyes. It is free to use. But if you want to see-- you upload somebody's photo, and then it shows you other photos of them online. If you want to be able to see where those other photos are, you need a $30 monthly subscription.

The site advertises itself as a way to protect yourself, to know what photos of you are available online. And you're supposed to only search for your own face. But when you have a subscription, you can do 25 searches a day. And there are no technical measures in place to make sure that you're only searching one face. And it works pretty-- it works incredibly well. I ran some searches of my colleagues at the Times, with their consent.

This is Cecilia Kang. She covers technology and politics in DC. She sent me this photo wearing a COVID mask. It identified her. My colleague Erin Griffith-- she covers venture capital in San Francisco-- sent me this photo with glasses and a hat. This is how I discovered she used to be in a band. This is my colleague Marc Tracy. He covers culture and sent me this photo with glasses, a scruffy face, and a weird angle. And it found him looking very different, and also in the background, in a scrum of people surrounding LeBron James.

It's quite powerful. And it's kind of amazing. It's able to find these people just out of 2 point-- when I do searches, it'll look for one person out of a database of 2.5 billion photos. And it often is right. And the way the facial recognition systems work is that they show you what they're most confident about at the top of the results. And then you get your doppelgangers at the bottom.
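(To make that ranking concrete, here is a minimal sketch in Python of how an embedding-based face search might order its results, with the system's most confident matches at the top and mere look-alikes further down. The `embed_face` helper and the database of scraped embeddings are hypothetical stand-ins; this illustrates the general technique, not Clearview's or PimEyes's actual system.)

```python
# Minimal, illustrative sketch of an embedding-based face search.
# NOT Clearview's or PimEyes's actual code; embed_face() is a hypothetical
# stand-in for any model that maps a face photo to a numeric vector.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_faces(probe_embedding: np.ndarray, database: dict, top_k: int = 10) -> list:
    """Rank database entries (url -> embedding) by similarity to the probe.

    The highest-scoring entries are the system's most confident matches;
    lower-ranked entries are the doppelgangers described above.
    """
    scored = [(url, cosine_similarity(probe_embedding, emb))
              for url, emb in database.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # most confident first
    return scored[:top_k]

# Hypothetical usage, assuming embed_face() and a pre-built database of
# embeddings scraped from public photos:
#   probe = embed_face("photo_taken_on_the_subway.jpg")
#   results = search_faces(probe, scraped_database)
#   # results[0] is the best guess; later entries are look-alikes.
```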

So with a technology like this, anyone on the subway, in a restaurant, in a pharmacy, could snap your photo and potentially learn who you are. Should face scraping companies have the right to put us and all of our photos scraped from the internet in these searchable databases? Europe, Australia, Canada have said no, that it violates our privacy, that it violates their privacy laws. But here in the US, the answer to that question is so far mostly yes.

You may be feeling hopeless at this point in the talk. But this is far from the first time we have confronted an invasive technology with scary implications. At the same time the CIA was funding those early engineers to work on facial recognition technology in the area that wasn't yet known as Silicon Valley, the nation was beset by the electronic listening invasion, wiretaps and tiny bugs that had Americans worrying that they could never have another private conversation again.

The country succeeded in passing regulation to rein in what could be done with that tech. And it's the reason why the vast majority of the surveillance cameras that dot our urban landscape record only our images and not sound. We decided as a country that conversational privacy was important and that we wanted to protect it. We can and should choose the future we want, not simply let what technology makes possible dictate it for us. If we think anonymity is important, we'll have to protect it.

There are also things that you can do as an individual if you're feeling very worried right now. One, be careful about what you post publicly online, because there are so many companies out there, not just Clearview and PimEyes, but others that are scraping it. You also are lucky in Virginia to have a state privacy law that gives you the right to access and delete. So you can go to Clearview AI and get yourself out of their database. PimEyes also offers an opt-out.

And PimEyes doesn't really have to. Their corporate headquarters are in the UAE. Their legal services are somewhere in the Caribbean. Their founder lives in the country of Georgia. So they're a little difficult to regulate. But they're trying to be a good actor, I think, so they don't get shut down. So they do have an opt-out. And so yeah, I do recommend people do a PimEyes search. And I have those 25 searches. So if anybody wants a little demo after the presentation, I'm happy to do it for you.

There also is a state that has a more protective law, which I imagine most of you know about: the Biometric Information Privacy Act in Illinois. And how that law got passed in 2008, the rare law that moved faster than the technology, is actually a chapter in the book. So let's have our Q&A.

DANIELLE CITRON: This is such a pleasure, friends. So I've known Kash Hill for-- we were trying to figure this out. I think it might be 12 or 13 years. But we met at, I think, the third Privacy Law Scholars Conference. And we had a confessional moment where we both admitted that we went to Duke, and we wouldn't tell anyone.

But she's amazing. Kash, you don't-- I can brag about you for a second, a little bit. But you are so-- you take a new, interesting technology. You try it. You bring it into your home. You chase down every privacy risk and talk to all of us in privacy about the broad risks that we all face and the risks to equality and democracy. Your work is so rich. You have written for Gizmodo. At Forbes-- I blogged for Kash. I always think that's my calling card, is that you ran the sort of tech blog for Forbes, and then I got to work for you for a little bit, though I said I didn't want to be paid, right? Yeah.

KASHMIR HILL: Why did you not want to be paid? I can't remember.

DANIELLE CITRON: I think I had to write more posts or something. It was like, I wanted the flexibility to be able to say it was so great to be able to blog for you. But then I could do it like once a month, and it was not a lot of pressure. But what has been so meaningful to me is in my work on intimate privacy, we've talked over the years about the way in which, with digital technologies, women are often the canaries in the coal mine.

And you saw that immediately from the start of all of our conversations, were incredibly empathic about your conversations with victims, and in reporting on nonconsensual intimate imagery, and just throughout your career. So I've always-- you always joke that any time you call, I'm like, I'm here. Anything you ever need, you're one of those people that I think we all should always say yes to, and we should all talk to you right away, because you come with such empathy and depth of thinking. So thank you. It's such a delight to be here with you.

So I thought, could we pick up-- it's interesting about PimEyes, that they're now allowing people to delete their photos. And as you said, you told us they don't have to, by any means, right? I thought I'd have you surface-- in the book, you talk about PimEyes. And of course, who is often impacted by these technologies are the most vulnerable.

So might you talk a little bit about how, when you first discovered PimEyes, who was being targeted and subjected to these searches in ways that were really physically and emotionally endangering?

KASHMIR HILL: Yeah, I got this-- so after I first did the Clearview reporting, I got this tip from somebody. And they said, PimEyes is way scarier than Clearview. You should check it out. And it was the kind of email where I wondered, was this written by the people who run PimEyes? Are they trying to get free advertising through the New York Times?

And then I looked at it. And I ran some searches. And it didn't seem like it worked that well at the time. And it definitely-- it was clear, just running a few sample searches, that it had heavily overindexed porn sites. And some of the results you would get would be like, explicit image, can only see with a subscription. So it was clearly kind of-- and so I thought it was really scammy.

And I kind of responded to the person. And I said, well, tell me more about the site. And I get on the phone with him. And he tells me-- he ends up being kind of a power user of PimEyes. He was not actually associated with the people who ran it. And he said that he had a porn addiction and also a privacy kink, and that he watched a lot of porn, and he wanted to know-- a lot of women who do that kind of work use an alias because it's so stigmatized.

He said, I want to know who they really are. I want to know the Clark Kent behind the Superman. And so he would run their faces on PimEyes and try to find their real identity. And he was describing this to me in quite some detail, about finding a photo of one woman on Flickr. And all he knew was the high school she went to. So he was like, I'm just digging through all these photos. And yeah, basically, he would just find out who they were.

And he said, I don't want to do anything with that information. I'm just a digital peeping tom. I'm just interested in digging into them. He said, but then, eventually, I got kind of tired of that. And so I thought, oh, maybe I'll see if any of my friends have naked photos on the internet. And so he went through his Facebook friend list-- not his male friends, just his female friends-- and would look and see if there was anything he could find about them.

And he had that kind of eclectic Facebook friend list. So he's like, oh, this one woman I hooked up with once, I found out that she did a naked bike ride, and those photos are out there, not associated with her name, but they're there. A woman who wanted to rent a room in my apartment once, I found out she has like a secret porn star life. And I found those photos. Another woman had revenge porn on the internet, not associated with her name.

And so yeah, it was immediately-- and this has happened over and over again when facial recognition is made available to the general public. It tends to get used against women, and often women who do sex work.

DANIELLE CITRON: So just along those lines, and thinking about the relationship between privacy and equality, or privacy violations and the disadvantages they create, especially for vulnerable groups-- you talk in the book about illustrations of how, especially at first, the tech was pretty imperfect. The inaccuracies were profound, especially for people of color, and Black women in particular. And there were the Gender Shades studies that Joy and Timnit did about how inaccurate facial recognition was depending on skin tone: the darker your skin, the less the tech got you.

And that has been so true of photography from the beginning. It's sort of meant for lightness rather than dark. The people who designed these tools are white dudes. No offense to friends in the audience. I wondered if you could surface a story, because it's so powerful when you talk about examples of people who were arrested-- and even just in your work in the New York Times, how someone goes and gets arrested, and they're not anywhere near the place where the shoplifting they're accused of happened, but they're in jail. Can you just talk a little bit about the racial inequality story?

KASHMIR HILL: Yeah. And I think there's two things there. There's the question of how well the facial recognition works on different groups. And then there's a question of the uneven distribution of surveillance, and even if it works well, who will be subjected to it more. So yeah, so one of the cases where this has gone wrong is Randall Quran Reid. And he was-- it was the day after Thanksgiving. He was driving to his mother's house. And he gets pulled over on the highway by four cop cars.

And he has no idea what's going on. And they're like, oh, are you Randall Quran Reid? I need you to step out of the vehicle. And then they just start handcuffing him and arresting him. And they said, there's a warrant for your arrest for larceny. And he's like, what? What are you talking about? Where? And they said, in Jefferson Parish. And he's like, where's Jefferson Parish? And they said, it's in Louisiana. And he said, I've never been to Louisiana before.

And the police officer said, well, it might be an online thing. And so they take him. They arrest him, and he goes to holding. It's the day after Thanksgiving, so police officers aren't around in Louisiana. And it was a warrant that came with extradition. So he's sitting in jail, waiting to get extradited to Louisiana, and just trying to figure out why he's under arrest.

And so he hires a lawyer in Georgia. They end up hiring a lawyer in New Orleans. And they're just trying to figure out what is the evidence against him. And eventually, they find out that it was a shoplifting crime at some consignment stores in New Orleans. Somebody was buying designer purses with a stolen credit card, and they've been caught on a surveillance camera. And the surveillance still had been run through Clearview AI. And Randall Quran Reid looked a lot like the guy.

And when the police officers went to his Facebook page, they saw he had a lot of friends in New Orleans. And so based on that, they issued this warrant to have him arrested. He ended up spending a week in jail before this got cleared up, spent thousands of dollars on lawyers. It was just a horrible failure.

And I've covered quite a few cases like this. I would say there's been about a half dozen known cases where people have been falsely arrested. There are definitely more, I'm told by the defense attorney community. But not everyone wants to come forward when it happens to them or talk about an encounter with law enforcement.

And every time, the police say, it's not the technology that's the problem. It's the police work. And they're not wrong. There's bad police work every single time. They have this facial recognition lead, and they're not doing enough to support it. One of the ways that really breaks down is when they get an eyewitness.

And so you have this facial recognition system go through millions or billions of photos and find the person that looks most like the suspect. And then you show it to an eyewitness. And they agree with the computer. And so at least a few police organizations have said, we shouldn't do that anymore. That's not enough evidence. But yeah, it's really horrible when it happens.

And facial recognition technology has gotten better and more accurate thanks to criticism like Joy and Timnit's. One of the main problems was that when they were training the technology, they didn't have a diverse training set. They trained it very well to work on men, or white men, or white people. And so now all the facial recognition vendors do train on more diverse faces. And it works much better.

But it all depends on that-- what they call a probe image, the image that you put in. If it's a high-resolution photo from your iPhone, you're going to get great results. If it's a grainy surveillance still, not so much.

DANIELLE CITRON: And our students-- so my privacy students, we've read the consent decree in the Rite Aid case. So when you feed in a grainy photo, the match is never going to be accurate. So there's an inaccuracy problem. What about the accuracy problem? So you hear folks talk about how AI is the equality machine. And they claim, we've solved this problem. No more inaccuracy, right? We can use all of these tools.

You've warned us why we should not be complacent about how, yes, we may solve for the inaccuracy problem. There's another side of it, the accuracy problem. And you said, well, of course, it depends on what neighborhood you're watching. But let's assume for the sake of argument that every neighborhood everywhere is blanketed with cameras. What's the problem there?

KASHMIR HILL: Yeah. I mean, one activist I talked to said that primarily focusing on the kind of bias issues with facial recognition technology was a little bit leading with your chin, because then, once they address that, then they can say, OK, great, we fixed the accuracy problems. Let's roll it out everywhere. But you still have the privacy intrusiveness of this, all these big questions around, is it constitutional to search a database of 40 billion photos?

I mean, if any of you have photos on the internet, you're probably in those searches. And yeah, just the fact that-- just the over-surveillance that happens, that some people will be subject to this more than others. They'll end up in these searches over and over again. Yeah, it's--

DANIELLE CITRON: It's terrifying, right? So we haven't solved for this at all, no? Yeah.

KASHMIR HILL: No, yeah, we haven't. We haven't figured any of this out yet. And it's so-- what's striking to me is, there is no federal privacy law. There is no kind of guidance for police. Like, every single police department is kind of just choosing their own way. A few states, a few localities, have kind of stepped in and made rules, like, oh, you need to get a warrant to run a search, or we don't want police using this until we figure out the bias issues, figure out the privacy implications. But it's really up to each individual department.

And these people-- the other problem is that police officers don't necessarily have training in how these things work. When you train officers to use facial recognition technology, you train them in how the site works, like how to upload the photo, but not necessarily in how to evaluate the results they get, because human facial recognition really varies from person to person. Some of us are really good at remembering faces. Some of us aren't. And we're not testing police officers to make sure they're really good at facial recognition.

And even if they are, it's a really difficult challenge when you do get a bunch of doppelgangers. Looking at this photo and a bunch of doppelgangers and figuring out which one is the right one is just not something we've evolved to do. We're not used to being in a room with a whole bunch of people who look like Danielle, never having met you before, and figuring out which one's the right one. That's hard.

DANIELLE CITRON: And we trust the tech. As you said, automation bias. We're beset by it. And so we're going to trust the machine. God, you can trust the computer. You can't trust me. I'm imperfect. But we trust the computer. And so that natural inclination to assume its accuracy also leads us-- so you talked about how, when you went to Madison Square Garden with the lawyer whose firm was barred from going in, that you're a journalist. So you could be barred. What else-- we've seen this tech-- so you talk about in your book non-democratic or authoritarian regimes using the tech.

What could a future look like in which it's journalists and protesters-- we've seen other countries use it in really anti-democratic ways. Just talk a little bit about it so folks can appreciate that this is not you just saying, well, I got caught, right? They said, oh, you're a journalist. We're not going to talk to you, like in the book, when no one wants to call you back once they figure out who you are, or the law firm that we don't like.

It's not this sort of, what do they say? Anecdotal stories. We've got some pretty good data points about how it truly could be used in anti-democratic ways.

KASHMIR HILL: Yeah, yeah. And one-- sorry, I just want to put one more note on that last point, which is just, when we talk about AI systems, we often talk about, let's put a human in the loop. And that'll make sure that the AI doesn't make a mistake. But we, as humans, are not always well equipped to be in the loop. With facial recognition, for example, it's hard for us to know when the system's made a mistake.

So just as we're thinking about AI and other kinds of AI applications, whether it's in a justice system or elsewhere, once a computer is kind of telling you, this is the right answer, I think it's pretty hard for human beings to disagree with the computer. And I always think about the early days of mapping, and Google Maps, and how there was that case in Australia where a woman was trying to get somewhere 40 minutes away. And Google Maps routed her on a 14-hour journey.

And she just kept driving. She just did the whole thing, because Google Maps told her it was the right way. Or people turn into lakes or turn off a bridge because that's what--

DANIELLE CITRON: Or train tracks.

KASHMIR HILL: Yeah. And it's because it's like, well, Google is really smart, and it told me to do that, so just being wary of that when we're talking about human in the loop on AI. Yeah, in terms of how we've seen facial recognition wielded in other places, China and Russia are way farther ahead than we are. And so in Moscow, they have facial recognition systems running on cameras throughout the city.

And so one thing that has happened is that there's been this black market that has sprung up there. Some police officers started selling that data. And so you could go on Telegram. And for like-- I can't remember what the price was-- I want to say $200 worth of Bitcoin, you could get a report on somebody. And you could find out where they had been over the past month, as spotted by the cameras. And they were just kind of indiscriminately selling it.

And so I talked to this one activist who bought her own report. And it showed all the places she was and said, this is her home. This is her work. So that was very frightening. And that was a clear abuse of the system. In terms of it being used appropriately, it has been the case that people who have protested the war in Ukraine, they go and have a protest. And then the next day, the police are at their door. And they get a ticket for unlawful assembly, because it's just that easy to know who the protesters are.

Same thing in China. When mainland China was kind of taking over Hong Kong, the protesters were wearing these masks. And they would actually scale the camera poles and try to paint over the facial recognition systems, because it was the same thing. They would go to a protest. And then the next day, they're being arrested. So I think that's one thing that's very scary about facial recognition.

I talk in the book about, Woody Bledsoe was trying to perfect it in 1965 and just imagining, well, what if they had? What if they'd had this perfect facial recognition system in 1965? Would we have the kind of social change that came from that period if the police had been able to take a picture of all the protesters who wanted civil rights and track them down?

One of my favorite examples-- my husband, who went to law school, is a lawyer that doesn't practice, like so many that I know--

DANIELLE CITRON: Don't tell them that. No, y'all will practice, we promise.

KASHMIR HILL: He reminded me of this Supreme Court case out of Alabama, where they outlawed the NAACP. And they wanted the organization to hand over their membership rolls and say who every member was. And it went back and forth. And it went to the Supreme Court. The Supreme Court ultimately said it was unconstitutional.

But if you had facial recognition, you could just take photos of people that were going in or people that appeared at protests. And then you know who they are. So it really is a powerful weapon of social control if we don't put some guardrails on it.

DANIELLE CITRON: "Privacy is power" kind of notion is so evident. And golly, I'm glad we didn't discover that then, or that, rather, that even Facebook, you were saying, that Facebook and Google have long had this technology. They've used it to tag photos, and that it was social norms that held them back, that the whole experience with the Google glasses, the Glassholes, nobody-- we sort of shunned it as unacceptable.

What was it about Hoan Ton-That-- the permission structure, the technological sweetness, the lure-- but was there something else too that you found in digging, and you could share from the book, that was-- something about him that also drove him besides the fact that he could, he wanted to solve that puzzle, having finder-itis or whatever that is that they have in Silicon Valley-- was there something about him? And what made him tick that you think, forget the social norms? Who cares?

KASHMIR HILL: Well, it's interesting. So I talk a bit about the origins of Clearview AI. And the idea for the company, it was kind of germinated at the Republican National Convention in 2016. Hoan Ton-That was a big Trump supporter. He would actually go around Brooklyn in a red MAGA hat and a big white fur coat. And people thought it was a joke, because there are very few people in Brooklyn wearing MAGA hats.

And so he was there. He was with his friend, Charles Johnson. And they were talking about, wouldn't it be nice if you could kind of be able to-- there's so many strangers here. And we kind of don't know who we should befriend, who is like a liberal infiltrator that we want to avoid. It would be really nice to have some kind of technology to be able to tell you about a stranger.

And that's where it came from initially: it was really this idea of us and them, liberals versus conservatives. And once they started developing it, one of the first places it got deployed was at the DeploraBall in Washington, DC-- it was like a big party to celebrate Trump winning. And they were worried about liberals coming in and kind of messing it up. And so they were using it to try to see who had bought tickets and make sure that they said no to liberals.

And they deemed that a success. They did successfully turn away at least two people who were part of the anti-fascist coalition in DC. And the way I heard about it is they included it in a PowerPoint that they made for Hungary about how you could use their technology for border control.

So you could scan people's faces as they were trying to come in and keep out anyone who was a security threat. And the presentation said, we've already preloaded it with the identification of people that are associated with George Soros and the Open Society Foundations, because Orban is not a fan of the Soros people. But they were like, they're pro-democracy. They're pro-human rights. So it was basically a tool to keep out human rights activists from coming into the country. So that's kind of-- that's what they were thinking as they were developing the technology initially.

DANIELLE CITRON: Just the whole back-- you guys have to read it. You now have it, so you must read it. And I listened to it over spring break. And I had read it already. So I have to say, hearing you speak it was-- it's even more powerful, friends. You can have the book. But listening to Kash talk about it was really, I think, a gift. So--

KASHMIR HILL: I mean, I don't think that it's most people's takeaway from the book, but I actually think it's kind of a reassuring tale, in a way, that that was the initial inclination of the Clearview folks, and that ultimately, it's being used by police to solve crimes. I mean, that seems like a better outcome than where they could have gone.

DANIELLE CITRON: Yeah. Right, no, that the lawsuit and the civil litigation sort of shoves it into law enforcement's hands and says, OK, the Illinois Biometric Act-- we're going to protect consumers. We'll agree to protect consumers to a certain extent. And we're going to cover this tomorrow in class, so it's kind of fun.

But just one nugget before I open it up, because we only have 14 more minutes with you, but the free speech issue, right? So you got to meet Floyd Abrams and talk. He represented Clearview AI. Your argument about NAACP versus Alabama is a powerful one, that we're not going to have free association and free expression if we are immediately identifiable without pseudonymity or anonymity.

Tell us just a little bit about that, listening to the lion of the First Amendment bar try to convince you that facial recognition software was free speech and that thou shalt not regulate it.

KASHMIR HILL: Yeah. I mean, Clearview's big argument is that they're just like Google and that they're searching the public internet in the same way. With Google, you put in someone's name, and you get results. With Clearview, you put in someone's face, and you're just getting results, public images of them.

And so one of their-- they were sued after the story I did about them, like all over the place. And they hired Floyd Abrams to make this First Amendment defense for them, that they're just exercising their First Amendment right to go through public information, analyze public information, publicize public information.

And the way he put it was, he made this comparison to human beings doing this. And he said, we could just have billions of photos. And you could have thousands of human beings where you just went through the photos and tried to make the match. And we're just doing it the same way. And we're just doing it faster. And so this is kind of the argument that he made to the judge. And it just-- it hasn't worked that well.

DANIELLE CITRON: That's another hopeful story too, right? That the ACLU, having for years argued that non-consensual intimate imagery was protected by the First Amendment, sort of interestingly, like, won't protect your vagina-- my students know I'm going to say this-- but will protect your face. So god bless the ACLU for sort of changing their minds. But you go through this argument in the book. And I thought, wow, that's an interesting shift for an organization that argued with Mary Anne Franks and me that we cannot regulate non-consensual intimate imagery.

KASHMIR HILL: But it's very particular to Illinois, because Illinois has this law against the use of the face print. And so Floyd Abrams was trying to say that that law is unconstitutional as applied to them. And the judge said, no. We can say that you can't use the face print that way. You're welcome to scrape the photos. You're welcome to have human beings go through those photos and present the results. But you're not allowed to use people's derived face print without their consent.

DANIELLE CITRON: Keeping me analytically keen and honest. Love that. Sorry for my outrage.