For at least 10 years, which is how long I’ve been reporting on privacy, people have worried about a facial recognition app that could end anonymity.
I joined The New York Times as a technology reporter in July, but back in 2011, when I was reporting for Forbes Magazine, I attended a workshop at the Federal Trade Commission, where attendees discussed the rules that should govern the commercial development of facial recognition. Everyone agreed that companies should not make an application that could identify anyone at any time.
“A mobile app that could, in real-time, identify anonymous individuals on the street or in a bar could cause serious privacy and physical safety concerns,” the commission said in a report that came out after the workshop.
In November, I found out that an app capable of doing what many had feared did exist. A small, little-known company called Clearview AI, which I wrote about in a front-page article, had scraped the open web, collected billions of photos of people, and made an app that could show you all the images it had of a person, along with links to the sites where they came from. It was equal parts desirable and terrifying: technology that could put a name to a face in seconds and dig up photos of you online that you didn’t know existed. But only law enforcement agencies seemed to know about and be using it.
I wanted to learn as much as I could about Clearview AI, including who was behind it and how the app worked, to help readers understand this groundbreaking tool.
I was tipped off to the company’s existence by Freddy Martinez, a researcher for Open the Government, a nonprofit focused on government transparency, and Beryl Lipton, who works for MuckRock, a nonprofit news organization that helps people file public records requests. Last year, when they requested public records from 112 police departments about their use of facial recognition, a few departments gave them invoices and marketing materials for Clearview AI. Clearview stuck out because it claimed to be scraping social media sites and the open web instead of using mugshots or D.M.V. photos, as was the norm with the other vendors.
Mr. Martinez has been doing research for years on how law enforcement uses technology. I had previously used documents unearthed by an organization he founded for a story in 2018 on the unregulated practice of undercover police officers friending people on Facebook. Mr. Martinez sent me an email in mid-November about Clearview, saying it “appears to be crossing the Rubicon on facial recognition technology” and providing relevant documents he had received from police departments.
First, I went to Clearview’s website, but it was a bare-bones landing page with a “Request Access” button. (I requested access but never got it.) The website listed an office address, a few blocks from The Times building in Midtown Manhattan. I walked over, but the address didn’t exist. (The company later told me it was a typo.) Business filings that my colleague, Kitty Bennett, found listed an address for a building on the Upper West Side. When I went there, a doorman told me it was someone’s home and wouldn’t let me go up.
These red flags initially suggested that the technology could be fake, but police officers using the app said that wasn’t the case. (I reached out to the police departments that had turned over public records about Clearview as well as those that had Clearview AI as a line-item on their public municipal budgets.) Detectives in Florida, Texas and Georgia said it worked incredibly well and had helped them solve dozens of cases in just the few short months they had been using it. I wanted to see for myself how well it worked, so I asked a few officers if they would run my photo through the app and show me the results.
And that’s when things got kooky. The officers said there were no results — which seemed strange because I have a lot of photos online — and later told me that the company called them after they ran my photo to tell them they shouldn’t speak to the media. The company wasn’t talking to me, but it was tracking who I was talking to.
After a month of being ignored, I decided to knock on more doors. A venture capital firm in Bronxville, N.Y., listed Clearview as one of its investments. The firm hadn’t returned my emails or phone calls, so I took a 40-minute train ride from Manhattan to its office, and with a reporter on the doorstep, they finally agreed to answer some questions.
That same week, I received a call from Lisa Linden, who identifies herself online as a “veteran crisis communications strategist.” She told me she was now representing Clearview AI. She set up an interview with the company’s founder, Hoan Ton-That, and connected me with proponents of the app.
At the same time, as reporting continued, my colleagues shared their expertise. Investigative reporters at The Times helped map the company’s footprint by reaching out to their law enforcement sources. Metro reporters offered contacts to flesh out the company’s background in New York. And an interactive news journalist did a forensic analysis of the company’s app, discovering code that revealed the ability to pair it with augmented reality glasses.
As a relative newcomer to The Times, I found it incredible to see the resources in this newsroom, and to use them to reveal a company that wants to unmask us all.