The long-raging debate around facial recognition software, with all the privacy worries it brings with it, has taken on new urgency as the technology has improved and spread by leaps and bounds.
On Tuesday, San Francisco became the first major American city to block police and other law enforcement agencies from using the software.
Here is a look back at some of the many controversies over facial recognition and its use.
The 2001 Super Bowl
In January 2001, the city of Tampa, Fla., used a facial recognition surveillance system as it hosted Super Bowl XXXV.
It was an early real-world example of how the technology could be used and prompted a backlash from privacy advocates including the American Civil Liberties Union.
The system identified 19 people thought to be subjects of outstanding warrants, though none were arrested. The police were not prepared for the number of matches, nor for the difficulty of finding and arresting the identified individuals.
“We thought we were ready to use it, but getting through the crowd and the architecture of the stadium proved overwhelming,” Detective Bill Todd said at the time.
The webcam that doesn’t see black people
A decade ago, Desi Cryer uploaded a video to YouTube in which he showed that Hewlett-Packard’s new face-tracking web camera did not appear to see black people.
The camera failed to track Mr. Cryer, who is black, but had no problem following his white colleague, Wanda Zamen, when she entered its field of vision.
At the time, Mr. Cryer said he found the situation amusing, but it portended the more serious potential for bias to be baked into facial recognition and artificial intelligence systems.
The Snowden revelations
In 2014, The New York Times reported that the National Security Agency had intercepted millions of images a day, tens of thousands of which were identified as “facial recognition quality,” according to documents obtained by Edward J. Snowden.
That revelation raised concerns among civil liberties advocates. At the time, the technology was still seen as nascent, though experts noted that methods to analyze such data were constantly improving.
“There are still technical limitations on it, but the computational power keeps growing, and the databases keep growing, and the algorithms keep improving,” Alessandro Acquisti, a Carnegie Mellon University researcher who studies facial recognition technology, told The Times at the time.
The race problems continue
In 2015, Google apologized after its then-new Photos application labeled some black people as “gorillas.” The company said in a statement that it was “appalled and genuinely sorry,” but it was just one of many examples of facial recognition technology’s racial failings.
The problem, in part, is that facial recognition is only as good as the examples on which it is trained. And one widely used data set was estimated to be more than 75 percent male and more than 80 percent white.
Facial recognition at entertainment venues
Last year, The Times reported that Madison Square Garden had been quietly using facial-scanning technology for security.
But some vendors and team officials said that using the technology for customer engagement and marketing could be even more valuable for sports facilities than for security.
Advocates pushed back.
“We are in a kind of legal Wild West when it comes to this stuff,” Jay Stanley, a policy analyst at the A.C.L.U., told The Times. “I should know if I am being subject to facial recognition if I am going into any business, including a stadium.”
Such technology was also reportedly used at Taylor Swift concerts to identify potential stalkers.
Microsoft calls for regulations
In July, Microsoft became the first tech giant to join civil liberties advocates and others in calling for federal regulations on facial recognition.
In a blog post, Bradford L. Smith, the company’s president, urged Congress to take action.
“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” he said.
Mr. Smith suggested that Congress appoint a commission to study the issue and oversee the technology’s use.
Worries about a potential for bias in Amazon’s technology
A study this year raised concerns about Rekognition, a facial recognition system that Amazon had aggressively marketed in recent years to local and federal law enforcement.
In the study, the M.I.T. Media Lab found that the system misclassified women as men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time.
“Not only do I want to see them address our concerns with the sense of urgency it deserves,” Representative Jimmy Gomez, a California Democrat who has investigated Amazon’s facial recognition practices, told The Times, “but I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”
China using it to profile Uighurs
For years, privacy advocates have warned that governments could use such software for nefarious purposes. Last month, The Times reported that, according to experts, China was the first known government to use it for racial profiling.
The country has been using a wide-ranging, secret facial recognition system to track and control the Uighurs, a largely Muslim minority, The Times reported.
The advanced system is integrated into China’s growing network of surveillance cameras and is constantly tracking where Uighurs come and go.
“If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity,” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, said at the time.