Why can’t the social networks stop fake accounts?

Every few months, social media companies say they removed another billion fake accounts. So how did a 21-year-old delivery driver in Pennsylvania impersonate Trump family members on Twitter for nearly a year, eventually fooling the president?

The answer has to do with the sheer size of the social networks, the difficulty of catching fakes, and the business incentives of the companies that run the sites.

Facebook said it blocked 4.5 billion accounts in the first nine months of the year, and that it caught more than 99 percent of those accounts before users could flag them. That number of accounts — equivalent to nearly 60 percent of the world’s population — is mind-boggling. It’s also inflated.

The vast majority of those accounts were so-called bots, or automated accounts that are often created en masse by software programs. Bots have been used for years to artificially amplify certain posts or topics so they are seen by more people.

In recent years, Facebook, Twitter and other tech companies have gotten much better at catching bots. They use software that spots and blocks them, often during the registration process, by looking for digital evidence that suggests the accounts are automated.

As Facebook has caught more bots, it has also reported increasingly colossal statistics on how many fake accounts it takes down. Those numbers have brought the company plenty of positive headlines, but “they’re actually not used internally that much,” said Alex Stamos, Facebook’s former chief information security officer, who left the company in 2018. “One person can blow out the stats, because there’s no cost to do an attempt.”

In other words, one person can create a software program that attempts to create millions of Facebook accounts, and when Facebook’s software blocks those bots, its tally of deleted fakes swells.

Facebook has admitted that these statistics are not that helpful. “The number for fake accounts actioned is very skewed by simplistic attacks,” Alex Schultz, Facebook’s vice president of analytics, said last year. The prevalence of fake accounts is a more telling metric, he said. And it shows the company still has a big problem. Despite removing billions of accounts, Facebook estimates that 5 percent of its profiles are fake, or more than 90 million accounts, a figure that hasn’t budged for more than a year.

The social media companies have a much harder time with fake accounts that are created manually — that is, by a person sitting at a computer or tapping away on a phone.

Such fakes don’t carry the same telltale digital signs of a bot. Instead, the companies’ software has to look for other clues, like an account sending multiple strangers the same message. But that approach is imperfect and works better for certain kinds of fakes.

That in part explains why Josh Hall, the Pennsylvania delivery driver, was able to repeatedly impersonate President Trump’s relatives on Twitter and attract tens of thousands of followers before the company took notice.

Manual fakes can be more pernicious than bots because they look more believable. Political operatives use such fakes to spread disinformation and conspiracy theories, while scammers use them to defraud people. Criminals have posed as celebrities, soldiers and even Mark Zuckerberg on social media to trick people into handing over money.

Twitter’s effort to catch impostor accounts is complicated by its policy allowing parody accounts. The company requires parody accounts to be clearly labeled.

Facebook also still struggles with accounts that pose as those belonging to public figures, but periodic reviews by The New York Times suggest the company has gotten better at removing them. Instagram, which Facebook owns, has not made as much progress.

One way to combat the fakes is to require more documentation to create an account. The companies have increasingly required a phone number, but they are loath to make it harder for people to join their sites. Their businesses are predicated on adding more users so they can sell more ads to show them. Twitter in particular also prizes its users’ anonymity; the company has said anonymity enables dissidents to speak out against authoritarian governments.

So to whittle down the number of questionable accounts to review, the companies rely on users to flag them. That strategy is far more efficient and cost-effective for the companies. It also means that the more attention a fake account attracts, the more likely it is to be flagged for a closer look.

Yet it still sometimes takes a while for the companies to act. Mr. Hall gained 77,000 followers posing as President Trump’s brother and 34,000 followers as the president’s 14-year-old son before Twitter took down the accounts, which Mr. Hall had used to spread conspiracy theories. And from 2015 to 2017, people working for the Russian government posed as the Tennessee Republican Party on Twitter, attracting 150,000 followers, including senior members of the Trump administration, while posting racist and xenophobic messages, according to a federal investigation.

A Twitter spokesman said in a statement, “We’re working hard to ensure that violations of our rules against impersonation, particularly when people are attempting to spread misinformation, are addressed quickly and consistently.”

Still, most fakes fail to attract many followers. Mr. Stamos argued that impostor accounts that few people notice don’t have much of an impact. “It gets pretty Zen but: If nobody follows a fake account, does the fake account exist?” he said.

Mr. Stamos said that tech companies face so many threats that they must make hard choices about which problems to tackle, and sometimes the painstaking work of rooting out each fake account isn’t worth it.

“The companies are usually putting effort behind the things that they can show are the worst, not just the things that look bad,” he said. “How do you apply the always finite resources that you have to the problems that are actually causing harm?”