Professor Acquisti and two colleagues, Laura Brandimarte and the behavioral economist George Loewenstein, published research on this behavior nearly six years ago. “Providing users of modern information-sharing technologies with more granular privacy controls may lead them to share more sensitive information with larger, and possibly riskier, audiences,” they concluded.
The phenomenon even has a name: the “control paradox.”
“Privacy control settings give people more rope to hang themselves,” Professor Loewenstein told me. “Facebook has figured this out, so they give you incredibly granular controls.”
This paradox is hardly the only psychological quirk for the social network to exploit. Consider default settings. A large body of research in behavioral economics has found that people tend to stick with the default setting of whatever is offered to them, even when they could easily change it. This holds whether the default governs the share of each paycheck deposited into a 401(k) retirement plan or the amount of personal information shared online.
“Facebook is acutely aware of this,” Professor Loewenstein told me. In 2005, its default settings shared most profile fields with, at most, friends of friends. Nothing was shared by default with the full internet. By 2010, however, likes, name, gender, picture and a lot of other things were shared with everybody online. “Facebook changed the defaults because it appreciated their power,” Professor Loewenstein added.
It is time for Congress to appreciate it, too.
The question for members of Congress goes beyond how much Facebook and others may be exploiting our inability to rationally assess the pros and cons of sharing information — profiting from the difficulty we have measuring the immediate reward of the cute puppy video against the more distant risk of having our data sloshing around the internet for years.
As we devote more of our lives to online experiences, while offering data about ourselves in exchange for information, entertainment or whatever, the critical question is whether, given the tools, we can be trusted to manage the experience. The increasing body of research into how we behave online suggests not.
An experiment by Susan Athey of Stanford University’s Graduate School of Business, along with Christian Catalini and Catherine Tucker of the Sloan School of Management at the Massachusetts Institute of Technology, found that people who profess concern about privacy will nonetheless hand over their friends’ email addresses in exchange for some pizza. They also found that providing consumers with reassuring though irrelevant information about their ability to protect their privacy makes them less likely to avoid surveillance.
Another experiment revealed that people are more willing to come clean about their engagement in illicit or questionable behavior when they believe others have done so, too. Warning consumers about possible privacy risks can encourage them to be more careful, as we might expect. But people can react counterintuitively to perceived security risks.
When people were exposed to one of three different websites that asked embarrassing questions like “Have you ever tried to peek at someone else’s email without them knowing?” the most dangerous-looking website — decorated with a horned devil and the words “How BAD Are U?” — got, by far, the most positive responses.
Those in the industry often argue that people don’t really care about their privacy — that they may seem concerned when they answer surveys, but still routinely accept cookies and consent to have their data harvested in exchange for cool online experiences.
Professor Acquisti thinks this is a fallacy. The cognitive hurdles to managing our privacy online are simply too steep.
This is all pretty novel. The idea of online privacy didn’t exist a generation ago. While we are good at handling our privacy in the offline world, lowering our voices or closing the curtains as the occasion may warrant, there are no cues online to alert us to a potential privacy invasion. It seems foolhardy to think we could determine the boundaries of data collection on our own.
Even if we were to know precisely what information companies like Facebook have about us and how it will be used, which we don’t, it would be hard for us to assess potential harms. Could we face higher prices online because Amazon has a precise grasp of our price sensitivities? Might our online identity discourage banks from giving us a loan? What else could happen? How does the risk stack up against the value of a targeted ad, or a friend’s birthday reminder?
Members of Congress have mostly let market forces prevail online, unfettered by government meddling. Privacy protection in the internet economy has relied on the belief that consumers will make rational choices. Give them a simple dial to manage their preferences — and accurate information, in a timely fashion and a big font, about what they are sharing online — and they will make the right calls to avoid discrimination, protect themselves from predatory behavior and thrive.
Europe’s stringent new privacy protection law, which Facebook has promised to apply in the United States, may do better than the American system of disclosure and consent. Data collectors in Europe will have to get explicit consent from users before harvesting their data, rather than simply offering them a chance to opt out. Still, the European system also relies mostly on faith that consumers will make rational choices.
The more that psychologists and behavioral economists study psychological biases and quirks, the clearer it seems that rational choices alone won’t work. “I don’t think any kind of disclosure or opt in or opt out is going to protect us from our worst instincts,” Professor Loewenstein argued.
What to do? Professor Acquisti suggests flipping the burden of proof. The case for privacy regulation rests on consumers’ proving that data collection is harmful. Why not ask the big online platforms like Facebook to prove they can’t work without it? If reducing data collection imposes a cost, we could figure out who bears it — whether consumers, advertisers or Facebook’s bottom line. That could help set societywide boundaries about what data to collect and what to leave alone.
It would no longer be up to confused users to protect themselves.