How to Exercise the Power You Didn’t Ask For
A version of this piece, as it originally appeared in the Harvard Business Review on September 19, 2018, is accessible here.
I used to be largely indifferent to claims about the use of private data for targeted advertising, even as I worried about privacy more generally. How much of an intrusion was it, really, for a merchant to hit me with a banner ad for dog food instead of cat food, since it had reason to believe I owned a dog? And any users who were sensitive about their personal information could simply click on a menu and opt out of that kind of tracking.
But times have changed.
The digital surveillance economy has ballooned in size and sophistication, while keeping most of its day-to-day tracking apparatus out of view. Public reaction has ranged from muted to deeply concerned, with a good portion of those in the concerned camp feeling so overwhelmed by the pervasiveness of their privacy loss that they’re more or less reconciled to it. It’s long past time not only to worry but to act.
Advertising dog food to dog owners remains innocuous, but pushing payday loans to people identified as being emotionally and financially vulnerable is not. Neither is targeted advertising that is used to exclude people. Julia Angwin, Ariana Tobin, and Madeleine Varner found that on Facebook targeting could be used to show housing ads only to white consumers. Narrow targeting can also render long-standing mechanisms for detecting market failure and abuse ineffective: State attorneys general or consumer advocates can’t respond to a deceitful ad campaign, for instance, when they don’t see it themselves. Uber took this predicament to cartoon villain extremes when, to avoid sting operations by local regulators, it used data collected from the Uber app to figure out who the officials were and then sent fake information about cars in service to their phones.
These are relatively new problems. Originally, our use of information platforms, whether search engines or social media, wasn’t tailored much to anything about us, except through our own direct choices. Your search results for the query “Are vaccinations safe?” would be the same as mine or, for a term like “pizza,” would vary only in straightforward ways, such as by location, offering up nearby restaurants. If you didn’t like what you got, the absence of tailoring suggested that the search platform wasn’t to blame; you were simply seeing a window on the web at large. For a long time that was a credible, even desirable, position for content aggregators to take. And for the most part the aggregators themselves couldn’t predict what their own platforms would offer up. It was a roulette wheel, removed from any human agent’s shaping.
Today that’s not true. The digital world has gone from pull to push: Instead of actively searching for specific things, people read whatever content is in the feeds they see on sites like Facebook and Twitter. More and more, people get not a range of search results but a single answer from a virtual concierge like Amazon’s Alexa. And it may not be long before such concierges rouse themselves to suggest it’s time to buy a gift for a friend’s birthday (perhaps from a sponsor) or persistently recommend Uber over Lyft when asked to procure a ride (again, thanks to sponsorship).
Is it still fair for search platforms to say, “Don’t blame me, blame the web!” if a concierge provides the wrong directions to a location or the wrong drug interaction precautions? While we tend not to hold Google and Bing responsible for the accuracy of every link they return on a search, the case may be different when platforms actively pluck out only one answer to a question — or answer a question that wasn’t even asked.
We’ve also moved to a world where online news feeds — and in some cases concierges’ answers to questions — are aggressively manipulated by third parties trying to gain exposure for their messages. There’s great concern about what happens when those messages are propaganda — that is, false and offered in bad faith, often obscuring their origins. Elections can be swayed, and people physically hurt, by lies. Should the platforms be in the business of deciding what’s true or not, the way that newspapers are? Or does that open the doors to content control by a handful of corporate parties — after all, Facebook has access to far more eyeballs than a single newspaper has ever had — or by the governments that regulate them?
Companies can no longer sit this out, much as they’d like to. As platforms provide highly curated and often single responses to consumers’ queries, they’re likely to face heated questions — and perhaps regulatory scrutiny — about whom they’re favoring or disfavoring. They can’t just shrug and point to a “neutral” algorithm when asked why their results are the way they are. That abdication of responsibility has led to abuse by sophisticated and well-funded propagandists, who often build Astroturf campaigns that are meant to look as if they’re grassroots.
So what should mediating platforms do?
An answer lies in recognizing that today’s issues with surveillance and targeting stem from habit and misplaced trust. People share information about themselves without realizing it and are unaware of how it gets used, passed on, and sold. But the remedy of allowing them to opt out of data collection leads to decision fatigue for users, who can articulate few specific preferences about data practices and simply wish not to be taken advantage of.
Restaurants must meet minimum standards for cleanliness, or (ideally) they’ll be shut down. We don’t ask the public to research food safety before grabbing a bite and then to “opt out” of the dubious dining establishments. No one would rue being deprived of the choice to eat food contaminated with salmonella. Similar intervention is needed in the digital universe.
Of course, best practices for the use of personal information online aren’t nearly as clear-cut as those for restaurant cleanliness. After all, much of the personalization that results from online surveillance is truly valued by customers. That’s why we should turn to a different kind of relationship for inspiration: one in which the person gathering and using information is a skilled professional hired to help the person whose data is in play. That is the context of interactions between doctors and patients, lawyers and clients, and certified financial planners and investors.
Yale Law School’s Jack Balkin has invoked these examples and proposed that today’s online platforms become “information fiduciaries.” We are among a number of academics who have been working with policymakers and internet companies to map out what sorts of duties a responsible platform could embrace. We’ve found that our proposal has bipartisan appeal in Congress, because it protects consumers and corrects a clear market failure without the need for heavy-handed government intervention.
“Fiduciary” has a legalese ring to it, but it’s a long-standing, commonsense notion. The key characteristic of fiduciaries is loyalty: They must act in their charges’ best interests, and when conflicts arise, must put their charges’ interests above their own. That makes them trustworthy. Like doctors, lawyers, and financial advisers, social media platforms and their concierges are given sensitive information by their users, and those users expect a fair shake — whether they’re trying to find out what’s going on in the world or how to get somewhere or do something.
A fiduciary duty wouldn’t broadly rule out targeted advertising — dog owners would still get dog food ads — but it would preclude predatory advertising, like promotions for payday loans. It would also prevent data from being used for purposes unrelated to the expectations of the people who shared it, as happened with the “personality quiz” survey results that were later used to psychometrically profile voters and then to attempt to sway their political opinions.
This approach would eliminate the need to sort good content from bad, because it would let platforms make decisions based on what their users want, rather than on what society wants for them. Most users want the truth and should be offered it; others may not value accuracy and may prefer colorful and highly opinionated content instead — and when they do, they should get it, perhaps labeled as such. Aggregators like Google News and Facebook are already starting to make such determinations about what to include as “news” and what counts as “everything else.” It may well be that an already-skeptical public only digs in further when these giants offer their judgments, but well-grounded tools could also inform journalists and help prevent propaganda posted on Facebook from spreading into news outlets.
More generally, the fiduciary approach would bring some coherence to the piecemeal privacy protections that have emerged over the years. The right to know what data has been collected about you, the right to ask that it be corrected or purged, and the right to withhold certain data entirely all jibe with the idea that a powerful company has an obligation to behave in an open, fair way toward consumers and put their interests above its own.
While restaurant cleanliness can be managed with readily learned best practices (keep the raw chicken on a separate plate), doctors and lawyers face more complicated questions about what their duty to their patients and clients entails (should a patient with a contagious and dangerous disease be allowed to walk out of the office without treatment or follow-up?). But the quandaries facing online platforms are even harder to address. Indeed, one of the few touchstones of data privacy — the concept of “personally identifiable information,” or PII — has become completely blurry, as identifying information can now be gleaned from previously innocuous sources, making nearly every piece of data about a person sensitive.
Nevertheless, many online practices will always be black-and-white breaches of an information fiduciary’s duty. If Waze told me that the “best route” somewhere just so happened to pass by a particular Burger King, and it gave that answer to get a commission if I ate there, then Waze would be putting its own interests ahead of mine. So would Mark Zuckerberg if hypothetically he tried to orchestrate Facebook feeds so that Election Day alerts went only to people who would reliably vote for his preferred candidate. It would be helpful to take such possibilities entirely off the table now, at the point when no one is earning money from them or prepared to go to bat for them. As for the practices that fall into a grayer area, the information fiduciary approach can be tailored to account for newness and uncertainty as the internet ecosystem continues to evolve.
Ideally, companies would become fiduciaries by choice, instead of by legal mandate. Balkin and I have proposed how this might come about — with, say, U.S. federal law offering relief from the existing requirements of individual states if companies opt in to fiduciary status. That way, fiduciary duties wouldn’t be imposed on companies that don’t want them; they could take their chances, as they already do, with state-level regulation.
In addition, firms would need to structure themselves so that new practices that raise ethical issues are surfaced, discussed internally, and disclosed externally. This is harder than establishing a standard compliance framework, which assumes that right and wrong are already known and that managers need only ensure that employees stay within the lines. Instead, the idea should be to encourage employees working on new projects to flag when something could be “lawful but awful,” and to congratulate them, rather than retaliate against them, for calling attention to it. This is a principle of what is known in medicine and some other fields as a “just culture,” and it’s supported by the management concept of “psychological safety,” wherein a group is set up so that people feel comfortable expressing reservations about what they’re doing. Further, information fiduciary law as it develops could provide some immunity not just to individuals but to firms that in good faith alert the public or regulators to iffy practices. Rather than relying on investigations by attorneys general or plaintiffs’ lawyers to uncover problems, we should create incentives for bringing them to light and addressing them industrywide.
That suggests a third touchstone for an initial implementation of information fiduciary law: Any public body charged with offering judgments on new issues should be able to make them prospectively rather than retroactively. For example, the IRS can give taxpayers a “private letter ruling” before they commit to one tax strategy or another. On truly novel issues, companies ought to be able to ask public authorities — whether the Federal Trade Commission or a new body chartered specifically to deal with information privacy — for guidance rather than having to make a call in unclear circumstances and then potentially face damages if it turns out to be the wrong one.
Any approach that prioritizes duty to customers over profit risks trimming margins. That’s why we need to encourage a level playing field, where all major competitors have to meet a baseline of respect for consumers. But the status quo is simply not acceptable. Though cleaning up their data practices will raise costs for the companies that abuse consumers’ privacy, that’s no reason to allow the abuse to continue, any more than we should heed polluters who complain that their margins will suffer if they’re forced to stop dumping contaminants in rivers.
The problems arising from a surveillance-heavy digital ecosystem are getting more difficult and more ingrained. It’s time to try a comprehensive solution that’s sensitive to complexities, geared toward addressing them as they unfold, and based on duty to the individual consumers whose data might otherwise be used against them.