By Bloomberg View
“What happens to you here is forever,” warns O’Brien, an agent of the Thought Police, in “1984.” He would’ve loved the internet.
Online, everything is forever. All the sites you visit, articles you like, posts you share: They’re in the permanent file. Facebook’s most recent scandal - in which it allowed an outside company to furtively collect its users’ data on a huge scale - caused an uproar in large part because the people affected thought they were taking a harmless and ephemeral quiz; instead, their answers were stored for years and used in ways they never expected.
This kind of thing will keep happening so long as online privacy is governed by the concept of consent. It’s true that internet users “consent” to share their data in the abstract - they accept the privacy policies, click the right boxes, jump through the right hoops to keep on doing what they were doing. But this choice is not informed, and that’s by design. The last thing companies such as Facebook want is for their users to think about what will be done with their data. Consent of this kind serves a narrow legal purpose, but is otherwise meaningless.
Is there a better way?
One idea, proposed by Jack Balkin of Yale Law School, is for online service providers to be deemed “information fiduciaries.” Much as a lawyer must protect a client’s confidentiality, and a financial adviser must give trustworthy advice, these companies would be required to act in the best interests of their users when handling sensitive data. This would acknowledge two important facts about the relationship between internet users and service providers.
One is that the relationship is asymmetrical. Tech companies know a great deal about their users, and have powerful tools to influence them, but the users are essentially in the dark about how the services work. Such power imbalances are a standing invitation to abuse.
Another fact is that the relationship is based on trust. Although tech companies assert that they’ll protect your personal information - “Privacy is very important to us,” Mark Zuckerberg once said, evidently in earnest - users have no meaningful way of evaluating such statements. You just have to trust them.
In other realms of professional life, relationships with those two characteristics are generally bound by duties. A patient trusts a doctor not to expose her intimate medical details, and the doctor is compelled by professional, ethical and legal standards to protect her interests. Likewise with accountants and attorneys.
Something similar could be applied to online service providers - perhaps, to begin with, on a voluntary basis.
The federal government could establish a set of best practices, and companies could choose to adopt them by agreeing to become fiduciaries. They could agree (say) to refrain from using data in unexpected or deceitful ways, pledge to share it only with trustworthy third parties for a limited purpose, and commit to handling it responsibly. “Their central obligation,” as Balkin puts it, would be “that they cannot act like con artists - inducing trust in their end-users to obtain personal information and then betraying end-users or working against their interests.” In return for accepting such obligations, companies could be offered tax benefits, immunity from certain lawsuits, and protection from America’s expanding patchwork of state and local privacy laws.
There’s no perfect solution to this problem. The fiduciary approach, for instance, won’t always give clear case-by-case guidance on the proper use of data. Even with a grant of protection from some lawsuits, litigation would sometimes be needed to establish what the duties entail in specific instances.