None of the great civil rights pioneers and philosophers could have predicted how personal data — ethereal, floating on a server somewhere unknown — would become the next great civil rights struggle. Personal data is essentially quantified human behavior — and it is incredibly powerful, coveted, bought and sold with no regard for those from whom it was harvested. With our newfound surplus of this quantified data on human behavior, our society will need to establish new laws and regulations to control the conduct of organizations and institutions.
One example of this is the General Data Protection Regulation (GDPR) that the European Union put into effect on May 25, 2018. The regulation requires opt-in consent from users before data can be collected; that sites use the highest possible privacy settings by default; and that personal data be made more difficult to tie back to an individual, either through pseudonymization or through the stronger protection of full anonymization. Users must be informed of how long their data will be retained and be given the right to be digitally "forgotten." Controllers are also required to build data protection into all systems by default, thus reducing the chances of theft.
If the United States were to adopt its own version of the GDPR, it would certainly help. Until then, users will have to do a lot of work themselves to ensure they are protected. On a website run by Oath, the media firm created by the merger of AOL and Yahoo!, a user is asked for consent "to use your . . . data to understand your interests and personalize and measure ads." A user who declines has the option to follow a series of links . . . only to discover that agreeing would have granted Oath permission to share his or her data with more than one hundred ad networks.
It is highly likely that the GDPR will be improved and strengthened in the coming years. After all, it superseded weaker legislation—the Data Protection Directive—implemented in 1998. So one approach to solving the problem is to chip away at it, every decade or so bringing out a new version of the GDPR. In all likelihood this will be the chosen solution.
Still, we would argue that the GDPR is an all-too-typical example of using Industrial Revolution rules to deal with Autonomous Revolution issues.
If we really want to solve the problem of Internet privacy, we have to engage in a paradigm shift in how we think about it. What is really broken here is the defective freemium business model of the Internet — and the way to solve that problem is to give the user complete ownership and control over his or her data.
One way to do this would be to create information fiduciaries that would hold individuals' information. Think of the fiduciary as a personal information safe-deposit box that can be unlocked only if the holder supplies a key or knowingly withdraws the information and sends it to the desired recipient.
These fiduciaries would have the right to collect all the personal information they can from legal sources. The owner, in turn, would have complete control over who can access the information stored with a fiduciary.
The fiduciary would organize the information by tiers or levels. The first level would be pretty innocuous stuff, while the highest level would be highly sensitive kinds of information that an owner would permit only a limited number of institutions to view.
The individual — the owner — would have the right to examine the information in his or her file at any time. For simplicity's sake, the owner might opt to use only one fiduciary, and he or she would have the ability to work with that fiduciary to correct any misinformation. Should the owner choose to release information to an Internet site, it would be illegal for that site to sell it or provide it to a third party.
Here is the way such a system could work. Suppose an information owner wants to get free services from Google. He or she would agree to provide Google with First Level information in return for its services. Google might deem the owner's First Level information not valuable enough to warrant those services and ask for Second Level information as well, or, if that's not acceptable, propose a fee, say $5 a month.
Perhaps an auto insurance company wants access to the owner's Fourth Level information to determine what price it will give her on her insurance. She could agree to give the company access, but it would not have the right to use that data for any other purpose.
We suspect the fiduciaries of the world would love this business model because they would charge information users for access to the information.
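The tiered-access scheme described above can be sketched in code. This is a purely illustrative sketch, assuming a hypothetical fiduciary with numbered sensitivity tiers and owner-issued grants; the class names, tier numbers, and data are all invented for the example, not a real system or API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an information fiduciary with tiered access.
# Everything here (Fiduciary, Grant, the sample data) is hypothetical.

@dataclass
class Grant:
    recipient: str      # e.g., "Google"
    max_tier: int       # highest tier the owner has unlocked for this recipient
    purpose: str        # the single purpose the owner approved

@dataclass
class Fiduciary:
    records: dict = field(default_factory=dict)  # tier -> list of records
    grants: list = field(default_factory=list)   # owner-issued grants

    def store(self, tier: int, data: str) -> None:
        self.records.setdefault(tier, []).append(data)

    def authorize(self, recipient: str, max_tier: int, purpose: str) -> None:
        """The owner explicitly unlocks tiers up to max_tier for one recipient."""
        self.grants.append(Grant(recipient, max_tier, purpose))

    def request(self, recipient: str, tier: int, purpose: str) -> list:
        """A recipient may read a tier only with a matching grant."""
        for g in self.grants:
            if g.recipient == recipient and tier <= g.max_tier and g.purpose == purpose:
                return self.records.get(tier, [])
        raise PermissionError(f"{recipient} has no grant for tier {tier} ({purpose})")

vault = Fiduciary()
vault.store(1, "preferred language: English")          # innocuous First Level data
vault.store(4, "driving record: one speeding ticket")  # sensitive Fourth Level data

# The owner trades First Level information for free services:
vault.authorize("Google", max_tier=1, purpose="ad personalization")
print(vault.request("Google", 1, "ad personalization"))  # access granted

# But the same grant does not reach Fourth Level information:
try:
    vault.request("Google", 4, "ad personalization")
except PermissionError as err:
    print(err)
```

The key design point is that every release is scoped twice, by tier and by purpose, so a grant made to obtain free services cannot be silently reused for anything else.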
A new kind of privacy
What would it take to begin this transformation of the privacy model? The first step would be to turn companies such as Equifax, Experian, TransUnion, Acxiom, and ChoicePoint into those fiduciaries. Even a company like Google or Facebook could establish and offer fiduciary services.
Suppose a company's algorithm was very good at analyzing credit risk, and a lot of banks wanted to use its services. A bank could agree to pay the company for analyzing a person's private information. With the explicit permission of that person, the company would get access to his or her information from the information fiduciary. Once the company had performed this task for the bank, it would by law erase the information and not use it for any other purpose.
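The analyze-then-erase obligation above can be made concrete with a small sketch. This is hypothetical code, not a real system: the session class, the toy scoring rule, and the sample records are all invented to show the shape of the flow, in which the bank receives only a score while the borrowed data is destroyed.

```python
# Illustrative sketch of the bank / credit-analysis flow. All names and the
# scoring rule are hypothetical; no real fiduciary API exists yet.

class AnalysisSession:
    """Holds data released by a fiduciary only for one analysis task."""

    def __init__(self, records: list):
        self._records = list(records)

    def score_credit_risk(self) -> float:
        # Stand-in scoring rule: each negative record lowers the score by 0.2.
        negatives = sum(1 for r in self._records if "late payment" in r)
        return max(0.0, 1.0 - 0.2 * negatives)

    def erase(self) -> None:
        """By law, the analyst must erase the data once the task is done."""
        self._records.clear()

# Records released with the owner's explicit permission:
records = ["on-time payment 2023", "late payment 2024"]
session = AnalysisSession(records)
score = session.score_credit_risk()
session.erase()

print(score)             # prints 0.8: the bank receives only the score
print(session._records)  # prints []: the underlying data has been erased
```

The point of the sketch is the asymmetry of what survives the transaction: the derived result leaves the session, while the personal data that produced it does not.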
This concept of an information fiduciary would have an added benefit: it would offer a high level of protection against illegal search and seizure by the government. The government could not buy our information on the open market. Instead, it would be required to obtain a search warrant to look at the information stored with the fiduciary.
Of course, this is a radical idea. Many people—and institutions—will object to it. But think about it this way: Our current situation just happened. No one really understood what was going to transpire, so we just sat back and watched. Now we are stuck with a system that has some very undesirable aspects. But suppose for a moment that, at the turn of the twenty-first century, the visionaries of the Internet had appeared before Congress and said:
As we go through the process of adapting to phase change, we have a choice between using our old rules to chip away at the problem or adopting a radical new approach that can cure it. We believe the right approach is the radical one, because our individual liberties are too precious to put at risk.
Had that occurred, had new institutions based on the new rules been instituted, the Internet would have evolved very differently from what it has become today. Had the freemium model never been allowed to emerge, many of our current threats to privacy might be all but unknown.
Purging the existing model as we have suggested above would be gut-wrenching and extremely difficult, but we believe it offers the best solution to the problem. Other approaches, however, would also greatly improve the situation.
There is a lot of hard work to be done, but if we commit to aggressively attacking the problem we can have our privacy and our liberty for years to come.