Should We Sacrifice Privacy for the Sake of Security?
To address catastrophic risks, giving up some privacy may be better than the alternatives
Over at the Effective Altruism Forum, Maxwell Tabarrok has an excellent survey piece on the “Vulnerable World Hypothesis.” It’s a great primer for anyone who wants to know something about what academics think about catastrophic risks facing humanity. Much of the essay is a response to a 2019 paper by Professor Nick Bostrom, who runs the Future of Humanity Institute at the University of Oxford.
Bostrom is worried that we live in a “vulnerable world,” defined as one where the destruction of civilization as we know it becomes extremely likely due to society reaching a particular stage of technological development. Bostrom considers a number of potential solutions to this problem, including restricting certain technological advancements, censorship of various kinds, risk assessment, and also limiting the range of human preferences so as to eliminate counterproductive motives. He ultimately finds no option to be perfect, and some to be quite flawed. Most of his potential solutions seem to be ineffective in at least some significant cases. He concludes that a system of global governance involving strict surveillance of citizens may be the best approach to interdict and stop destructive acts before they can be carried out (although he acknowledges this solution is not perfect either).
Tabarrok sees this proposed solution as opposed to the “enlightenment values” of liberalism, and he offers a very reasoned response to Bostrom, essentially arguing that even if we do live in a vulnerable world, the surveillance proposal might not work given the appearance of low and possibly declining state capacity in many developed countries. My colleague Adam Thierer also rejects Bostrom’s proposal, both on the grounds that he is skeptical of the ability of the government to implement these kinds of regulations, and also because Thierer’s outlook is far rosier—he doesn’t appear to believe we live in a vulnerable world.
For this essay, I’d like to play more of a devil’s advocate, offering a perspective that is somewhat more sympathetic to Bostrom’s position, though one that may not go all the way towards endorsing his solution. It is easy to dismiss his position on liberty or privacy grounds. However, suppose we take for granted that the vulnerable world hypothesis is true and then ask ourselves, “Given this is the case, what do we do now?” From that perspective, the solution Bostrom offers may not be so bad relative to the alternatives.
Most of us would likely agree that if, absent some control measures, the destruction of civilization is a real possibility or even a near inevitability, then some restrictions on human rights and privileges that we currently enjoy would be required. If the alternative is the end of the human race, or at least death and misery for large swaths of the population, some sacrifice is going to be needed. In that case, privacy actually seems like something we could perhaps learn to live without, given its rather nebulous form and also because ongoing technological changes may be eradicating it anyway. In fact, there may even be a few unintended benefits from eliminating privacy as we now know it, which I will discuss.
What would the surveillance state look like?
In his article, Bostrom discusses a “high-tech panopticon,” referencing the circular open prison system envisioned by the philosopher Jeremy Bentham, whereby an authority can view all of the inmates in the prison simultaneously from a privileged position at the center. Bostrom also envisions members of the public being tagged with a “freedom tag,” so that they can be tracked and monitored by the government. In his words:
Encrypted video and audio is continuously uploaded from the device to the cloud and machine-interpreted in real time. AI algorithms classify the activities of the wearer, his hand movements, nearby objects, and other situational cues. If suspicious activity is detected, the feed is relayed to one of several patriot monitoring stations. These are vast office complexes, staffed 24/7.
Personally, I doubt a “freedom tag” is even necessary. We already have such a tag, and it’s called our cell phone. Most people are already more or less attached to theirs at the hip. It is also hard to imagine the government designing a device that people would want to use or find convenient. This has already been accomplished by private companies like Apple and Samsung, so why reinvent the wheel?
More realistically, I suspect what a system of high-tech government surveillance might look like is some kind of AI algorithm simultaneously monitoring a variety of devices that we already use. This might include reading communications via email on our computers, listening to conversations picked up through the microphone and video cameras on our phones, monitoring our Google searches, tracking websites we visit on our internet browser, and following our movements via GPS tracker. This monitoring system could also rely on external devices not in our possession, like the cameras now ubiquitous on urban streets. Facial recognition and gait surveillance technology could scan through data generated by these devices in order to locate us. Our purchases could be tracked through our credit card exchanges or, in the future, through cryptocurrency transactions recorded to a blockchain.
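To make the relay-on-suspicion idea concrete, the monitoring pipeline described above could be caricatured as a simple corroboration filter: flag only when multiple independent data streams point the same way, and only then escalate to a human reviewer. This is a toy illustration, not anything from Bostrom’s paper; the signal sources, risk terms, and threshold are invented placeholders, and a real system would presumably use trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # hypothetical stream, e.g. "email", "search", "purchase"
    content: str  # text extracted from that stream

# Invented placeholder terms; stands in for a trained activity classifier.
RISK_TERMS = {"fissile", "pathogen synthesis"}

def risk_score(signals):
    """Count how many independent signals contain a flagged term."""
    return sum(
        any(term in s.content.lower() for term in RISK_TERMS)
        for s in signals
    )

def should_escalate(signals, threshold=2):
    """Relay to a human monitoring station only when multiple streams
    corroborate each other -- the 'limited purpose' idea in miniature."""
    return risk_score(signals) >= threshold
```

The design point is the threshold: a single ambiguous hit stays with the algorithm, and only corroborated patterns ever reach a person, which is what would distinguish narrow catastrophic-risk detection from general-purpose policing.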
A monitoring system like this very quickly begins to sound like something out of “1984” or “Brave New World.” But at the same time, it is worth pondering how different this world actually is from the world we live in now. Already, most of these activities are monitored by some third party, albeit in a more decentralized fashion and in a voluntary manner, compared to the government-mandated system Bostrom envisions. Functionally, whether a government monitors us or whether private companies monitor us might not matter that much from the standpoint of how we behave in our daily lives. There might be some psychological cost to knowing our behavior is being tracked by the government, and that this information could potentially be used against us. But, beyond that inconvenience, assuming one stays within the confines of the law, our behavior might not in fact change all that much relative to the status quo. Let’s be honest. Most of us don’t lead very exciting lives beyond going to work, buying groceries, and picking our kids up at school. That’s unlikely to change much under surveillance. As a result, it is not obvious that we should unambiguously say “no” to Bostrom’s proposal, given what’s at stake. At least some further thought into the matter is merited.
Which rights matter?
Let’s for a moment take for granted that certain rights might have to be sacrificed to address catastrophic risks. What might some of those rights be? (I understand not everyone will consider all the same things “rights”; some might be considered privileges.) Here is a list of potential options:
The right to life
The right to economic liberty
The right to freedom of movement
The right to speak freely (includes religion)
The right to privacy
Already, we sacrifice a number of these rights in a variety of contexts. I am not suggesting this is good, as the concept of “rights” seems to imply inviolability. Nevertheless, it’s worth noting that currently my economic rights are infringed upon by regulations of various kinds emanating from all levels of government. Even the right to life is routinely violated, given legalized abortion and the death penalty. Our movement is severely restricted through immigration laws, and our privacy can be invaded through a judge-issued warrant. All these examples highlight that Americans don’t find these rights exactly sacrosanct, and many of us are willing to trade some of them off on the margin for more security (or perceived security).
There is clearly a tradeoff involved between some of the different rights on this list, though this is not always true. For example, it’s easy to see how freedom of movement and economic liberty tend to go hand in hand. In fact, you probably can’t have true economic liberty without freedom of movement, so perhaps the latter is implied by the former, much like I’ve listed that freedom of religion is implied by freedom of speech.
However, it is also conceivable that strengthened restrictions on some rights could allow for loosened restrictions on others. For example, if I allow my purchases, movement, and communications to be tracked, then conceivably I could engage in a wider variety of market activities, since what otherwise might be viewed as suspicious or risky activity might be deemed safe once my intent is clearer. An obvious example would be that regulations restricting purchases of certain weapons could be relaxed, so long as my activities and behavior in purchasing the weapons are consistent with someone interested in self-defense and not mass murder. Something similar is probably true of free speech. We might be able to avoid explicitly outlawing the public’s ability to post or read information online about how to build a nuclear device if someone doing so knows that they are going to get noticed by the authorities. And when this behavior is combined with an attempt to buy fissile materials, FBI agents will likely come knocking at the door.
Would most people accept less privacy for more economic freedom and truly free speech? I’m not sure. But again, already a variety of corporations are able to track my movements, purchases, text messages, and reading habits. I share my internet passwords with a password management system, which potentially gives someone access to some pretty sensitive accounts. The government probably already has access to most or all of this information, too (we can thank Edward Snowden for making us aware of that).
If I were able to cut a deal with a central authority, whereby I continue to allow a limited range of actors access to my information, whilst in return I also receive benefits of some kind, such as protection, I might conceivably take that deal. Many people, myself largely included, have an attitude such as, “I don’t care if someone goes through my stuff, because I have nothing to hide.” This argument tends not to sway people who see privacy as a fundamental right. But for those of us who don’t see privacy as a right, the statement makes a fair amount of sense, so long as the laws are just. Moreover, having an algorithm go through your photos or search history is not exactly the same as having a real person do it (even if a real person might eventually be called upon if a deeper review is triggered).
An unexpected benefit of a government monitoring system along the lines envisioned by Bostrom is that it could lead to more pro-social behavior along a variety of dimensions. For example, if someone knows an algorithm is monitoring their internet searches, they might visit fewer pornographic websites. An abusive husband might be less likely to physically assault his wife or scream at his kids if he knows someone is listening to him through his cell phone. Someone committed to quitting smoking might be less likely to buy cigarettes if she knows her purchases are being tracked. In a way, it’s kind of like how religious people behave when they believe God is watching them 24/7. Twenty-first century technology ends up being a stand-in for religion.
There are limits to how far we should reasonably allow this surveillance state to go. We wouldn’t want police officers to have the ability to search anyone’s house or car for any reason whatsoever without a warrant. Warrants are there to prevent abuses of power. So, to be clear, current legal protections for citizens make a lot of sense for situations involving conventional risks, but these are different from situations involving catastrophic risks. There is no ex-post punishment option when civilization is at stake; there are only ex-ante preventative measures. If surveillance has a limited purpose aimed solely at detecting catastrophic risks, and the government’s ability to act on the information it collects is restricted to that purpose too, then we are moving closer to a system that might be workable.
What do protections look like?
One concern sometimes raised about artificial intelligence is that an AI algorithm itself could potentially pose a catastrophic risk that destroys the entire world. Indeed, Bostrom himself has provided an example of a paperclip maximizer that destroys the world as it myopically devotes as many resources as possible to paperclip production. A super surveillance state relying on AI almost certainly requires an actual person at the helm of the ship in order to avoid this kind of nightmare outcome, someone who has the authority to shut down the AI or otherwise alter its algorithm in cases of emergency.
There is a parallel here with monetary policy. Many conservative and libertarian critics of the Federal Reserve System tend to endorse strict rules for monetary policy. But any such rule is going to be subject to the frailties of human choice. In other words, no rule is perfect. We do not magically escape human shortcomings by selecting a rule. Even if the rule works well, at some point it is not going to work well, in which case we will want someone to be able to step in and fix matters if it is possible to do so. We would not want to allow another Great Depression just for the sake of maintaining a rule, if avoiding a depression is feasible. We need some human discretion in the system. Completely turning our fortunes over to some algorithm or formula is pure folly. This is true with artificial intelligence just as it is with monetary policy.1
For this reason, any government surveillance mechanism must have a human being, or group of human beings, overseeing it. Their responsibilities and powers should be transparently outlined as pertains to the algorithm’s design, the information gathering process, and the enforcement or punishment system based on the information gathered.
Because the state would be collecting so much information, its ability to act on that information and restrict our liberties in other areas using the information it collects would also have to be severely curtailed. One could imagine, for example, an AI discovering the recipe for Coca Cola or McDonald’s “secret sauce.” It could also discover evidence of smaller crimes, like theft or illegal drug use.
It is easy to see how the state’s role in regulating the economy and in enforcing ordinary criminal statutes creates a conflict of interest with its surveillance activities. State actors could easily destroy businesses, marriages, and many other productive and collaborative aspects of life, using the information collected. Either there needs to be strict separation of surveillance activities and these other functions of government, or some of these other responsibilities would have to be eliminated, reduced, and/or privatized, so that the state’s more important security responsibilities can be effective and trustworthy.
While some state powers would obviously have to be limited to protect economic concerns or other liberties, other state powers would have to be expanded. Namely, in cases where there is a real risk identified, some human being (or beings) would likely need to be granted fairly sweeping powers within a certain limited domain. This might include the power to, for example, round up a group of people without a warrant on very short notice and incarcerate them without a hearing. This is not to say this power should be completely unrestricted or indefinite. The Guantanamo Bay prison in Cuba is an example of a situation we’d want to avoid, where detainees can essentially be locked up without legal protection more or less indefinitely. But short-term restrictions of rights along these lines, when sufficient evidence is present to demonstrate a real catastrophic threat, could be warranted in some situations.
If you are skeptical, consider that currently our criminal justice system is said to operate on a principle whereby one hundred guilty people should be allowed to walk free rather than let one innocent person go to jail. This standard simply cannot stand when it comes to catastrophic risks. If anything, the principle is entirely the reverse of what should be tolerated. We cannot afford to let a single “black ball” be drawn from the urn of potential catastrophic risks or else it’s lights out.
Who should run the ship of state?
If an authority is granted these kinds of sweeping powers it is going to need to be trustworthy, both for its effectiveness (these are the state capacity issues mentioned earlier) as well as for its long-run viability. Ironically, the sort of technological society we are moving towards may well be helpful when it comes to producing more ethical and trustworthy leaders. Today, even one significant mistake can end up ruining a person’s job prospects for the rest of their life, given the permanence of information found on the internet. Indeed, I often look around at the younger generation today and think about what saints they seem like compared to myself at the same age. I can’t help but think the internet may have something to do with the improvement in behavior.
Even if my observation is incorrect based on my small sample size, in the future a kind of technological Puritanism could take hold and result in a better behaved and more trustworthy citizenry, which could spill over and produce better leaders. This is speculative, of course, but is at least conceivable as a potential unintended benefit of reduced privacy. This brings us to Tabarrok and Thierer’s concern that even if the government had extensive surveillance capabilities, it might not be able to use them effectively to prevent catastrophic risks.
I think there is almost no doubt about the state’s ability to collect a substantial amount of information effectively. Even private parties may have the ability to monitor individual citizens in the way described in this article fairly soon. The government can simply adopt these technologies. Whether a government can act on that information effectively is another matter. Here, however, we may have no choice but to improve existing institutions.
Given the advance of technology, it might be only a matter of time before the surveillance state is upon us. Indeed, we already have a government that surveils its citizens in many contexts. If surveillance is an inevitability, we should be pre-emptively preparing for this future. Whether we like it or not, we need to be cultivating a smarter, more ethical generation of citizens capable of handling the immense responsibilities that are going to fall upon them. Otherwise, the next generation will be handed the responsibility anyway without any preparation.
With better leaders, empowered with expansive authorities to act in circumscribed domains, under specified conditions, and for limited amounts of time, I think the state capacity issues mentioned by Tabarrok and Thierer could at least be partially addressed. This will involve changes in current governance and culture, no doubt, but either we adapt or we suffer the consequences, which aren’t pretty.
Conclusion
My own intuition tells me we probably live in a vulnerable world where catastrophic risks are higher than is widely believed. As such, I think Bostrom’s ideas merit serious consideration. Ultimately, the questions we should be asking are things like: Which institutions are likely to be able to credibly collect information pre-emptively about catastrophic risks and then act on that information? And which rights or privileges are we willing to curtail towards that end?
In my mind, privacy is the most obvious answer to the second question. I don’t consider it a fundamental right, and the costs of rolling back privacy may be small relative to alternatives. On the first question, it’s conceivable, as Bostrom suggests, that some entity is going to need to be monitoring the citizenry. This is most likely going to be government, but perhaps free market solutions will eventually present themselves.
If an expansive government surveillance state ever becomes a reality, certain government activities may need to be rolled back to prevent abuse, most obviously in its role regulating business and enforcing criminal statutes for lesser crimes. Additionally, to the extent possible, only the highest caliber of people should be entrusted with the state’s expanded powers. Interestingly, the surveillance state envisioned by Bostrom might be more likely to produce such citizens as we grow more accustomed to the idea of our everyday lives being constantly monitored.
I’m sure other potential solutions to these problems are possible beyond what I have mused over here. My intent has been primarily to focus on the issue of privacy and why some curtailment of it should perhaps not frighten us, and indeed may even be inevitable to a large extent. If that’s the case, then we need to be thinking about training the next generation of leaders so they have the institutional tools and more importantly the moral fiber to handle the profound responsibilities that will fall on them. The alternative is simply not an option we can consider.
1. To be clear, I favor rules in many contexts, including in the context of monetary policy. For example, I believe the Federal Reserve would benefit from following a nominal GDP targeting policy relative to being uncommitted to any particular policy philosophy. I might even favor requiring the Fed by law to follow an NGDP targeting policy, so long as leaders maintain some flexibility. What I object to is the complete removal of human discretion from monetary policy.