Expert Voice: Jeffrey Rosen on Privacy

In his book, The Unwanted Gaze (Random House, 2000), legal scholar Jeffrey Rosen, an associate professor at The George Washington University Law School and the legal affairs editor of The New Republic, traces the erosion of privacy in the U.S. In the wake of Sept. 11, Rosen worries that stepped-up surveillance technologies and strategies might make us feel good, but that there is little evidence they will do much to quell terrorism. Since Sept. 11, Rosen has written numerous articles urging technologists to consider the social and workplace impact of the security strategies they’re espousing, from biometric face scans to wireless wiretapping and surveillance cameras in public squares. In a recent interview with CIO Insight Executive Editor Marcia Stepanek, Rosen cautions that the nation’s new antiterrorism law will mean more monitoring of workers by companies and could lead to security strategies that run counter to the need for higher productivity and on-the-job creativity.

CIO INSIGHT: How much will our legal and technological response to terrorism transform our notions of privacy in the U.S.?

ROSEN: There is nothing new about the fear that new technologies of surveillance and communication are altering the nature of privacy. A hundred years ago, Louis D. Brandeis and Samuel Warren worried that new media technologies—in particular the invention of instant photographs and the tabloid press—were invading “the sacred precincts of private and domestic life.” What shocked Brandeis and Warren was an item in the Boston Saturday Evening Gazette that described a lavish breakfast party Warren held for his daughter’s wedding. Although the news wasn’t inherently salacious, Brandeis and Warren were appalled that a domestic event would be described in a gossip column and discussed by strangers.

At the beginning of the 21st century, however, thanks to the Internet, and the Sept. 11 attack on the World Trade Center—which gave rise to the new antiterrorism law that gives authorities far more latitude to inspect logs of Internet use— we have vastly expanded the aspects of private life that can be monitored and recorded. As a result, there is increased danger that personal information originally disclosed to friends and colleagues may be exposed to, and misinterpreted by, a less-understanding audience. Gossip that in Brandeis and Warren’s day might have taken place in a drawing room is now recorded in a chat room or picked up by authorities operating under new laws that make possible the wiretapping of digitally recorded conversations online and off, all of which can be retrieved years later, from anywhere.

In times like these, it is important to think about the balance between our desire for security and our desire for privacy.

Will the new security technologies actually have the desired public-policy effect?

Even the most sophisticated surveillance technologies can’t begin to absorb, analyze and understand the sheer volume of information being collected. Consider the new antiterrorist law, which allows wiretaps of the Net, among other things. This will vastly expand the FBI’s ability to surveil foreign agents. But the bureau can’t possibly hire enough agents to listen to the recordings from beginning to end, so the FBI will use data-mining technology to search for suspicious key words. But this simply increases the risk that information will be taken out of context. As “60 Minutes” reported, the Canadian Security Agency identified a mother as a potential terrorist after she told a friend on the phone that her son had “bombed” in his school play. Filtered or unfiltered, information taken out of context is no substitute for the genuine knowledge about a person that can emerge only slowly over time.
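To make that point concrete, here is a minimal, hypothetical sketch of a context-blind keyword filter of the kind Rosen describes. The watch list, the flag_transcript helper and the sample sentences are all invented for illustration; this is not a depiction of any real agency’s system, only of why matching “suspicious” words without context over-flags benign conversations.

```python
# Minimal illustrative sketch (hypothetical, not any real agency's system):
# a context-blind keyword filter that flags any transcript containing a
# "suspicious" word, regardless of what the speaker actually meant.

SUSPICIOUS_KEYWORDS = {"bomb", "bombed", "attack", "hijack"}


def flag_transcript(transcript: str) -> list[str]:
    """Return the watch-list words a naive filter would match in a transcript."""
    words = {word.strip('.,!?"').lower() for word in transcript.split()}
    return sorted(words & SUSPICIOUS_KEYWORDS)


# A benign remark about a school play trips the same filter a real threat would.
sample_calls = [
    "My son absolutely bombed in his school play last night.",
    "Let's attack the third-quarter numbers first thing Monday.",
]
for call in sample_calls:
    hits = flag_transcript(call)
    print(f"flagged={bool(hits)} keywords={hits} :: {call}")
```

Both sample calls get flagged, while a genuine plot phrased to avoid the listed words would pass untouched; that gap is what Rosen means by the difference between information and knowledge.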

The provision in the law most relevant to businesses would define computer trespassing as a terrorist offense. What burden does this place on corporate technology executives and how might this influence the culture of the workplace?

The first thing is to understand what this provision means. Section 217 of the law defines a computer trespasser as any person who accesses a protected computer without authorization. The definition excludes any person known by the owner or operator of the protected computer to have an existing contractual relationship with the owner or operator for access to all or part of the computer. The point of this is to exclude people who inadvertently violate their Internet terms-of-service contracts. Certain Internet service providers don’t allow their customers to send spam or bulk e-mail. If you sent spam, you’d violate your terms-of-service contract. But you wouldn’t be a computer trespasser. The provision was designed primarily to apply to hackers—people who try to access protected computers they aren’t authorized to use.

The bill that was actually signed into law is far less draconian than the one originally proposed, but it still raises some concerns for companies and workers. Initial definitions of computer trespassing, for example, would have made it illegal even to violate your terms-of-service contract with your ISP, defining that as an act of terrorism. However, the law as passed now makes it clear that violations of terms-of-service contracts are not acts of terrorism. Under the new law, computer trespassing is defined broadly as the wrongful use of a computer, in ways that might make it illegal, for example, to download a copyrighted MP3 file without permission. According to the bill, if you are a computer trespasser—and the law leaves it a bit vague as to exactly what that means—then your ISP can authorize the government to monitor your online activities, often without notice, including e-mail and Web browsing, at work and at home.

This obviously creates an incentive for employers to ensure that their employees are not using their computers in ways that might run afoul of this new provision. There will have to be some discussion about what, exactly, is prohibited, but this will surely increase the incentive for companies to monitor their employees more closely. Thus, the bill creates another legal incentive for workplace monitoring, joining those that have already proved powerful, such as fear of liability for violations of sexual harassment laws, the desire to protect trade secrets and the desire to ensure that workers are not misusing company time.

The concern is that this monitoring of the workplace, although justified in the name of productivity and efficiency, may have precisely the opposite effect. Recent studies by the American Management Association, for example, show that workers may be less efficient, less creative and less productive when they’re aware, always, that someone’s looking over their shoulders. So the concern would be that the penalties for computer trespassing may be so severe that they will encourage companies to engage in far more monitoring than they would otherwise be inclined to do, with a negative impact on the culture and the environment of the workplace.

How would this provision affect mobile workers?

If a worker is a computer trespasser as defined by the law, whether or not he or she is at work, then an ISP might authorize the government to monitor this person pretty extensively, so even people working from home on private computers would have an incentive not to violate this provision. This doesn’t mean they would be monitored by their employers, but they might be monitored by the government with the authorization of the ISP and the cooperation of their employers, which would create an incentive for workplace managers to be especially careful—and might inhibit some people in their work and browsing at home.

What would be the most prudent course for companies to follow as a result of this law?

Because it’s broadly drawn, companies right now are unsure about exactly what is and is not permissible under the new law. The first thing to do is get a series of legal opinions about what’s covered and what’s not. Beyond that, there’s always the prudent step of warning employees—again—that they may be monitored. Right now, most company lawyers are encouraging companies to do that anyway, just because it’s a good protection against liability.

How could this new law change the culture of the workplace and the relationships between workers and managers?

I think it remains to be seen how this new law will actually be applied. Supporters of the law argued that it was ambiguous under the previous wiretap statute whether computer owners could get the assistance of law enforcement officials in monitoring their own machines in order to protect their own rights and property. The complaint was that this lack of clarity prevented law enforcement from helping victims take natural steps to defend themselves. So the defenders of the law suggest this is just a way of ending the odd result that the hackers’ privacy rights could trump the privacy rights of the victims. But it strikes me as worth asking whether this rather broad definition of terrorism—which seems quite far from the kind of violence or threats of violence that we associate with terrorism—really should be in the bill at all. The broad objection, then, was that the statutory powers the bill granted would be justified if they were limited to violent crimes, but perhaps not when they’re extended to computer crimes. If the law is enforced sensitively, then perhaps the concerns will prove overstated.

This is only one of a series of recent laws that create greater pressure on employers to monitor more of what transpires on the Web. But of course, there are costs to increased monitoring. It may be, for example, that some workplaces will become more willing to tolerate more informal communications, more privacy, more space for their workers—and hope to attract certain workers on that basis in light of this new climate of fear. But there will certainly be others who take their interpretation of this law to the opposite extreme. That’s going to be one of a series of policy choices that will evolve over the next couple of years. This doesn’t eliminate any remedies that workers had earlier; it just increases the incentive for companies to monitor.

In this new climate, how much is privacy at risk?

Things that would’ve been unimaginable before Sept. 11 are now being contemplated. Surveillance cameras in public places, which were fought vigorously before Sept. 11, are now getting installed, as we speak, in U.S. airports. And the question of whether we are going to follow Britain, with hundreds of thousands of cameras in public places, is a big public policy debate that we need to have.

Two terrorist bombs planted by the IRA in 1993 and 1994 triggered the creation by British officials of the so-called “Ring of Steel”—a network of closed-circuit TV cameras in London’s financial district. What lessons are there in Britain’s example for us here in the U.S.?

That anxiety about terrorism didn’t go away, and the cameras in Britain continued to multiply. Originally, they were hailed as the people’s technology, a friendly eye in the sky. Local governments couldn’t get enough of them: In 1994, 79 city centers had surveillance networks; by 1998, 440 city centers were wired. By the late 1990s, as part of its campaign to be tough on crime, Prime Minister Tony Blair’s new Labour government decided to support the cameras strongly. According to one estimate, there are now 2.5 million surveillance cameras in Britain, and the average Briton is now thought to be photographed by 300 separate cameras in a single day.

The promise of cameras as a magic bullet inspired one of former Prime Minister John Major’s most successful campaign slogans: “If you’ve got nothing to hide, you’ve got nothing to fear.”

What of that argument?

Privacy is not primarily about secrecy. It’s about opacity. It’s about the difference between information about a person and knowledge of that person. Privacy is the ability to protect parts of ourselves in different contexts.

What is the lesson from Britain for Americans contemplating heightened surveillance of all kinds?

Though initially justified as a way of combating terrorists, the cameras in Britain soon came to serve a very different function, designed not to produce arrests but to make people feel that they are being watched at all times. Instead of keeping terrorists off planes, biometric surveillance is being used to keep punks out of shopping malls. The people behind the live video screens are zooming in on unconventional behavior in public that in fact has nothing to do with terrorism. And rather than thwarting serious crime, the cameras are being used to enforce social conformity in ways that Americans may prefer to avoid.

Surely none of that would be allowed to happen in the U.S.?

I hope not. There is, in the end, a powerfully American reason to resist the establishment of a national surveillance network: The cameras are not consistent with the values of an open society. They are technologies of classification and exclusion. They represent different ways of putting people in their place, of deciding who gets in and who stays out, of limiting people’s movement and restricting their opportunities.

Should liberties be sacrificed in times of national emergency if doing so gives us greater security?

Britain’s experience in the fight against terrorism suggests that people may give up liberties without experiencing a corresponding increase in security—which is the whole point. And if we go along with these schemes, these vast feel-good architectures of surveillance that have far-reaching social costs and few discernible social benefits, we may find, in calmer times, that they are impossible to dismantle.

There is no reason to surrender to technological determinism, to accept the smug conclusion of, say, a Scott McNealy, who said, “You already have zero privacy; get over it.” There is zero reason to conclude that in the war between privacy and technology, privacy is necessarily doomed. On the contrary, a range of technological, legal and political responses might help us rebuild, in cyberspace and in our new fear society, some of the privacy and anonymity that we have lost.
