The Candy Apple
Google Tells Big Brother to Take a Hike
Last month we heard that Google had declined to provide some information to the United States Justice Department. The company had refused several months earlier, but the news came to light only when the Justice Department asked for a court order to force Google to comply.
Those are some of the relevant facts, as uncluttered by opinion as I can make them.
Now I will pose some questions that came to me as I pondered this issue, and last, I will propose some answers. I have consulted several acquaintances on this, but all the conclusions are my own unless I say otherwise.
Why is the Justice Department bothering Google and the others when it could be chasing the people who put pornographic content on the Web? Why waste energy looking at stuff that says what we already know? Why take Google to court when all they do is report what’s out there? Why not chase down the people who are breaking the Child Online Protection Act, and leave the search engines alone?
OK, that was more than one question. But it is all sort of the same question.
Why does the Justice Department believe it can ask a private company for information? Do they ask Coca-Cola for their “secret ingredient,” “Merchandise 7X”? What if Coke really still has cocaine in it? Ooh, we need to get right on that. My question asked why the Justice Department believes it can ask a private company for information, but it sounds to me like they are not only asking, but compelling a private company to do what they say. I know Google is publicly traded; I don’t mean “private” in that way. It all sounds rather heavy-handed. It might not sound that way if we could see any benefit to it, but the first question applies: what does the Justice Department think it will do with a week’s worth of searches on cat dandruff?
Why pester Google and the others? I could not figure this one out, but a lawyer friend provides this guess:
As I understand it, the Justice Department is attempting to lay the groundwork for a renewed attempt to sustain anti-cyberporn laws. These laws have been successfully attacked in the past on First Amendment grounds by arguing that the government, which bears the burden in a First Amendment context, cannot show it has selected the least restrictive means of accomplishing an important governmental objective, because filtering is less restrictive and would keep porn away from minors. I imagine the Justice Department is attempting to show that filtering would not work.
—Robert Shore, Liner Yankelevitz Sunshine & Regenstreif
If he is correct, I think the Justice Department is taking a shortcut. It is not the job of the search engines to prosecute or regulate online porn. I appreciate that some of them try to screen it, but they are not the bad guys (if indeed pornographers are bad guys, a question I am not addressing). The pornographers are the ones breaking COPA if their content is available to children, and that is all that is in question here.
Why pester companies that are not affiliated with the government? I can’t get a good answer to this one. If Google had a contract with the US Army, and the Army needed some information from Google, I might see how that could be construed as a relationship where you would expect some cooperation. But there is no such relationship between Google and the United States Justice Department, or none that I know of. It sounds like another shortcut to me.
Why did Google decline? My initial reaction was, “Good for Google!” I was pleased they didn’t turn over a bunch of probably worthless data to a government agency too lazy to do its own homework. Then I wondered, if it is just aggregated data, and none of the information is tied to me or any one user, what would be the harm?
Here’s one harm: Google’s search results come from a specific algorithm, one that reportedly scores a hit higher the longer you stay on that site. It lets Google see which sites give you the information you asked for, and which ones don’t. That’s why the first few hits are often the most useful: other users have validated that they work. If Google provides a week’s worth of searches and results, someone might figure out how they do that. It’s called proprietary information, and they have a right to keep it secret, like Coke’s possibly mythical secret ingredient, Merchandise 7X. Thanks to UNC-Charlotte’s Drew Arrowood for pointing me in this direction.
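To be clear, the dwell-time scoring described above is speculation about Google’s methods, not documented fact. Purely as an illustration of the idea, here is a toy ranker along those lines; every name and field in it is invented for this sketch:

```python
# Toy illustration of dwell-time-weighted ranking. This is NOT Google's
# algorithm; it only sketches the idea from the column: results that
# users stay on longer get scored higher.

def rank_results(results):
    """Sort results by average dwell time (seconds), longest first.

    `results` is a list of dicts with hypothetical fields:
    'url' and 'dwell_times' (seconds users spent on the page).
    """
    def score(result):
        times = result["dwell_times"]
        # No recorded visits means no evidence the page was useful.
        return sum(times) / len(times) if times else 0.0

    return sorted(results, key=score, reverse=True)

results = [
    {"url": "a.example", "dwell_times": [5, 8, 4]},       # quickly abandoned
    {"url": "b.example", "dwell_times": [120, 90, 200]},  # users stayed
]
print([r["url"] for r in rank_results(results)])  # b.example ranks first
```

Even a toy like this hints at the column’s point: a week of real queries paired with clicked results would let an observer start reverse-engineering whatever the actual scoring function is.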
I’m not buying the second harm argument. It invokes the slippery slope: well, if we give you this information now, it becomes easier, or “more right,” to give you more specific information next time. That makes users cringe, because we don’t want the government reading our searches about cat dandruff. The flaw in this one is the slippery slope itself. As James Rachels shows in at least one of his introductory moral philosophy textbooks, the slippery slope argument relies on us to predict the future, to say that things will be worse if we go this route. We can’t predict the future. Any time you hear an argument that cites the slippery slope, discount it. We can all make up such arguments, but since we don’t know where a path will lead, we shouldn’t make them up or give them weight.
- Google Tells Big Brother to Take a Hike · February 2006
Reader Comments (6)
I don't believe I said anything about China in the column, but I will suggest a way to think about it so that it is not inconsistent. You can play along or not, of course, as you like.
The issue in the United States is not one of censorship. The Justice Department wants to prosecute people who violate a pornography law. Google is not censoring anything, and neither is the government. Google has been asked to help with detective work, and refused. They are not cooperating with the government because it is not in their interest to do so (long-term, they want to protect users' privacy).
In China, the question is whether it is appropriate for a search engine to screen out results because the government says so. I see this as censorship, but since users will get a notice on the webpage saying the results have been pruned, it does not seem so bad to me. Someday it will change. Google is cooperating with the government in this matter because it is in their interest to do so (long-term, they will have established themselves as a useful tool).
Does Google risk either (1) misleading the government, which could potentially be a felony (a felony of the sort that destroyed Arthur Andersen), or (2) admitting openly that they don't have a foolproof (or even very effective) way of recognizing automated queries, which would encourage more mischief? Even if the actual content of the queries is protected by court order, and not made part of the public record, it is hard to see how issues related to the second risk won't become public by way of inference from the Google data.
What Google does for us is something that, until a few years ago, most of us (outside of the rarefied AI community) had never had a machine do for us: arrive at strategies to find the right data.
Can strategies exist in the absence of privacy? At the same time our government is attacking Google, it is reserving for itself a right of privacy for its own information-gathering apparatus, which it maintains can't work appropriately without that privacy. In a democracy, can the government really have such an asymmetry in informational power compared to the people? In verifying that a particular system is secure and capable of preserving the privacy of its users, the consensus seems to be that open source software is superior to a closed source solution; for search engines, though, the opposite might well be the case.
Thanks to Ellyn for bringing this controversy a little more down to Earth.
This is a straw-man argument. Don't confuse moral philosophy with deductive logic. If we can't predict the future, why do we watch the weather forecast? We may not be able to predict with perfect accuracy, but if a prediction is right more often than it is wrong, it is worth heeding.
Regardless of that, consider this: many institutions, including the US legal system, are based on the concept of precedent: it is assumed that what has successfully occurred in the past is a model for deciding how to handle the present. If the government is allowed to pressure Google into handing over its data simply because Google can't prove sufficient harm to the public, then the litmus test from now on will be that a company must prove harm to the public to avoid being forced to surrender its business databases. This may not sound right to a moral philosopher, but a sociologist will recognize the truth in the pattern.
Discounting any position that is framed like a slippery-slope argument leads you to favor any expedient solution to any problem. As a philosophy, that is disastrous to rights and freedoms.
The real second argument is that many people are protective of their privacy rights (or perceived privacy rights). They know, when they type their queries into Google, that they are sharing them with Google. They don't want the government to have the right, later, to alter that relationship and lay claim to the data, even if, purportedly, it will not be able to link the data to them personally.
It is important to Google to preserve the trust people have in them. Oddly, the prospect that Google might sell the data to a business interest is less concerning to many people than handing it over to the government: at least a business interest can be relied upon to have only business aspirations, but the government specifically has a moral agenda, and not everyone is convinced that the government's moral agenda matches their own in sufficient detail.