The Washington Post* published a disturbing story yesterday about a piece of software called Beware, in use in Fresno, CA and elsewhere, that calculates a “threat score” for individuals of interest to the police based on their criminal records, property records, commercial data, and social media posts. The police use the threat scores—which are green, yellow, or red, harkening back to DHS’s terror alert levels—to prepare themselves for potential danger. No matter what data is in your file, you are innocent until proven guilty in a court of law; in the mind of a police officer reading your threat score, you may not be. This creates a chilling effect on speech online and invites comparisons to systems like China’s “social credit” monitoring system.
This type of algorithmic surveillance, built on a model similar to the profiles online advertisers create to track clicks, draws on sources that we may have known were individually public (e.g., tweets, property records, traffic cameras) but combines them to unsettling effect. The assumption that 1 + 1 = 2—that individual data points are just as shareable in combination as they are alone—is a fallacy, and the erosion of privacy it permits may have progressed too far to stop. The law deals capriciously with questions of extent rather than existence, which is why, for example, although there seems to be an obvious difference between a person who owns one gun and a person who owns forty, there is no legal distinction. If we want to draw a line on how much data can be amassed and cross-referenced about a person before the Fourth Amendment—or laws against stalking—comes into play, we have to define that threshold, and that has proven difficult to do.
A common refrain at computer security conferences is the problem of usability. Computer security needs to be robust, and relies on mathematics that is almost as difficult to calculate as it is to explain, but it also needs to be usable by everyone, especially those who belong to marginalized groups—the poor, racial and ethnic minorities, political dissidents—who are less likely to have been given the instruction, time, and resources to figure out how to use it. This fundamental conflict produces two types of invisibility: the invisibility of injustices that occur without the knowledge or comprehension of the wronged party, and the invisibility of the screen of privacy that the knowledgeable user believes he or she is constructing. Tools like Tor, Ghostery, Privacy Badger, and Do Not Track may be the impenetrable wall between our activities and prying eyes that we hope they are, or they may be mosquito netting: the equivalent of the fourth wall that the audience envisions preventing the actors from looking out and noticing them fidgeting in their seats. Revelations like those of Edward Snowden break the fourth wall: what we thought was secure was really all too transparent. Enabling law enforcement to bring so many tools of surveillance to bear against any (and every) individual calls into question the sturdiness of our privacy tools—as does the fact, mentioned in the Post article, that even the police don’t know everything that goes into Beware’s algorithm, or how the scores are calculated—and casts a pall of suspicion over tools we may have regarded as benign. One may have thought red light cameras were there to catch hit-and-run drivers, for example, not to help profile the reputation of a 911 caller.
As the capabilities of algorithms and big data analysis evolved, a paradigm shift was missed: we don’t want computers to know more about us, or to remember us when we return. We don’t want an internet “where everybody knows your name.” We used to be told that cookies “remember our preferences and provide a better user experience”; now we log on and see beacons that track us and invade our privacy. Might this shift of opinion lead to a change in policy? Though some of our habits, like the sharing of vacation photos, have become less private over time, some of our most treasured institutions—including voting and religious practice—have become more private, not only from one another but from public officials as well. Let’s take steps toward more privacy for our data, rather than assuming that good intentions or piecemeal technological solutions will provide a solid barrier between our private lives and the state.
*Interestingly, neither the story nor the use of Beware software or threat scores is mentioned this week in the Fresno Bee, the paper of record for Fresno, CA.