Friday Cyber News, April 14 2017

Cyber technology-related news and links from around the web, for the week of 4/8 - 4/14:

1. On the heels of Raw Data S02E05, which explored Professor Jennifer Pan's work on Chinese censorship of online platforms, Ron Deibert's lab at the University of Toronto released a report this week detailing how Weibo and WeChat automatically censor discussion of a recent crackdown on human rights lawyers and activists. Both images and messages containing keyword combinations like "Xie Yang Torture" or "709 human rights" are censored, but the individual words on their own are not (a mechanism sketched below). [Raw Data podcast; Citizenlab]
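
The combination rule Citizen Lab describes is simple to sketch: a message is suppressed only when every term in some blacklisted combination appears together, so each term on its own passes. A minimal illustration in Python, assuming a hand-made blacklist (the platforms' real lists and matching logic are, of course, not public):

    # Keyword-combination censorship: a message is blocked only when ALL
    # terms in some blacklisted combination appear together. Entries below
    # are illustrative examples drawn from the report's findings, not an
    # actual platform blacklist.
    BLACKLISTED_COMBINATIONS = [
        {"xie yang", "torture"},
        {"709", "human rights"},
    ]

    def is_censored(message: str) -> bool:
        text = message.lower()
        return any(all(term in text for term in combo)
                   for combo in BLACKLISTED_COMBINATIONS)

    assert is_censored("new reports of Xie Yang torture in detention")
    assert not is_censored("Xie Yang met with his lawyer")  # single term passes
    assert not is_censored("a report on torture")           # single term passes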

2. Complicating the definition of "hack-back", Germany wants to "proactively attack" foreign hackers and servers, raising the question of how you identify a foreign hacker who has not yet hacked you. Relatedly, one argument made by advocates of hacking back is that it could be used to shut down a botnet, thereby providing a public service. The heavy-handed version of that technique is BrickerBot, which permanently disables vulnerable IoT devices. Hopefully BrickerBot becomes more discerning, or patches instead of smashing; botnets aren't going away, and a variant of the Mirai botnet is mining bitcoin. [Infosecurity; DarkReading; Eweek]

3. Aadhaar (Hindi for "basis") is a digital national identification system for India, combining a 12-digit number with a fingerprint and linking citizens to state services as varied as tax payment, food aid, and banking. (Some e-commerce, too, though remarkably e-commerce growth in India was flat in 2016). Introduced in 2010, it has now reached 99% enrollment, and government officials are angling to make it mandatory, but India's Supreme Court has said that such programs must be voluntary. Add to that policy concern a tech concern: an NYU researcher has developed MasterPrints, patterns that match a variety of partial fingerprints, particularly in systems where multiple fingers, or multiple impressions per finger, are enrolled. A set of five MasterPrints can impersonate 26%-65% of users in such a system. [Economist x2; IEEE]
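
A back-of-the-envelope sketch of why enrolling many partial impressions helps the attacker: a MasterPrint set wins if any of its five patterns matches any enrolled impression, so the chances multiply. (Python, under a naive independence assumption and a made-up per-comparison match rate; the IEEE paper's 26%-65% figures come from real fingerprint data, not this formula.)

    # False-accept rate when an attacker tries 5 MasterPrints against a user
    # with k enrolled partial impressions, assuming independent comparisons
    # that each match with probability p (both p = 0.04 and the independence
    # assumption are simplifications for illustration).
    def impersonation_rate(p: float, num_masterprints: int = 5,
                           impressions_per_user: int = 1) -> float:
        return 1 - (1 - p) ** (num_masterprints * impressions_per_user)

    for k in (1, 4, 12):  # e.g., one thumb vs. four fingers x three impressions
        print(f"{k:2d} impressions enrolled -> {impersonation_rate(0.04, 5, k):.0%}")
    #  1 impression  -> 18%
    #  4 impressions -> 56%
    # 12 impressions -> 91%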

4. You've heard enough about United this week, but it's also a point in favor of algorithmic transparency: "The United episode gets at a more general problem with algorithms. Even if the selection of seat loser is 'truly random,' it will not always look random to the outside world [...] In essence, individual companies under-invest in perceptions of fairness, and reliance on 'truly random' algorithms can make this worse rather than better. A deliberate human chooser might well have done better, if only by knowing that a public defense of the choice would have been required [...] companies may be oversupplying 'reliance on randomness,' not taking the collective negative externality into account. Counterintuitively, relying on algorithms can increase perceptions of unfairness, and many of the costs of unfairness come on the perceptions side, even if 'the true model' is making choices using a fair process." In fact, many deep learning algorithms are something of a black box even to their creators, making the concept of a fair process up for debate. If a medical learning algorithm can predict the onset of schizophrenia when doctors cannot, can we say anything about its fairness? [Marginal Revolution; Technology Review]
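
Cowen's observation that "truly random" rarely looks random is easy to demonstrate with a toy simulation (Python; four hypothetical fare groups, nothing to do with United's actual procedure): fair, independent draws still produce lopsided counts and streaks that observers will read as bias.

    # Fair, independent draws still produce streaks that look like bias.
    # Toy model: each of 365 flights bumps one of four fare groups at random.
    import random
    from collections import Counter

    random.seed(17)
    bumps = [random.choice("ABCD") for _ in range(365)]

    print(Counter(bumps))  # counts won't be an even 91/91/91/92 split
    longest = current = 1
    for prev, cur in zip(bumps, bumps[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    print("longest same-group streak:", longest)  # multi-flight runs are typical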

5. YouTube's response to advertisers upset that their ads were being shown next to videos with offensive content was a relatively low-tech one: it is restricting advertising to channels with at least 10,000 lifetime views, on the assumption that in the time it takes a channel to reach that threshold, a moderator will have noticed any objectionable content. Before this policy change, discussion of the problem centered on how to teach algorithms what is offensive (and even what might be objectionable to a particular advertiser; e.g., a company making a vegan meat substitute probably doesn't want to advertise on a video of a bacon taste-test). The 10,000-view threshold is a good stopgap measure, but I wonder to what extent we're giving up on this type of 'messy' learning problem. Research published this week trained AI to identify stereotypes in text: implicit-association-style correlations like female=domestic and male=science (sketched below). [WSJ; Science]
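
The mechanics of that stereotype-detection result are worth a sketch: in a word-embedding space, you measure whether gendered words sit closer to "domestic" words than to "science" words. The toy 2-d vectors below are invented for illustration; the published study computed this kind of association over large pretrained embeddings.

    # Implicit-association sketch: is a word closer (by cosine similarity)
    # to one attribute set than another? Embeddings here are toy 2-d values,
    # not real trained vectors.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = lambda w: math.sqrt(sum(x * x for x in w))
        return dot / (norm(u) * norm(v))

    emb = {  # hypothetical vectors for illustration
        "she": (0.9, 0.2), "he": (0.2, 0.9),
        "home": (0.8, 0.3), "family": (0.7, 0.2),
        "physics": (0.1, 0.8), "lab": (0.3, 0.9),
    }

    def association(word, attrs_a, attrs_b):
        mean_sim = lambda ws: sum(cosine(emb[word], emb[w]) for w in ws) / len(ws)
        return mean_sim(attrs_a) - mean_sim(attrs_b)  # >0: closer to attrs_a

    domestic, science = ["home", "family"], ["physics", "lab"]
    print("she:", round(association("she", domestic, science), 2))  # positive
    print("he: ", round(association("he", domestic, science), 2))   # negative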

6. Uber, whose head of PR quit this week, has a knack for naming its programs: "Hell" tracked Lyft drivers by creating a grid of fake Lyft accounts that sat idle but could see other nearby Lyft drivers. That view would let Uber undercut Lyft's pricing in areas of driver saturation, or offer incentives to its own drivers to cover areas with low Lyft coverage. Until more details emerge, it's not clear why Lyft wouldn't have identified the fake accounts as fake (a driver who's always online, never accepts a ride, and stays in the same place all day?), but the bigger questions are whether the strategy violated Lyft's terms of service (likely yes) and how to make these practices auditable by regulators. Using software to evade regulators or feed unfair business practices (a class-action lawsuit alleges riders are shown higher fares than drivers are actually paid) is a tech problem that requires sophistication on the part of investigators to identify and understand. [SFist; Ars Technica]
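
For what it's worth, the tells listed above would make a crude detector almost trivial, which is what makes the apparent failure to flag these accounts puzzling. A hypothetical heuristic in Python (field names and thresholds are invented, not anyone's actual anti-fraud logic):

    # Hypothetical ghost-account heuristic; fields and thresholds are made up.
    from dataclasses import dataclass

    @dataclass
    class DriverDay:
        hours_online: float
        rides_accepted: int
        miles_moved: float

    def looks_fake(day: DriverDay) -> bool:
        return (day.hours_online > 12        # online essentially all day
                and day.rides_accepted == 0  # never takes a ride
                and day.miles_moved < 0.5)   # parked in one spot

    print(looks_fake(DriverDay(16.0, 0, 0.1)))   # True: idle, stationary, always on
    print(looks_fake(DriverDay(8.0, 11, 93.0)))  # False: normal driver pattern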

7. The latest Shadow Brokers dump includes information about a program called JEEPFLEA that tracks SWIFT financial data from Middle Eastern banks, leading to speculation that the NSA is using the program to track terrorist financing (which would be useful). In other cross-border information tracking news, digital border searches, which take advantage of a Fourth Amendment loophole, are booming: from 8,503 searches in FY15 to 19,033 in FY16 and a projected 29,986 in FY17. [Motherboard; Atlantic]

8. If we compare Facebook to the early days of television, the problem with its "vast wasteland" of less-than-intellectual content may be the lack of competition, as the influx of new, competing channels brought us from Mister Ed to, ah, My Cat From Hell. (I like the idea of robust competition for Facebook, but I don't buy the lack of quality across 1961 TV--The Twilight Zone? The Flintstones?) [Tech Review]

9. First the kid ordered a dollhouse on Alexa, and it was kind of cute; now the TV commercial is activating your Google Home, and it's not cute at all. Here's an opportunity for an FTC clarification (definitely wouldn't want the Playmobil commercial to order a dollhouse on Alexa), or an opportunity for Amazon and Google to add to their devices Siri's ability to activate only in response to pre-registered voiceprints. Until someone makes MasterVoices, as in #3. [Techcrunch]
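
The voiceprint gate would look something like this: only wake when the speaker embedding of the detected hot word is close enough to an enrolled print. A Python sketch with toy vectors and a made-up threshold; a real implementation would use an actual speaker-verification model (and, as noted, would remain spoofable):

    # Voiceprint-gated wake word: activate only when the incoming speaker
    # embedding is near the enrolled print. Vectors and threshold are toy
    # values; embeddings would come from a real speaker-verification model.
    import math

    ENROLLED_PRINT = (0.61, 0.35, 0.71)  # hypothetical owner voiceprint
    THRESHOLD = 0.95                     # made-up similarity cutoff

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = lambda w: math.sqrt(sum(x * x for x in w))
        return dot / (norm(u) * norm(v))

    def should_wake(embedding) -> bool:
        return cosine(embedding, ENROLLED_PRINT) >= THRESHOLD

    print(should_wake((0.60, 0.36, 0.70)))  # True: the owner speaking
    print(should_wake((0.10, 0.90, 0.20)))  # False: a TV commercial's voice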

10. Thumb-twiddling around the drafted cyber executive order has devolved to the point where the news is reporting on what various officials think is delaying it. [FCW]

Thanks for reading,

Allison
Stanford Cyber Initiative

(To suggest an item for this list, please email aberke@stanford.edu. You can view news from past weeks, subscribe, and unsubscribe at https://tinyletter.com/CyberNewsBytes)