Events of interest to the Cyber Initiative community

Distributed weekly; to be added to the distribution list, please email or visit this page.

Cyber Initiative and Related Events:



Cyber researcher Herb Lin has published a new edited volume on offensive cyber operations, called Bytes, Bombs, and Spies, based on a Cyber Initiative-supported workshop on this topic held in 2016. Read more about the book at

Sign up for our weekly newsletter on cybersecurity headlines at


Stanford Bug Bounty Program Launch - Saturday, January 19th, 10am-4pm, in Lathrop 282. Lunch provided.

The Information Security Office (ISO) is excited to announce the University's first-ever bug bounty program -- a rarity in higher ed. Beginning on January 19th, students and full-time, benefits-eligible employees can responsibly hunt for cybersecurity vulnerabilities (subject to the terms of the program) and earn rewards of up to $1,000. To mark the official launch of the program, ISO is hosting a hackathon-style event on Saturday, January 19th (10am-4pm in Lathrop 282) where participants can submit discovered vulnerabilities and meet members of the Information Security Office. Lunch will be provided. Participants should familiarize themselves with the details of the program in advance by visiting

Stanford Blockchain Conference - Jan. 30th - Feb. 1st, 2019. Stanford, CA: Arrillaga Alumni Center
The Stanford Blockchain Conference will take place Jan. 30 - Feb. 1. The conference will explore the use of formal methods, empirical analysis, and risk modeling to better understand security and systemic risk in blockchain protocols. We aim to foster multidisciplinary collaboration among practitioners and researchers in blockchain protocols, distributed systems, cryptography, computer security, and risk management. To register, please visit the registration page.

Understanding the Limitations of AI: When Algorithms Fail - Jan. 18th, 1:15pm, Packard 202
Automated decision-making tools are currently used in high-stakes scenarios. From natural language processing tools that automatically determine one's suitability for a job, to health diagnostic systems trained to predict a patient's outcome, machine learning models are used to make decisions that can have serious consequences for people's lives. Despite the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task, to provide documentation describing the characteristics of their models, or to disclose the results of algorithmic audits ensuring that certain groups are not treated unfairly. I will show examples of the dire consequences of basing decisions entirely on machine-learning-based systems, and discuss recent work on auditing and exposing the gender and skin-tone bias found in commercial gender classification systems. I will end with the concept of an AI datasheet to standardize information for datasets and pre-trained models, in order to push the field as a whole toward transparency and accountability. Timnit Gebru is a research scientist on the Ethical AI team at Google and recently finished her postdoc in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research, New York.
To suggest a related event to be listed here, please email