
Raw Data Podcast Episode Eleven: What's On Your Mind?

One of the first chatbots, ELIZA, programmed in 1966 and still available online, was designed to imitate a Rogerian psychotherapist. ELIZA primarily reflects the user’s statements back to them in the form of a question, but it can recognize certain keywords (a statement including the word “mother,” for example, leads ELIZA to ask about “family”) and continually asks for more of the user’s thoughts. If you tell ELIZA “I’m depressed”, she responds “Do you think it’s normal to be depressed?” In some ways, the ability of technology to address human ennui has progressed since 1966; as this episode shows, the Crisis Text Line virtually connects users with volunteers who want to help, and newer chatbots like Tess aim to form emotional bonds with users and provide an array of mental health resources. In other ways, tech developers have neglected mental health: a recent paper by Stanford medical researchers tested how AI assistants like Siri, Cortana, and Google Now respond to medical crises, and found that only Siri offered a helpful response to “I am depressed”, alongside other serious omissions. The media response to this paper suggests that we want our technology to help us when we’re down, and to respond with empathy as well as links to human-staffed resources.
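ELIZA’s trick is easy to see in code: scan the input for a keyword, then reflect it back through a canned template. Here is a rough sketch of that pattern in Python; the rules and phrasings are illustrative stand-ins, not Weizenbaum’s original script.

```python
import random
import re

# Illustrative ELIZA-style rules: a keyword pattern maps to canned templates.
# These keywords and responses are examples only, not the original 1966 script.
RULES = [
    (r"\bmother\b|\bfather\b", ["Tell me more about your family."]),
    (r"\bi'?m (depressed|sad)\b", ["Do you think it's normal to be {0}?",
                                   "I'm sorry to hear you are {0}."]),
    (r"\bi feel (.+)", ["Why do you feel {0}?"]),
]

def respond(statement: str) -> str:
    text = statement.lower()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            groups = match.groups() or ("",)
            return random.choice(templates).format(*groups)
    # No keyword matched: fall back to asking for more.
    return "Please tell me more about what's on your mind."

print(respond("I'm depressed"))  # e.g. "Do you think it's normal to be depressed?"
```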

Is your computer responsible for you?

However, as soon as we hold technology responsible for responding to mental health crises, we also open it up to blame. If Siri is trained to recognize suicidal pronouncements, is she partially at fault if she overhears worrying statements but fails to call 911 or prevent a suicide? To what extent are monitored systems like work email or school-provided computers responsible for monitoring mental health, and for intervening? Some school districts install software like GoGuardian on students’ computers, which alerts school authorities when students search for terms related to self-harm or suicide. While the districts claim these tools have helped hundreds of students, others worry about students’ privacy and the implied restriction on what students are “allowed” to search for and learn about.
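Under the hood, tools in this category largely come down to matching search queries against a watch list, which is also why false positives (a student researching a novel, say) feed the privacy worries. Here is a rough, hypothetical sketch of that kind of flagging; the term list and the alert hook are invented for illustration, not GoGuardian’s actual logic.

```python
# Hypothetical keyword-based flagging, in the spirit of school monitoring tools.
WATCH_TERMS = {"self-harm", "suicide", "how to hurt myself"}  # illustrative list

def notify(student_id: str, query: str) -> None:
    # Stand-in for whatever channel a real tool uses to alert school staff.
    print(f"ALERT: student {student_id} searched for {query!r}")

def check_query(student_id: str, query: str) -> bool:
    q = query.lower()
    if any(term in q for term in WATCH_TERMS):
        notify(student_id, query)
        return True
    return False

# A query about a book report can trip the same filter as a genuine crisis.
check_query("s-1024", "suicide in Romeo and Juliet essay")
```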

Suicide reporting has serious consequences that technology is not equipped to handle

Another reason we may not want our technology monitoring our mental health is linked to the problem of “swatting”. In this form of online harassment, a target’s location is identified through an IP address or by hacking, and false emergency reports are phoned in to local police, who sometimes respond with SWAT teams (hence the name). Police are often required to respond to reports of potentially suicidal individuals, and an artificially intelligent system designed to detect suicidal statements would also be required to report them, which leaves it vulnerable to hackers trying to use the system to falsely report a targeted person as suicidal. Some states, including California, even have statutes allowing the nonconsensual detention of individuals deemed a risk to themselves, and evidence for that detention can take the form of statements made to an online crisis chat line or website. Just as online fraudsters can intercept an insecure connection to a bank website and send false data (or capture the real data a user is submitting), a malicious hacker could exploit such a system by intercepting communications to a mental health website or chatbot and sending false or inflated statements.
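The usual defense is the one banks rely on: encrypt and authenticate the connection so that an eavesdropper can neither read the messages nor quietly replace them. As a rough sketch, a client might refuse to send anything sensitive except over a verified TLS connection; the endpoint and payload below are made up for illustration.

```python
import requests  # third-party HTTP library with built-in TLS certificate checks

CHAT_ENDPOINT = "https://example-crisis-service.org/api/messages"  # hypothetical URL

def send_message(text: str) -> requests.Response:
    if not CHAT_ENDPOINT.startswith("https://"):
        raise ValueError("Refusing to send sensitive text over an unencrypted channel.")
    # verify=True (the default) makes the client reject servers whose TLS
    # certificates don't check out, which blocks a simple interceptor from
    # impersonating the service and injecting or altering messages in transit.
    return requests.post(CHAT_ENDPOINT, json={"text": text}, timeout=10, verify=True)
```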

A lack of privacy and a lack of regulation plague mental health apps

Mental health apps have proliferated without evidence that they work, and in many cases without the privacy protections that would accompany a traditional visit to a clinician or psychiatrist. Many mental health apps don’t encrypt identifying information like addresses, names, and birthdates, and some offer incorrect or unhelpful advice. Others have unintended consequences, amplifying symptoms or problematic drinking by reminding users of their problems every time they open their phones.
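Encrypting identifying fields before they ever leave the device is not an exotic requirement. Here is a rough sketch using the open-source cryptography package’s Fernet recipe; the profile fields are placeholders, and a real app would also need careful key management.

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key must live somewhere secure (e.g., the platform keystore);
# generating it inline here is only for illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

# Placeholder identifying fields of the kind many apps store in the clear.
profile = {"name": "A. User", "birthdate": "1990-01-01", "address": "123 Example St."}

# Encrypt the record before writing it to disk or sending it anywhere.
token = fernet.encrypt(json.dumps(profile).encode("utf-8"))

# Only a holder of the key can recover the original values.
restored = json.loads(fernet.decrypt(token).decode("utf-8"))
assert restored == profile
```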

Developers of mental health tech tools have good intentions, for the most part. We spend enough time communing with our phones as it is; they ought to be able to help us in return. But the sophistication of our artificial intelligences hasn’t evolved sufficiently for them to provide health advice, and perhaps we shouldn’t expect Siri to be a first responder.   

Listen now: