News

A new preprint paper looks at the ways Facebook Page operators are using AI image models to create surreal content and generate online engagement.

The Stanford Internet Observatory and Social Media Lab will hold a March 13 convening with the Biden-Harris Administration’s Kids Online Health & Safety Task Force and leading experts.

A guide to the approaches adopted or proposed by legislators to protect children from online harms.

The seventh issue features four peer-reviewed articles and four commentaries.

Texas and Florida are telling the Supreme Court that their social media laws are like civil rights laws prohibiting discrimination against minority groups. They’re wrong. (Lawfare)

From Tech Policy Press, by Dave Willner and Samidh Chakrabarti, both of the Program on Governance of Emerging Technologies at the CPC.

French President Emmanuel Macron has announced his intention to regulate minors' access to screens, whether on phones, computers, tablets, or even game consoles. He has brought together a group of experts, including Florence G'sell of the Program on Governance of Emerging Technologies.

A new report identifies hundreds of instances of exploitative images of children in a public dataset used for AI text-to-image generation models.

Daphne Keller of the Program on Platform Regulation and Francis Fukuyama, Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies and Director of the Ford Dorsey Master's in International Policy at Stanford, have filed an amicus ("friend of the court") brief in the NetChoice Supreme Court cases.

New work in Nature Human Behaviour from SIO researchers and co-authors looks at how generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and may undermine public trust.

The Kids Online Safety Act (KOSA) has bipartisan support from nearly half the Senate and the enthusiastic backing of President Joe Biden, but opponents fear the bill would cause more harm than good for children and the internet.

A new research initiative seeks proposals from researchers studying trust and safety in the majority world. Applications are due January 30, 2024.

Schaake will serve alongside experts from government, the private sector, and civil society, and will engage and consult widely with existing and emerging initiatives and international organizations to bridge perspectives across stakeholder groups and networks.

Decentralized social networks may be the new model for social media, but their lack of a central moderation function makes it more difficult to combat online abuse.

City authorities are trying out new methods to eliminate 'undesirable' content.

The Journal of Online Trust and Safety published peer-reviewed research on privacy, deepfakes, crowd-sourced fact checking, and what influences online searches.

YouTube rabbit holes are rare, but an SIO Scholar finds the platform can still help alternative and extremist channels build audiences.

Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments. Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn’t true. (From the New York Times)

Judge's decision on protest anthem puts ball back in government's court. (Published in NIKKEI Asia by Charles Mok)