This op-ed, authored by CDT’s Gabriel Nicholas, first appeared in Foreign Policy on July 8, 2024.
With AI, companies and policymakers focus less on the harms that people experience through everyday use and more on how bad actors could harness the underlying technology. Companies and governments have encouraged this shift by focusing on a practice called red teaming, in which researchers are given access to an AI system and attempt to break its safety measures. Red teaming helps ensure that malicious actors cannot use AI systems for worst-case purposes, such as accessing information on developing bioweapons or fueling foreign influence operations. But it reveals nothing about people’s real-world experiences with the technology.
Many AI companies make their products available to researchers, but no major company shares data on how people actually use those products, whether through transparency reports or by making chat logs available for research. Instead, researchers are left to guess how people use the technology. They may play the role of a user seeking medical advice, asking for a public figure’s personal information, or evaluating a resume, and then analyze the results for inaccuracies, privacy breaches, and bias. But without access to real-world usage data, researchers cannot know how often the problems they uncover actually occur, or whether they occur at all.
This data void also makes it difficult for government agencies to use research to inform policy. Agencies have limited resources to address AI-related concerns, and without information on the prevalence of different harms, they may struggle to allocate those resources effectively. For instance, the U.S. Cybersecurity and Infrastructure Security Agency, which is partly responsible for helping safeguard elections, may have to decide whether to prioritize educating the public about the dangers of using AI to access election information or to continue its existing efforts to warn election officials about the risk of being tricked by deepfakes and voice cloning.