Election integrity on Facebook in the face of AI (Guest Author: The Ethical Reckoner)
Meta’s Local Election Integrity Plans
I have the pleasure of having a guest author, Emmie Hine, who writes The Ethical Reckoner, a weekly take on tech news. She’s also a PhD candidate in Law, Science, and Technology at the University of Bologna and KU Leuven, where she researches the ethics and governance of emerging technologies, including AI and extended reality, with a particular focus on China. Emmie previously worked as a software engineer; she speaks English and Mandarin, but her favorite language is TypeScript.
Elections determine who will lead a country to prosperity or decay, and artificial intelligence may be used to influence how we elect our leaders. It is imperative that we, as citizens of our respective countries, be aware of the challenges and risks of artificial intelligence systems shaping our destiny. Below, the Ethical Reckoner gives her synopsis and analysis of AI’s potential influence on elections and its use on Meta’s social media platforms.
The Synopsis (The Ethical Reckoner)
Meta’s Nick Clegg is downplaying AI as a threat to elections, saying that there haven’t yet been concerted efforts to use it to subvert elections, although AI has already been used in elections in Taiwan, Bangladesh, and South Korea, among others. Additionally, cybersecurity researchers caution that China is ramping up its disinformation efforts, especially with AI. It’s true that most misinformation out there is not AI-generated, and the AI disinformation that’s been most successful has been AI-enhanced rather than AI-generated, audio-based, and targeted at lesser-known people. But this kind of disinformation must be addressed as well, and just because more sophisticated campaigns haven’t happened doesn’t mean they can’t. So let’s look at Meta’s anti-misinformation election toolkit, paying particular attention to how it can be used to counter AI disinformation.
Meta’s ecosystem goes beyond the Facebook, Instagram, and Threads apps. Facebook alone has Messenger, Facebook Marketplace, Facebook Live, Facebook Gaming, and (news to me) Facebook Dating, creating many different vulnerability surfaces that must be addressed. Meta also owns WhatsApp, the world’s most widely used messaging app. Each of these can be exploited for disinformation of all stripes in its own way.
Meta knows that its products can be—and have been—used for disinformation, so it’s taking steps to counter it. It’s issued documents detailing how it’s safeguarding elections in Europe, India, South Africa, and the US. A lot of this involves working with local fact-checking partners to assess local misinformation and label or flag posts. This includes addressing context-specific misinformation, like banning “false claims about someone from one religion physically harming or harassing another person or group from a different religion” in India, or supporting the development of digital literacy tools in South Africa. If fact-checkers flag a piece of content as false, Meta reduces its distribution and adds a warning label, which Meta says keeps 95% of people from clicking through. Meta has also taken measures to reduce the spread of misinformation on WhatsApp by limiting the number of groups you can forward a message to.
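To make that flow concrete, here’s a minimal sketch in Python of the demote-and-label mechanism and the forwarding cap. Everything in it is illustrative: the class, the demotion factor, and the forward limit are assumptions chosen for clarity, not Meta’s actual systems or numbers.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    distribution_weight: float = 1.0   # relative reach in feed ranking
    warning_label: str | None = None

def apply_fact_check(post: Post, rating: str) -> Post:
    """Demote and label a post that independent fact-checkers rated false,
    rather than removing it outright."""
    if rating == "false":
        post.distribution_weight *= 0.1  # hypothetical demotion factor
        post.warning_label = "False information: checked by independent fact-checkers"
    return post

# WhatsApp-style forwarding cap: refuse forwards beyond a fixed number of chats.
MAX_FORWARD_TARGETS = 5  # assumed value; the piece doesn't state the exact limit

def forward_message(message: str, targets: list[str]) -> list[str]:
    if len(targets) > MAX_FORWARD_TARGETS:
        raise ValueError(f"Can only forward to {MAX_FORWARD_TARGETS} chats at once")
    return [f"{t}: {message}" for t in targets]
```

The design choice worth noticing is that the flagged post is never deleted: friction (less reach, a warning interstitial, capped forwarding) does the work instead of outright removal.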
The Analysis (The Ethical Reckoner)
Meta is no stranger to its platforms being used for disinformation, from COVID disinformation to Chinese influence operations. But is it prepared for AI-related disinformation? Its AI-generated content policy was criticized by the Oversight Board as too narrow because it only covered videos created or altered to make someone “appear to say something they didn’t say,” which would be taken down; generative AI can now also fake audio or generate content that makes someone appear to do something they didn’t do. Meta’s AI ad policy is more recent, which created a disconnect: AI-generated or manipulated content showing someone doing or saying something they didn’t had to be labeled in political ads, so some AI-generated content was permitted in ads but prohibited on Facebook, while other AI-generated content had to be labeled in ads but not on the rest of Facebook.
After the Oversight Board decision, Meta will start requiring more kinds of AI-generated content to be labeled across its platforms in May and, instead of removing it, will rely on labels to add context. However, while Meta will be able to detect some content as AI-generated if it was made with partner companies’ AI tools (including those of OpenAI, Google, and Microsoft, among others), it will otherwise rely on users to disclose when they’ve used AI to manipulate content. In the absence of labels or metadata, there’s no way to detect with 100% accuracy when content is AI-generated or altered, and not every generative AI tool will embed the markers that Meta will be trying to detect. This will probably catch most AI-generated content, but bad actors will deliberately avoid tools whose output can be automatically detected and flagged, creating the risk that manipulated media, potentially part of disinformation campaigns, will end up in the Meta ecosystem without labels.
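The detection logic this implies is roughly “label if provenance metadata is present or the uploader discloses, otherwise do nothing.” Here is a minimal sketch under those assumptions; the marker names and the label text are hypothetical, not Meta’s real identifiers or any standard’s actual API.

```python
# Assumed provenance markers that partner tools might embed in file metadata.
KNOWN_PROVENANCE_MARKERS = {"partner_ai_watermark", "content_credentials"}

def label_upload(metadata: dict, user_disclosed_ai: bool) -> str | None:
    """Return an AI-content label if provenance data or self-disclosure
    indicates the upload is AI-generated; otherwise return no label."""
    if KNOWN_PROVENANCE_MARKERS & metadata.keys():
        return "Made with AI"   # detected via embedded provenance metadata
    if user_disclosed_ai:
        return "Made with AI"   # uploader voluntarily disclosed AI use
    return None                 # stripped metadata + no disclosure: the gap

# A bad actor's upload falls straight through:
assert label_upload(metadata={}, user_disclosed_ai=False) is None
```

That last line is the whole problem in miniature: content generated with a non-partner tool, or with its metadata stripped, arrives unlabeled and waits for a human fact-checker to notice it.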
Then we’re effectively back in the situation we’ve been in until now—relying on Meta’s system of fact-checkers to figure out what’s AI-generated and what’s not. And while Meta does have fact-checking partners in many countries and languages, non-English content moderation has historically been worse than English content moderation, which might put elections elsewhere in the world at greater risk. As I’ve written before in the Ethical Reckoner, if the most attention is paid to marquee elections in the US, then down-ballot races and elections in other countries will be vulnerable. It’s good that Meta is issuing documents about how it’s addressing elections in India and South Africa, but in the biggest election year in human history, we won’t know how effective its policies are until the dust settles—and if they don’t work, it will be too late.