Online terrorism and violent extremism are cross-platform and transnational by nature. Nobody has just one app on their phone or their laptop, and bad actors are no different. These trends are evident in case studies—from the international recruitment of foreign terrorist fighters, including women, by the Islamic State, to the violence-inducing conspiracy theories of QAnon.
Any effort to counter terrorism and violent extremism effectively must similarly go beyond one-country, one-platform frameworks. The next big challenge for governments and nongovernmental organizations (NGOs) working with tech companies is to embrace the reality that the internet and its services are highly heterogeneous, and that platforms with global user bases are increasingly not based in the United States.
State of Play
The ability of tech companies to share risk mitigation tools across platforms, as well as to work with governments and civil society to share trends and advance compatible crisis response frameworks, has come a long way in recent years. These endeavors have been fostered through initiatives like the Global Internet Forum to Counter Terrorism (GIFCT), where I work; Tech Against Terrorism (TAT); and the Global Network on Extremism and Technology (GNET). These organizations and networks work collaboratively with wider government-led forums—such as the EU Internet Forum, the United Nations’ Counter-Terrorism Executive Directorate, and the Christchurch Call to Action—in an effort to advance tech companies’ efforts to self-regulate and increase proactive responses.
To date, GIFCT and others have worked on cross-platform efforts for their member companies. This includes a hash-sharing database of hashes, or “digital fingerprints,” of photos and videos that have been identified as terrorist content, so that platforms can detect and remove that content if necessary. Learning from progress made in the child-safety space, hashed versions of labeled terrorist content allow identifiers of that content to be shared without sharing any user data or personally identifiable information.
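To make the mechanism concrete, here is a minimal Python sketch of how hash-based matching works in principle. It is illustrative only: the function and variable names are hypothetical, and production systems rely on perceptual hashes (such as PDQ for images) rather than the exact-match cryptographic hash used here, so that re-encoded or slightly altered copies still match.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hash, or "digital fingerprint," of a media file's raw bytes.

    SHA-256 keeps this sketch self-contained; real hash-sharing uses
    perceptual hashing so near-duplicate copies are still caught.
    """
    return hashlib.sha256(content).hexdigest()

# The shared database holds only fingerprints contributed by member platforms.
# No user data or personally identifiable information ever enters it.
shared_hash_db: set[str] = set()

def contribute_labeled_content(content: bytes) -> None:
    """A member platform adds the fingerprint of content it has labeled as terrorist material."""
    shared_hash_db.add(fingerprint(content))

def matches_shared_database(upload: bytes) -> bool:
    """Any member platform can check a new upload against the shared fingerprints."""
    return fingerprint(upload) in shared_hash_db
```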
Shared efforts also include a cross-platform incident response framework to react quickly to real-world attacks that have online elements, like the 2019 live-streaming of the Christchurch shootings, as well as a series of international working groups. These working groups bring together government officials, tech leaders, members of civil society, first responders, and other stakeholders to build best practices and shared understanding on topics such as crisis response, content algorithms and positive interventions for countering online extremism, transparency reporting frameworks, and technical approaches to countering terrorism.
Research on terrorist and violent extremist trends shows that a single online terrorist campaign often uses three or more platforms. Private coordination usually takes place on a smaller, less-regulated platform, such as an end-to-end encrypted chat service. A second platform is used for storing original copies of propaganda and media; think cloud storage or similar file-sharing sites. Core members or sympathizers then disseminate the prepared content on larger “amplification” outlets, inevitably the well-known social media platforms that everyone uses, to gain the most traction. Fighting terrorism online requires addressing this interplay, but no single platform or company has visibility into the trends elsewhere online.
Academic insights into adversarial trends and efforts to map which platforms are being exploited are key for tech companies to adapt their safety efforts accordingly. Government efforts to convene internet companies around counterterrorism and counterextremism also need to draw on expert insights to cast a wide net that brings smaller and more diverse platforms to the table.
Social media platforms, despite what some observers fear, are not psychic. Without strong on-platform signals, such as text or images shared alongside a link, they don’t inherently know that a URL shared on their platform leads to violating content hosted on a third-party site. They also often can’t tell that a user is a “terrorist” or “violent extremist” without obvious signals on their platforms. Research looking at the outlinks associated with one Islamic State publication showed that URLs shared on Twitter alone pointed to 244 different content-hosting platforms, with the content largely housed on lesser-known micro-sites such as 4shared.com, cloudup.com, or cloud.mail.ru.
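By way of illustration, the sketch below shows why off-platform intelligence matters for outlinks: absent a shared list of URLs already identified elsewhere as pointing to violating content, the hosting platform has nothing to match against. The list contents and function names here are hypothetical, not a real GIFCT or platform API.

```python
from urllib.parse import urlsplit

# Hypothetical shared list of outlinks that researchers or partner platforms
# have already identified as leading to violating content hosted elsewhere.
known_bad_outlinks = {
    "https://filehost.example/propaganda/video123",
}

def normalize(url: str) -> str:
    """Lower-case the host and drop query strings and fragments so trivially varied links still match."""
    parts = urlsplit(url.strip())
    return f"{parts.scheme}://{parts.netloc.lower()}{parts.path}"

def outlink_is_flagged(url: str) -> bool:
    """Check a shared link against the cross-platform list.

    Without this kind of shared, off-platform signal, a platform has no way to
    know that an otherwise innocuous-looking link leads to violating content.
    """
    normalized_known = {normalize(u) for u in known_bad_outlinks}
    return normalize(url) in normalized_known
```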
Beyond Social Media
Large, Silicon Valley-based social media companies will always rightfully be subject to scrutiny, but if policymakers are going to effectively challenge terrorism and violent extremism online, they need to think laterally and globally.
Violent extremist organizations are savvy about online branding and membership. These organizations, and their followers, increasingly engage in “swagification” as their audiences grow. Logos and subculture slogans of neo-Nazi and white supremacy groups are put on T-shirts, flags, caps, manifestos, and even “survival kits” for members and supporters to purchase. Financial technology and online marketplace platforms are often shocked when human rights groups and social justice NGOs flag these monetized products.
Looking at current efforts and advocacy for increased regulation online, the focus has been almost entirely on user-generated content on social media. Legislation in the EU, Australia, the United Kingdom, and elsewhere focuses almost exclusively on the fast removal of images and videos. However, many of the platforms being used to further terrorist and violent extremist efforts have less to do with official propaganda on social media and more to do with funding and coordination.
Conference dial-in services, hospitality platforms for room bookings, smaller chat platforms, gaming-adjacent communication platforms, and transportation applications have all been implicated in terrorist and violent extremist plots and events in recent years. To help guide these platforms, attention will have to shift to a wider range of safety tools. Logo detection, text classifiers, network deplatforming, and URL-sharing efforts are just some of the ways collaborations can further safety-by-design and proactive work.
We can’t simply algorithm our way out of the problem. Algorithmic oversight and tools that can enhance efforts are certainly needed, but tooling will always have to be paired with context and human oversight. Open source intelligence and research insights are necessary so that platform moderation teams have resources to help guide them. It is then the responsibility of platforms to take those insights and act on them in accordance with legal guidance and their policies. While tools and algorithms help platforms solve for scale and speed, human oversight and context resources are needed to ensure nuanced understanding and to mitigate any potential over-censorship.
Global Users, Global Platforms
For the past 10 to 20 years, people have associated global tech with Silicon Valley, or at least with big American companies. These companies have to adhere, at the very least, to U.S. laws, and often implicitly accept U.S. norms on free speech and other rights. But as they have expanded globally, their user base has also expanded exponentially among non-U.S. audiences. For counterterrorism and counterextremism efforts, this has meant the need for a huge scale-up in language comprehension and nuanced understanding of how social and cultural norms of hate speech and violence manifest internationally.
The indicators of violent extremism in the United States can’t be expected to look identical to indicators across Europe, Asia and Africa. Every region and country has its own violent extremist and terrorist organizations, each with specific sociopolitical histories that often include coded symbols, slogans and slurs. Yet not every tech company has the capacity to hire tens of thousands of moderators around the world. Only the largest monetized companies can afford this internal support infrastructure. Cross-sector efforts and public-private partnerships will remain key, particularly for the smaller platforms relying on third-party intel and tooling.
Lastly, global companies are emerging from non-U.S. markets more frequently. In an Organization for Economic Cooperation and Development survey of the top 50 social media platforms, 13 were based in China. These companies come from countries with different rules and oversight infrastructure, as well as different cultural norms around privacy and security. This is not inherently bad. However, there is no real supranational mechanism to institute digital norms, and even when human rights NGOs and UN bodies highlight the need for “human rights” oversight, GIFCT and those monitoring tech platform progress are already seeing varying interpretations of what that means.
Are tech companies solving for free speech, privacy, or safety? Oftentimes solving for one comes at the expense of the other two. Platforms navigating among these three necessary pillars have followed different paths. Maximal user privacy through end-to-end encrypted communication has been criticized as giving a free pass to child exploiters and terrorists. Safety- and security-focused policies to remove bad content faster are criticized for potential over-censorship, with possible ramifications for activists when data is handed over to governments. And free speech without parameters, left unchecked, leads to dehumanization and violence.
Cross-Sector Collaboration
There is, as always, no one solution to solve all the nuanced problems around countering terrorism and violent extremism online. However, to evolve and mitigate risk with the times, a big-tent approach is clearly needed. Multistakeholder forums will have to become more comfortable with non-U.S. companies at the table.
The next big question is, who is willing to come to the table and what can they do together? It will always be easier to judge those companies already at the table, those trying to provide some level of transparency and those willing to admit when something has gone wrong. What to do with companies unwilling to take part in dialogues or provide transparency will increasingly be a question for governments and lawmakers.
By Erin Saltman, July 11, 2021, Published on LAWFARE