Why Infrastructure Providers Should Stay Out of Content Policing
Cloudflare’s recent headline-making decision to refuse its services to KiwiFarms—a site notorious for allowing its users to wage harassment campaigns against trans people—is likely to lead to more calls for infrastructure companies to police online speech. Although EFF would shed no tears at the loss of KiwiFarms (which is still online as of this writing), Cloudflare’s decision re-raises fundamental, and still unanswered, questions about the role of such companies in shaping who can, and cannot, speak online.
The deplatforming followed a campaign demanding that Cloudflare boot the site from its services. At first the company refused, but just 48 hours later it removed KiwiFarms from its services and issued a statement outlining its justifications for doing so.
While this recent incident serves as a particularly pointed example of the content-based interventions that infrastructure companies are increasingly making, it is hardly the first:
- In 2017, GoDaddy, Google, and Cloudflare cut off services for the neo-Nazi site Daily Stormer after the site published a vitriolic article about Heather Heyer, the woman killed during the Charlottesville rally. Following the incident, Cloudflare CEO Matthew Prince famously stated: “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.”
- In 2018, Cloudflare preemptively denied services to Switter, a decentralized platform by and for sex workers to safely connect with and vet clients. Cloudflare blamed the decision on the company’s “attempts to understand FOSTA,” the anti-trafficking law that has had wide repercussions for sex workers and online sexual content more generally.
- In 2020, as Covid lockdowns made in-person events largely untenable, Zoom refused to support virtual events at three different universities, ostensibly because one of the speakers—Leila Khaled—participated in airplane hijackings fifty years ago and is associated with an organization the U.S. government has labeled as “terrorist.” The company had previously canceled services for activists in China and the United States who were organizing commemorations of the Tiananmen Square massacre, citing adherence to Chinese law.
- In 2022, during the early stages of Russia’s invasion of Ukraine, governments around the world pressured internet service providers to block state-sponsored content from Russian outlets, while Ukraine reached out to RIPE NCC, one of the five Regional Internet Registries, which serves Europe, the Middle East, and parts of Central Asia, asking the organization to revoke IP address delegations to Russia.
These takedowns and demands raise thorny questions, particularly when providing services to one entity risks enabling harms to others. If it is not possible to intervene in a necessary and proportionate way, as required by international human rights standards, much less in a way that will be fully transparent to—or appealable by—users who rely on the internet to access information and organize, should providers voluntarily intervene at all? Should there be exceptions for emergency circumstances? How can we best identify and mitigate collateral damage, especially to less powerful communities? What happens when state actors demand similar interventions?
Spoiler: this post won’t answer all of those questions. But we have noticed that many policymakers are trying to do so without really understanding the variety of services that operate “beyond the platform.” And that, at least, is a problem we can address right now.
The Internet Is Not Facebook (or Twitter, or Discord, etc.)
There are many services, mechanisms, and protocols that make up the internet as we know it. The most essential of these are what we call infrastructural services, run by infrastructure providers. We might think of infrastructural services as belonging to two camps: physical and logical. Physical infrastructure is the easier of the two to identify: undersea cables, servers, routers, internet exchange points (IXPs), and the like. These things make up the tangible backbone of the internet. It’s easy to forget—and important to remember—that the internet is a physical thing.
The logical layer of internet infrastructure is where things get a little tricky. No one will contest that internet protocols (like HTTP/S, DNS, and IP), internet service providers (ISPs), content delivery networks (CDNs), and certificate authorities (CAs) are all examples of necessary infrastructural services. ISPs give people access to the physical layer of the internet; internet protocols give their computers a coherent set of rules for communicating across it; CDNs deliver websites’ content quickly and reliably; and CAs issue the certificates that let browsers verify that a site is who it claims to be. All of these are essential for platforms to exist and for people to interact with them online. This is why we advocate that these services remain content-neutral: they are essential to freedom of expression online and should not be handed the editorial power to decide what can and cannot exist online, beyond what the law already requires.
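To make that dependence concrete, here is a minimal sketch in Go of how even a single HTTPS connection quietly relies on two of these logical-layer services: DNS to resolve the name, and a certificate authority to vouch for the server’s identity. The hostname is a placeholder and the code is illustrative only; it is not tied to any provider or site discussed in this post.

```go
// A minimal sketch of how a single HTTPS request leans on logical
// infrastructure: DNS resolves the hostname, and the TLS handshake is
// only trusted because a certificate authority vouched for the server.
// The hostname below is a placeholder, not any site from this post.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

func main() {
	host := "example.org" // placeholder hostname

	// Step 1: DNS maps a human-readable name to IP addresses.
	addrs, err := net.LookupHost(host)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS answers:", addrs)

	// Step 2: TLS succeeds only if the server presents a certificate
	// chaining up to a CA the operating system already trusts.
	conn, err := tls.Dial("tcp", net.JoinHostPort(host, "443"), &tls.Config{ServerName: host})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("Certificate issued by:", cert.Issuer.CommonName)
}
```

Take away either step and the site effectively disappears for ordinary users, which is precisely why we treat these services as infrastructural.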
There are plenty of other services that work behind the scenes to make the internet function as expected. Services like payment processors, analytics plugins, behavioral tracking mechanisms, and some cybersecurity tools keep platforms financially viable or operational, and they occupy a gray-area gradient between what we consider essentially infrastructural and what we do not. Denying their services can have varying degrees of impact on a platform. Payment processors, for example, are essential for almost any website that needs to collect money to keep its business or organization online. One could argue that behavioral tracking mechanisms and advertising trackers also provide companies financial viability in competitive markets; even so, we won’t argue that tracking tools are infrastructural.
But when it comes to cybersecurity tools like DDoS protection through reverse proxy servers (what Cloudflare provided to KiwiFarms), it’s not so easy. A DDoS protection mechanism doesn’t put a site online in the first place—it shields the site from attacks that could knock it offline. And unlike ISPs, CAs, or protocols, this type of cybersecurity tool isn’t a service closely guarded and defined by authoritative entities; it is something anyone with sufficient technical expertise can build, though no platform is guaranteed access to good programmers. In the case of KiwiFarms, the site has transitioned to a modified fork of a free and open source load balancer to protect against DDoS and other bot-driven attacks.
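For readers unfamiliar with the mechanics, the sketch below shows the basic reverse-proxy idea in Go: clients talk to the proxy, the proxy talks to the origin server, and abusive traffic can be shed before it ever reaches the origin. The origin address and the per-IP request limit are illustrative assumptions; this is a toy, not a description of Cloudflare’s service or of the load balancer KiwiFarms now uses.

```go
// A toy reverse proxy with naive per-IP rate limiting, to illustrate how
// a shielding layer sits between clients and an origin server. The origin
// URL, port, and 100-requests-per-minute limit are illustrative only.
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"time"
)

var (
	mu     sync.Mutex
	counts = map[string]int{} // requests seen per client IP in the current window
	limit  = 100
	window = time.Minute
)

// allow records one request from ip and reports whether it is under the limit.
func allow(ip string) bool {
	mu.Lock()
	defer mu.Unlock()
	counts[ip]++
	return counts[ip] <= limit
}

func main() {
	origin, err := url.Parse("http://127.0.0.1:8081") // hypothetical origin server
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(origin)

	// Reset the per-IP counters at the start of each window.
	go func() {
		for range time.Tick(window) {
			mu.Lock()
			counts = map[string]int{}
			mu.Unlock()
		}
	}()

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		if !allow(ip) {
			// Abusive traffic is rejected here, before reaching the origin.
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

Real DDoS mitigation is far more sophisticated, but the architectural point stands: this layer filters traffic in front of a site rather than deciding whether the site exists at all.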
Interventions Beyond Platforms Have Different Consequences
It’s hard for infrastructure providers to create policies that uphold the requirements for content moderation established by international human rights standards, and it’s particularly challenging to build those policies and monitoring systems when individual rights appear to conflict with one another. The consequences of their decisions also vary significantly.
For example, it’s notable that far less ink was spilled by Cloudflare and by the tech press when the company terminated service to Switter, in just one example of SESTA/FOSTA’s harmful consequences for sex workers. Yet it’s these types of sites that are impacted the most. Platforms that are based outside the global north, or that have more users from marginalized communities, seldom have the same alternatives for infrastructure services—including security tools and server space—as well-resourced sites and even less-resourced online spaces based in the U.S. and Europe. For those users, policies that support less intervention, and the ability to communicate without being vulnerable to the whims of company executives, may be a better way to help people speak truth to power.
Online actions create real-world harm—and that harm can flow in multiple directions. But infrastructure providers are rarely well-placed to evaluate it. They may also face conflicting requirements and demands based on the rules and values of the countries in which they operate. Cloudflare itself noted that previous interventions led to an increase in government takedown demands.
We don’t have a simple solution to these complex problems, but we do have a suggestion. Given these pressures, the thorny questions they raise, and the importance of ensuring that users have the ability to speak up and express themselves without being vulnerable to the whims of company executives, providers that can’t answer those questions consistently should do their best to stay focused instead on their core mission: providing and improving reliable services so that others can build on them to debate, advocate, and organize. And policymakers should focus on helping ensure that internet policies support privacy, expression, and human rights.