Earlier this month, I submitted formal evidence to the US Federal Trade Commission’s (FTC’s) inquiry into the demonetisation of users by tech platforms, on behalf of the Free Speech Union (FSU).
The submission documents how companies like PayPal, Ko-fi, and Eventbrite have removed users simply for expressing or defending lawful political and philosophical beliefs that dissent from progressive orthodoxy.
In many of the cases cited, the FSU has provided legal, strategic, or media support. That frontline casework has given us a close-up view of how ideologically driven moderation policies, often far more restrictive than the law requires, can financially cripple heterodox creators, journalists, campaigners, and non-profits across the Anglosphere. In the most serious cases, these organisations are stripped of their ability to build an audience, raise funds, publish, organise, or campaign.
The impact is particularly severe for public-interest initiatives without institutional backing. For them, the loss of even modest funding can mean the difference between survival and closure.
I really hope this submission will help inform the FTC’s thinking as it considers how the US Government might act to prevent similar abuses and protect consumers from ideological deplatforming and financial exclusion.
It’s a long document — probably too long — but I wanted to share it with you for two reasons.
First, because in an increasingly digital society, politically motivated financial censorship is one of the gravest threats to basic liberties in the West. True, the worst corporate offenders may have kept their heads down for a year or two, but the threat remains embedded in their Terms and Conditions and Acceptable Use Policies. So it’s surely only a matter of time before we take another step towards something resembling China’s social credit system: a world where those who profess the “right” ideological views are rewarded with banking services, while dissenters are quietly cut off. To be forewarned is to be forearmed.
Second, because after spending nearly three weeks digging up the shocking case studies cited in our submission, it struck me that there is no single source that documents this bizarre and faintly Orwellian period between 2021 and 2024, when political views became grounds for financial exclusion. This submission may not be widely read, but it offers a unique account of what happened, how it happened, and why it could happen again at any moment.
For those who’d like to dig deeper, the full document is available below. You can also view it on the FTC’s website here.
Introduction
This submission from the Free Speech Union (FSU) responds to the Federal Trade Commission’s request for public comment on technology platform censorship by presenting three case studies of enforcement against lawful speech. The platforms examined—PayPal, Ko-fi, and Eventbrite—are not social media sites, but infrastructure services acting as important gateways to participation in today’s increasingly digital civic life.
The FSU is a non-partisan organization with more than 30,000 members that defends lawful free speech in the workplace and the public square. We’ve provided legal, strategic, and/or media support in most of the cases cited in this submission, and that frontline work gives us detailed insight into how ideologically driven platform policies can inflict real harm on users.
The three companies we discuss illustrate distinct but converging forms of speech control. PayPal is a global payments giant with more than 40 percent of the digital payments market and the power to cut off individuals and organizations from their income entirely. Ko-fi, a mid-tier fundraising site, defers to upstream providers such as PayPal but adds its own content moderation policies. The event-hosting site Eventbrite exemplifies how “Trust and Safety” enforcement has extended into removing events not for illegality but for challenging progressive orthodoxy.
One consequence of the digitization of civic life has been to concentrate power in the hands of a few key intermediaries like these. For many years, concerns that such de facto control over the infrastructure might be used to marginalize dissent were dismissed as speculative. Our examples suggest otherwise.
Although these examples are mostly drawn from the UK, they concern global platforms whose moderation policies affect creators, journalists, campaigners, and non-profits across the Anglosphere. Collectively, they show how vague content policies, opaque procedures, and unchecked corporate discretion are being weaponized throughout the digital economy to exclude users for their political or belief-based views—far beyond anything the law requires. What is lost is the ability to build and sustain an audience, to fundraise, publish, organize, and campaign—in short, the right to be heard.
The impact is particularly severe for public-interest campaigns without institutional backing. The loss of a few thousand dollars can mean the difference between survival and closure. Just as troubling is the chilling effect created by the fear of such a loss. The withdrawal of essential services on ideological grounds marks a new front in the campaign against pluralism.
The case studies that follow document individuals and organizations whose lawful speech has been suppressed through financial exclusion. As such, they speak directly to the FTC’s concern under Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices.”
And, in reality, what we document here is only the tip of the iceberg. Many instances of financial exclusion go unreported, not necessarily for lack of harm, but because users are afraid of reputational damage, retaliation, or further exclusion. As the FSU’s founder and director, Lord Young of Acton, has written: “When I got a message from PayPal telling me my personal account had been closed, my initial reaction was shame. Being deplatformed by a financial-services company, whether a bank or a payment processor, is a mark of Cain... Based on my own experience, I reckon the debanking phenomenon is far more widespread than we’ve been led to believe, mainly because most people it has happened to are too embarrassed to talk about it.”
This submission tells the stories we know. The many we do not may be more serious still.
Case Study: PayPal
In September 2022, PayPal orchestrated a wave of high-profile account terminations, withdrawing its services from organizations and individuals whose views challenged dominant political orthodoxies. Among those targeted was the FSU—a striking departure, since we do not challenge orthodoxies ourselves. PayPal, it seems, was no longer just punishing people who’d lawfully criticized powerful institutions or prevailing ideas; it was targeting an organization that defends their right to do so.
When our account was terminated without warning, we received only a generic message citing PayPal’s Acceptable Use Policy (AUP). No specific clause was identified, and no examples of offending content were provided. We were left to wonder if the company had confused our role with the views of those we support. But whether it was something we said, or something said by a client we defended, the episode illustrates how easily discretionary corporate speech codes can be weaponized to penalize users whose views the corporation disfavors.
At the time, PayPal was one of our primary donation channels and the platform through which we sold memberships and merchandise. The sudden and wholly unexpected loss of service placed real pressure on what was then still a fledgling organization. Our financial viability—and with it, our ability to continue operating—was put in jeopardy.
Rather than engage with us directly, PayPal briefed The Times of London that we had been removed because we shared a director with The Daily Sceptic, a news blog owned and run by Lord Young, which the company had accused of promoting “misinformation” about Covid-19. This content—which questioned lockdown policy and vaccine efficacy—was entirely lawful. That PayPal chose to manage its public image through selective media briefing, while refusing to communicate with us as its client, exemplifies the deep asymmetry of power that defines this form of opaque enforcement.
But PayPal’s briefing about “misinformation” did at least offer an insight into the wider pattern of account closures it carried out at the same time. And in many of these cases, personal accounts were terminated alongside organizational ones. Indeed, that’s what happened to Lord Young, whose own account was closed at the same time as those of the FSU and The Daily Sceptic, despite each being legally distinct. In the end, it took an intervention from the UK government to get all three reinstated.
The UK Medical Freedom Alliance, which promotes informed consent and medical transparency around Covid-19 vaccines, was another such case. Its founder, Liz Evans, had her personal account terminated alongside the organization’s.
Covid-19 was not the only trigger for PayPal closures. In America, the independent website Consortium News (founded in 1995 by the late investigative journalist Robert Parry) and the left-wing MintPress News were both deplatformed. So too were Colin Wright—an evolutionary biologist and public intellectual known for criticizing gender ideology—and Moms for Liberty, a US-based conservative group focused on parental rights.
One of the most striking cases in PayPal’s sweep, however, was Covid-related—and again targeted an individual as well as an organization. During the pandemic, Molly Kingsley, co-founder of the UK children’s advocacy group UsForThem, had her personal account frozen and the group’s access to its funds abruptly severed. After initially saying that the organization could withdraw its remaining balance, PayPal informed UsForThem that the funds would be held for 180 days while it considered whether to impose “damages.”
The financial impact was immediate. As Kingsley wrote at the time: “Like many small organizations, we relied on the funds trickling into our PayPal account to keep the lights on. That stream of donations was the difference between financial viability and insolvency—the difference, that is, between being able to campaign and having to stop.”
Both accounts were reinstated 22 days later, following an intervention from the UK’s Financial Conduct Authority, which also requested an urgent explanation from PayPal. Nonetheless, no explanation beyond a generic reference to the company’s AUP was provided until January 2025, and then only after years of PayPal obfuscation had finally driven Kingsley to initiate pre-action legal correspondence.
Only then did the company confirm that the termination was due to “content published by UsForThem relating to mandatory Covid-19 vaccinations and school closures.” Between May and September 2022, PayPal had even compiled a dossier of Kingsley’s published work, including excerpts from a book she co-authored, The Children’s Inquiry, criticizing pandemic policies that, in her view, prioritized adults at the expense of children. These criticisms were, of course, both lawful and clearly within the bounds of public interest advocacy.
In response to this belated explanation, Kingsley said: “PayPal appears to have admitted what we had suspected all along: that it was engaged in politically motivated debankings of those of us who criticized the government’s response to Covid, and the lockdown narrative in particular.”
The fact that it took pre-action legal correspondence (and more than two years) to extract even this limited account of its actions from PayPal is telling. Very few individuals or small organizations possess the legal knowledge, resources, and persistence required to pursue such disclosures.
So how did we get here—a world in which a financial intermediary can bully a campaign group, freeze its assets, and threaten to extract damages for publishing lawful opinions?
The answer lies in the far-reaching discretionary powers that PayPal’s AUP and User Agreement (UA) grant it to limit accounts and withhold funds. The AUP prohibits activities involving the “promotion of hate, violence, racial or other forms of intolerance that is discriminatory,” a phrase that remains undefined and therefore open to broad interpretation. The UA goes further still, allowing the company to take action against any user who controls “a PayPal account that is linked to another account that has engaged in any of these restricted activities,” including providing “false, inaccurate or misleading information.”
Enforcement can also be triggered by a user’s “interactions with another user or the other user’s activities,” even where the original user has not violated any rule themselves. This logic—a sweeping form of guilt by association—appears to have underpinned the simultaneous termination of Lord Young’s personal account, the FSU’s, and The Daily Sceptic’s.
Most concerning of all have been PayPal’s attempts to formalize punitive financial penalties for speech. As early as 2021, the company’s UA authorized it to deduct up to $2,500 in “liquidated damages” for each alleged breach of its AUP—a clause later mirrored in its UK terms, and the same clause that allowed the company to lock UsForThem out of its account while it considered “damages.” In effect, the company had granted itself the ability to impose fines on users for lawful expression, with no oversight, no definition of what constituted a violation, and no route to appeal.
On 26 September 2022, the company published a revised AUP, designed to codify all of this and scheduled to take effect on 3 November. For the first time, the proposed update stated explicitly that users could be fined for spreading “misinformation,” for engaging in speech deemed “harmful,” “objectionable,” or “a risk to wellbeing,” or for sharing material judged “otherwise unfit for publication.” Any of these would have triggered the same $2,500 penalty per violation—at PayPal’s sole discretion.
The backlash was immediate. Google searches for “delete PayPal” surged by 1,392 percent. The hashtag #BoycottPayPal trended globally. David Marcus, the company’s former president, condemned the changes: “A private company now gets to decide to take your money if you say something they disagree with. Insanity.” Elon Musk, PayPal’s co-founder, publicly agreed.
On 9 October, PayPal reversed course. A spokesperson claimed the language had been included “in error” and that the company never intended to fine users in this way.
That PayPal backed down under pressure is welcome. But without regulatory scrutiny, there is little to stop such clauses from quietly reappearing in future revisions.
Case Study: Ko-fi’s Suspension of Conservatives for Women
Ko-fi is a UK-based crowdfunding platform that allows creators, writers, and campaigners to receive small donations. In theory, it offers a user-friendly alternative to larger services like Patreon. In practice, it has become a risky place for users with gender-critical or otherwise heterodox views, thanks to its extensive enforcement powers, vague content policies, and upstream reliance on providers like PayPal. Its Terms of Use state that users are bound by these third parties’ rules, including the same AUP that triggered PayPal’s egregious enforcement sweep in 2022.
In the wake of that sweep, Ko-fi euphemistically “unpublished” the donation page of Conservatives for Women (CfW), a right-of-center UK campaign group that works to “raise awareness of issues which threaten the safety and dignity of women.” According to the group, the Ko-fi page “enabled us to receive occasional small donations which were very welcome as we get no funding from anywhere else apart from the sales of our postcards.” The “unpublishing” effectively ended the group’s primary income stream, including its ability to collect donations at conferences and public events.
At first, Ko-fi offered no explanation, linking only to a generic “help” article stating that pages are “usually (but not always)” removed for violating Ko-fi’s content policies, which prohibit, among other things, “culturally insensitive language,” “hate speech,” “harassment,” and “harmful misinformation.”
As CfW noted at the time: “We hope this is just a genuine mistake, but we are concerned it may be yet another example of the withdrawal of financial services from groups who refuse to adhere to groupthink on certain issues and those who speak up for women in particular.”
Unfortunately, those concerns were well-founded. When CfW pressed for more detail, Ko-fi responded: “Our guidelines prohibit the targeting and/or undermining of specific groups—in this case those who identify as trans. Ko-fi, in the interest of keeping our community a safe and inclusive space for creators, cannot provide a platform for such views/narratives, either directly on Ko-fi pages or on individual web pages and profiles associated with a Ko-fi page.”
This is significant for two reasons. First, it confirms that CfW was removed not for unlawful speech, but for expressing gender-critical views—views that UK courts have repeatedly affirmed are protected under the Equality Act 2010. By late 2022, the law was abundantly clear: in Forstater v CGD Europe (2021), the Employment Appeal Tribunal had ruled that belief in the immutability of biological sex qualifies as a protected philosophical belief.
Second, Ko-fi’s rationale extended beyond content hosted on its platform. The company’s mention of “individual web pages and profiles associated with a Ko-fi page” indicated that enforcement can be triggered by off-platform views.
The FSU offered support to CfW, and continues to assist other gender-critical people removed from Ko-fi for expressing protected beliefs. In every case, users’ pages were suspended or deleted without clear explanation and with no right to appeal.
These cases confirm that Ko-fi’s enforcement does not just target specific statements, but entire belief systems, especially those at odds with dominant ideological norms. They also reveal a structurally unaccountable system: one with no obligation to show harm or illegality, and no meaningful procedural safeguards.
As of May 2025, Ko-fi’s Terms continue to prohibit users from engaging in “any other activity that Ko-fi may deem in its sole discretion to be unacceptable,” and reserve the right to remove content that “in our opinion, is reasonably necessary to protect our business.” Again, these clauses permit broad, subjective enforcement without requiring evidence of misconduct.
The result is a system with no clear limits, no external oversight, and no reliable protection for lawful dissent, enabling a form of censorship that is not just unaccountable, but also ideologically selective. Users—such as CfW—expressing belief-based views, even if lawful and protected, can be removed without process, remedy, or redress.
Case Study: Eventbrite’s Deplatforming of Gender-Critical Events
In late 2022, ticketing platform Eventbrite abruptly removed listings, refunded tickets, and banned events hosted by women’s groups and authors critical of gender ideology and its impact on women’s rights, single-sex spaces, and freedom of belief. Each takedown was carried out by Eventbrite’s Trust and Safety Team—which it describes as serving “a critical role in allowing our community to experience Eventbrite in a way that fosters trust, safety, diversity, and inclusiveness.”
Event organizers received identically worded boilerplate notices citing violations of the platform’s ban on “hateful, dangerous, or violent content”—a grouping that collapses the distinction between illegal and lawful categories of speech.
Events affected included a screening of the documentary Adult Human Female in Nottingham, and the London launches of two books: Defending Women’s Spaces by Karen Ingala Smith, a long-term campaigner against domestic violence; and Transpositions: A Personal Journey into Gender Criticism, edited by barrister Sarah Phillimore, with a foreword by the comedy writer Graham Linehan. In each instance, the event was removed, the tickets refunded, and the generic takedown notice issued without reference to specific content or any opportunity to appeal.
These and other removals took place even though, as mentioned in the section on Ko-fi, the UK courts had by then ruled that gender-critical beliefs are protected under the Equality Act 2010. The removals also left the organizers publicly associated with hate-based policy violations, despite no such violations having been found.
Phillimore launched a crowdfunder to pursue legal action, alleging that Eventbrite had breached Section 29 of the Equality Act, which prohibits discrimination by service providers on the basis of protected belief. The company’s legal team responded by denying that it was a service provider at all and insisting that any legal dispute be heard in San Francisco.
“I managed to raise about £20,000 via crowdfunding,” Phillimore later wrote, “which was soon swallowed up in solicitors’ letters and conferences. The thought of embarking on litigation in San Francisco, with the inevitable risk of costs, was terrifying; I let it go.”
But by then, she had already submitted a Data Subject Access Request (DSAR)—a right under UK data protection law that gives individuals access to information held about them. Among 168 heavily redacted files, she found internal notes showing that Eventbrite staff had pre-emptively labeled the speakers “transphobic.” One note read: “The event violates our hate speech policy due to the event organizer and multiple speakers confirmed for the event having had [sic] engaged in transphobic activity and content.” Another said: “The book is not out yet so we don’t know what content the book has, but it is clear the book is about gender identity and recognition of trans individuals.”
This admission—that the event was prevented not on the basis of the book’s content, but on reputational assumptions—offers a rare glimpse into Eventbrite’s internal enforcement logic, which enables pre-emptive censorship and ideological filtering under the guise of “Trust and Safety.”
As of May 2025, Eventbrite’s Community Guidelines continue to support this approach. The company may delete from its listings any content that, “in our sole discretion,” it deems to jeopardize “trust, safety, diversity, and inclusiveness.” There is still no reference to a structured appeals process.
The guidelines define “hate” to encompass not only threats or incitement, but also “expressions of contempt,” “disparagement,” and “claims that a person or group is less than adequate.” These categories align with activist framings of harm and are broad enough to cover lawful and mainstream political or philosophical positions—including gender-critical beliefs.
The policy also prohibits “intentionally misclassifying specific people’s preferred identity... in a manner intended to degrade, shame, or insult.” While aimed at preventing harassment, this clause can be—and has been—used to deplatform users who assert that biological sex is real and immutable.