Zeitgeist

The Deplatforming Wars, Part II: A New Hope

December 11, 2018

As a weapon of leftist ideology enforcement, deplatforming is proving to be strikingly effective, because it accomplishes two things: It removes opposing viewpoints from the marketplace (if one team can’t get to the playing field, the other team wins by default), and it scares potential dissenters into silence, forcing them to self-censor so as not to lose their online presence. Deplatforming is how the left is taking its bubble, that safe terrarium occupied by Hollywood elites and mainstream journalists in which opposing views are out of sight and mind (the “rather special world” lovingly spoken of by the late Pauline Kael), and making it everyone’s bubble.

I devoted last week’s column to the federal law that makes deplatforming possible: Section 230 (c)(2)(A) of the 1996 Communications Decency Act (CDA), which gives “interactive computer service providers” complete immunity to remove any content, including constitutionally protected speech (yes, the law specifically states that), for any reason. Over the past sixty years, the Supreme Court has generally expanded First Amendment protections beyond what a strict reading of the text would suggest. The amendment speaks only of Congress, but over the decades, SCOTUS has declared that other entities—state and local governments, schools, and institutions that accept public money—also cannot censor constitutionally protected speech. Even in the private workplace, there are limits (due to antidiscrimination laws) to the extent to which an employer can penalize an employee for expressing certain beliefs. But CDA 230 provides an out for online service providers. They can ban your content on a whim; they can kick you off a platform on a whim. It’s an immunity law, and a thorough one.

Last week was the history lesson. This week, strategy. Just how ironclad is CDA 230? Well, pretty damn ironclad. But that doesn’t mean there isn’t hope. The thing is, though, that “hope” requires rejiggering certain cherished conservative and libertarian principles. Specifically, the concept of governmental nonintervention in the affairs of the private sector. See, leftists have no problem buggering their principles on this one. Their most prized goal is in sight—the complete silencing of opposition thought—and all that matters is the win. Have you noticed how the left, always so quick to call for regulation and nationalization, has yet to do so with internet giants like Google, Facebook, and Twitter? Leftists realize that online companies only have the awesome power of CDA 230 speech suppression because they’re private. So all that “government runs things better and restrains the wild impulses of the robber barons” rhetoric gets chucked out the window. Because in this case the “robber barons” are leftists. So by all means, let’s keep the government’s sticky little fingers out of their business!

The online silencing of the right depends on the left embracing deregulation and nonintervention. Conversely, for the right to fight back, it’ll have to embrace regulation and intervention.

As Bugs Bunny would say, “Ironic, ain’t it?”

San Diego attorney Al Rava recently scored a major victory against the online dating service Tinder, which was forced to abandon its policy of higher pricing for older users. What made the victory important was that Tinder, an online entity, was held to the standards of a brick-and-mortar (B&M) business and subjected to state law (California’s Unruh Act, which prohibits discrimination by “business establishments of every kind whatsoever”). In the Tinder case, the 2nd District Court of Appeal ruled that Unruh’s “every kind whatsoever” overruled the CDA’s sweetheart immunity deal for online businesses.

But the Tinder case was about pricing, not speech removal. Following the Tinder ruling, Facebook hastily retreated from its policy of allowing paid advertisers to target one specific race, after a legal challenge argued that such ads violated federal antidiscrimination laws. In an amicus brief in the Facebook case, U.S. government attorneys stressed that CDA 230 immunity did not apply to Facebook ads, because Facebook was a partner in the ad creation process (i.e., a content creator). The amicus stressed that CDA 230 content removal immunity was still sacrosanct. All the same, another precedent was set, because another online entity was restrained by antidiscrimination laws (this time federal).

When Roommates.com was accused of discrimination because of an online questionnaire that forced prospective roomies to detail things like gender and sexual orientation, the 9th Circuit held that CDA 230 did not provide immunity for content produced by a site that could lead to potential discrimination. In other words, the court held that CDA 230 immunity could be restrained by federal antidiscrimination laws. But then, damn near ten years later, a different court ruled pretty much the opposite in a suit against Airbnb. In that case, what was at issue was the site’s policy of letting property owners demand a photo of prospective renters. The plaintiffs argued that this allowed property owners to discriminate against blacks. The initial judge was like, “That’s so covered by CDA immunity, bro,” but then a district court judge was “Whoa, that’s totally not covered by CDA immunity, brah.”

Faced with these contrasting rulings, Airbnb retreated, just like Facebook did. Now property owners can only demand photos of guests after they’ve already booked the reservation.

The point is, there’s no legal consensus here. This is brand-new territory. Courts have gone back and forth regarding CDA immunity for content a site creates (the Roommates questionnaire) and content it allows (one user demanding another user’s photo). But there’s yet to be a successful challenge to CDA 230’s content removal immunity. The Tinder case shackled an online entity to state law, the Facebook case demonstrated that CDA 230 immunity can be constrained by federal antidiscrimination laws, and the Airbnb case showed that the courts don’t know their ass from their gavels regarding the scope of CDA immunity.

So, is there any way to use the left’s beloved antidiscrimination laws, either state or federal, against CDA 230 content removal immunity? Using Unruh, as Al Rava did in the Tinder case, is “problematic.” Unruh does not prohibit discrimination based on ideology or politics, and Rava told me he’s doubtful that Unruh could ever be applied to viewpoint discrimination claims. Eugene Volokh, my go-to legal whiz whose WaPo blog is one of the best things on the ’net, straight-out said, forget it. CDA 230 immunity absolutely trumps state law, and likely trumps federal antidiscrimination laws. However, that was last year, before the Tinder and Facebook cases were resolved. The Tinder case did see state antidiscrimination laws successfully held against an online provider, and the Facebook case saw federal antidiscrimination laws held against an online provider.

So if antidiscrimination laws are the way to go, how can they be used regarding content removal? It’s unlikely (no, impossible) that any court would place viewpoint discrimination on the same level as racial, religious, or gender discrimination.

Ah-hah! But what if it can be shown that sites like Facebook and Twitter hold different races to different standards, and punish members of one race (or religion or gender) for saying things that members of a different race/religion/gender can say without sanction? When I floated that idea to Jeremy Malcolm, former senior global policy analyst for the Electronic Frontier Foundation, he replied, “I think that would be an interesting case to try,” adding that he believes antidiscrimination laws “would apply” in some cases.

I reached out to law professor Eric Goldman, whose Technology & Marketing Law Blog is a must-read for anyone interested in this topic. As a strong advocate for an internet unburdened by governmental interference, Goldman favors a robust interpretation of CDA 230 immunity. But what about content removal that creates a double standard whereby one “protected” group is allowed to say things that a different group isn’t? Does CDA immunity cover that?

“This is a tricky area, so I’m not sure what the answer would be. Prof. Volokh is right that Section 230 routinely applies to claims that a service exercised its editorial discretion over third party content, even if the intent or effect was to discriminate on the basis of some protected classification. However, it’s not clear Section 230 preempts all claims based on anti-discrimination laws.”

In other words, maybe. And getting a “maybe” from Goldman, with his pro-immunity bias, is encouraging to those who might want to forge this trail.

What’s abundantly clear is that even though the CDA was written in 1996, the relevant cases that will eventually determine CDA immunity limits are just now starting to come in. That’s what happens when you write defining internet law during the period when people were still using Mosaic to jack off to dial-up porn on Angelfire sites. The courts are currently grasping for consensus. So using antidiscrimination laws against CDA content removal immunity may work. Jared Taylor’s problem when he sued Twitter was that he went about things rationally (in typical Taylor fashion). “You say I’m a violent extremist, yet there’s not one piece of evidence that I am. Restore my account.” Sorry, old friend, but CDA immunity absolutely allows arbitrary banning based on no evidence.

What’s needed is a test case about content removal that focuses on discrimination against an identity group (race, religion, gender). For example, members of one race banned for saying things that members of a different race can say with no sanction.

I can’t answer the question of whether people on the right should use antidiscrimination laws to challenge CDA immunity. Is it ethical to take regulations that already bedevil brick-and-mortar businesses and expand them so that they similarly hobble online entities? Wouldn’t B&M establishments love blanket immunity when it comes to removing customers? Should we fuck up the one remaining marketplace in America where that immunity exists? I mean, yeah, at present it smarts because the internet giants are run by leftists. But a libertarian would say that the remedy is to start our own social media platforms. If we eventually develop a right-leaning Twitter or Facebook, won’t we want that immunity?

On the other hand, if we agree that society suffers when vigorous debate is silenced and ideological hegemony enforced, we have to ask ourselves, which is the greater long-term risk? Expanding already-existing government regulations, or watching the slow death of the free and open exchange of ideas? Which is the greater sin—doing something, even if it means suspending previously cherished principles, or doing nothing, and leaving the game to the team that never respected principles in the first place?

Those are questions I can’t answer for you. What I can say is, using federal antidiscrimination laws against social media giants is as good a plan as any, and even a recognized scholar who hates the idea admits that he can’t rule out that it might work.

And speaking of work, mine is done on this topic. The rest is up to any bright, ambitious attorney who decides to give this strategy a try. Should that happen, it’ll be damn fascinating to see the results, however they shake out.
