Last week I wrote a TechCrunch article about Section 230 of the Communications Decency Act, the U.S. law that many argue is responsible for the existence of “Web 2.0” in its current form.
Simply put: I don’t like it.
It’s the law that states that no one who runs a website or online service of any kind counts as a “publisher” in the old-school sense of the term--no one who hosts content online is responsible for content other people create, even if the content is libelous, even if it’s harassment, even if it contains threats.
If you want to sue someone for something they say on the Internet, you have to find the original creator. Facebook, Twitter, Yik Yak, or whoever owns the site and profits from the content bears no responsibility--not even for helping you find the person. Court cases have decided over and over again that websites are free to host anonymously contributed content and bear no responsibility for making the anonymous contributors difficult to find.
That was a problem in 2007, when the first big modern Internet case about widespread harassment on an anonymous forum broke: the AutoAdmit case, in which women who had vile rumors and smears spread about them could do nothing against the site that spread them and had to laboriously track down each individual poster hiding behind a pseudonym.
It’s worse today, with the proliferation of imageboard-style services designed to maximize the anonymity and minimize the accountability of their users, hosting communities for the express purpose of spreading personal information, organizing targeted harassment and in some cases trying to get people killed.
Many people besides me have expressed concern about how Section 230 enables crime on a massive scale. 47 State Attorneys General in 2013 talked about the rampant spread of child pornography and advertisement of child sex services on sites like Backpage and Craigslist. More recently, we’re seeing the Jane Doe No. 14 v. Internet Brands, Inc. case work through the system, in which current case law seems to state the website ModelMayhem bears no responsibility at all for hosting a ring of criminals pretending to be casting agents plotting to drug and rape models on the site.
On the other side you have all the tech advocates saying that the legal system is a quagmire, that opening up the door to more lawsuits will have a “chilling effect” on speech, and that Facebook and Twitter and many other blue-chip companies simply couldn’t exist if they had to vet all their user-generated content for liability.
Among the people who came out to yell at me on this issue were EFF, Techdirt and the Popehat legal blog. And I admit, putting an article about increasing the scope of legal liability for people who run tech companies on an Internet run by tech companies was, as they say, asking for it. It’s like going into Vatican City and setting fire to an icon of the Virgin Mary, or venturing into rural Ohio and blowing your nose on the Second Amendment.
Let’s clarify a few things. It’s totally possible to be in favor of limiting liability for platforms without shielding them from liability completely. That’s the standard in most countries other than the United States, which is why Section 230 is to Internet speech as the Second Amendment is to gun rights--we treat as an immutable freedom something most other countries see as a negotiable one, and in doing so we’ve made ourselves a liability haven.
As for whether my article poses an immediate threat to free speech online, well. The financial incentives to keep Section 230 exactly the way it is are enormous, and are incentives held by some of the biggest, best-connected companies with the best track record of getting special treatment from the government. So much like repealing the Second Amendment, I have to accept the depressing truth that repealing Section 230 isn’t going to happen anytime soon, while still doing the best I can to move the Overton Window.
I don’t particularly advocate the immediate and total destruction of all Web 2.0 sites, although I do stand by my irritated response to Twitter critics that most of Web 2.0 is garbage anyway. (A sentiment that in other contexts many of my critics agree with.)
What I mostly ask for is moderation. In both senses of the term.
The problem is Section 230 is defended primarily by people who truly believe in the dream of Web 2.0 (which by necessity includes everyone who works at a Web 2.0 startup). It’s a dream of a kind of alchemy, that if you just make code that makes it “effortless” and “addictive” for users to churn out content, and you sit back and let the market decide which content rises to the top of the heap, you will eventually get the equivalent of a well-edited, well-curated, intelligent and thoughtful Web 1.0 publication without having to actually read or write a damn thing yourself.
It’s an intoxicating image, one that’s deeply attractive to investors who want to make huge amounts of money for very little work. It’s one that, I’ve argued, is mostly false, especially for the companies who’ve been most deeply invested in a pure free-speech model.
You can only push your users to perform the unpaid labor of content creation, curation and propagation so far based on the imaginary rewards of “karma” or “+1”s. When this leads to your content mostly being stupid memes or repetitive magic spells to protect your copyright or low-quality clickbait, that’s annoying and depressing but not a matter for the law.
When it leads to people abusing, harassing, stalking, doxing and swatting each other, though, it’s another matter. Tech companies love to brag about the absurd ratio between the size of their userbase and the size of their staff like that’s a good thing--but when it means Facebook’s anti-abuse police for over a billion users are a tiny staff of contractors in Morocco paid $1 an hour, it’s not really a plus.
I always tell people who lose their temper with the futility of reporting abuse on Twitter or Facebook not to take it out on the content managers themselves--those poor 21-year-old kids were hired to do an essentially impossible job. An anti-abuse initiative that was actually effective would require a well-paid, well-trained abuse team that would eat up a lot of the rapid exponential growth tech gurus like to sell to VCs.
Instead, we have “anti-abuse theatre.” We have platforms that treat anti-abuse as, essentially, a marketing/PR expense, since there’s no actual litigation risk they’re offsetting. Their job is to do exactly as much as it takes to make it look like they’re doing something. Worse, these companies justify spending so little on anti-abuse by straight-up gaslighting victims--repeatedly telling them their reports of abuse don’t qualify as abuse, so that the companies can go on claiming abuse is a minor problem on their platforms and one they have well in hand.
I’m not okay with this. I don’t see this improving without a fundamental change to the business model of Web 2.0. The increasing size of the userbase and the increasing effortlessness of publishing and propagating information can only make the problem worse. Twitter is designed to let things go viral much faster than Facebook, which is why it’s so addictive and also why it’s so destructive. The successor to Twitter will be whatever platform they come up with that’s even more frictionless, that reduces the “thought-tweet gap” to an even smaller fraction of a second.
Yes, in the past I’ve defended “outrage culture”; I’ve spoken positively about the changes wrought by Web 2.0, like how Twitter enabled #BlackLivesMatter to emerge as a movement. But even the most positive examples of Internet outrage I can think of were disturbingly casual about collateral damage. I’m still haunted by the time I retweeted a post misidentifying the man who shot Mike Brown and put a family in fear, though in my defense that was less egregious than Spike Lee tweeting full dox (including address) of the wrong George Zimmerman.
Web 2.0 moves incredibly fast, and incredibly recklessly. It does so because it’s allowed to do so, because it’s easy for individual posters to hide behind a mask of anonymity or, even if they’re not anonymous, to get overlooked in a sea of voices.
Twitter and Facebook didn't create the idea of grassroots protest, as much as some tech VCs like to pretend they did. They enable the kind of "weak-tie activism" that can be and has been built into powerful political movements with effort and leadership--but they also enable grotesque missteps like doxing an old man who shares a killer's name, that send those movements flying off the rails.
And the same law that enables activism creates ample room for purely destructive applications--using the cloak of anonymity to stalk and bully people physically close to you, publicly "rating" human beings using the same system Yelp uses to enable intimidation and retaliation against businesses, organizing whole forums around clever techniques for being a peeping tom, or just egging on a possible mass killer for the hell of it knowing nothing can happen to you if you do.
This isn't just "how the Internet works." This is how we built the Internet. Had we chosen to do so, we could’ve passed a Section 230 regulating print publishers, and allowed newspapers and magazines to print as many anonymous articles containing salacious, defamatory content as they wanted. It would likely have been quite profitable, despite lacking the “scalability” of online platforms--but we didn’t allow this because no one argued that this increased “freedom of speech” would outweigh the societal harm. Even so, defamation law has hardly succeeded in turning print media into an Orwellian dystopia where no one ever says anything controversial or harmful for fear of being sued.
The ultra-rapid, zero-accountability Web we’ve built--the one responsible for pretty much everything Jon Ronson decries in his book--was created by a legal shift masquerading as a technological shift. It’s not the only time this has happened. The whole tech industry is largely founded on finagling a business model based on brazenly ignoring existing laws or regulations on the grounds that they simply don’t count anymore if you’re using the Internet.
Xerox machines, VHS bootleggers and so on all existed before the Internet, but it was the “effortlessness” and “addictiveness” of Napster that convinced a bunch of people who previously thought buying bootleg videotapes was a sketchy thing to do that copyright was now obsolete. The heavy regulation of the taxi industry may be a mistake; that doesn’t make Uber’s claim that their cars simply aren’t taxis and have the right to pretend the regulations don’t exist any less bullshit. You might think it’s unfair that your landlord stipulates in your lease you can’t sublet; that doesn’t make AirBnB encouraging people to violate their leases en masse because they don’t apply to “disruptive” services that run on an app any less of a dick move.
The toxicity of the modern Internet is a set of legal choices we’ve made disguised as inevitable technological progress. Ideas like the “Streisand effect” and “The Internet never forgets” aren’t some kind of immutable law of physics, they’re the result of a series of cumulative legal decisions we’ve made, often without being aware we were making them. And sure, we can laugh about the Streisand effect and how the Internet never forgets when it’s a wealthy celebrity mad about photos of her house--it’s less funny when it’s a regular person getting their own “Pandora’s dox” blasted all over Twitter and remembered forever.
All this is to say that as a child of the Internet, a '90s kid who remembers being plugged in enough to actually care when Section 230 passed, a Twitter early adopter and all the rest, if you really confront me with the binary choice between the status quo and “No Web 2.0, at all” I’m willing to think the unthinkable, that maybe all of this was a mistake. If push came to shove I’d grudgingly say I’d accept a return to the gatekeepers of Web 1.0 if it meant the daily horrorshows of Web 2.0 came to an end.
But, of course, nothing is so black-and-white. There’s a lot of ways to change or reinterpret Section 230 without giving ISPs and hosts the full liability a print publisher has. One possible solution that’s been brought up in courts is challenging the 1997 Zeran v. AOL ruling, which held that Section 230’s shield against “publisher liability” must also encompass “distributor liability.”
Publishers, who participate in creating content, are liable for defamatory content even if they take it down when asked. By contrast, distributors, who are seen as only providing access to the content, are shielded from liability as long as they swiftly remove access to that content as soon as they are made aware of it.
The concept of “distributor liability” is the basis of another well-known bit of Internet legislation, the Digital Millennium Copyright Act (specifically Title II of the DMCA), which is specifically designed to make copyright law practically enforceable without exposing tech companies to too much legal hassle.
People can, and do at great volume, argue that the DMCA is flawed and much abused. But it remains the only form of meaningful copyright enforcement online at all--and despite how powerful its detractors claim it is, copyrighted material is still quite easily available online for people who know how to find it.
I don’t really care about taking sides on that issue though. I don’t care much about copyright at all, while I do care a great deal about defamation and harassment. I think doing the tech industry “Better to ask forgiveness than permission” thing to subvert laws against destructive and harmful speech is way creepier than doing it to subvert laws about intellectual property ownership, and yet DMCA is one of the most heavily entrenched and scrupulously obeyed laws while the only specific law regarding defamation or harassment for platforms is “None of our business.”
If I were an anonymous blogger I’d find it far safer to libel or encourage violence against people I don’t like than making videos ripping off someone else’s IP. There’ve been far more YouTube shows threatened over fair use disputes than over harassment or defamation, despite harassment and defamation being some YouTubers’ bread and butter, because DMCA takedowns are much faster and more effective than trying to track someone down to serve them with a suit.
This set of priorities is completely fucking backwards.
Compare how the European Union--that oppressive, communist hellhole--explicitly carves out a DMCA-style safe harbor provision in Directive 2000/31/EC, the EU’s guiding law on Internet services. The difference is that it applies to all illegal content, not just copyright infringement--the platform is shielded from any obligation to actively monitor content in return for the promise that, once made aware of illegal content, it acts swiftly to take that content down.
It’s hard to see how this is objectionable compared to the repeated rulings that Section 230 says platforms are free to do nothing at all even when they’ve received complaints and have knowledge of defamatory content. (I find the first and most momentous of these cases, Kenneth Zeran’s case, to be most troubling, especially given the recent revelations about similar stunts uber-troll Joshua Goldberg got away with for years with a similar total lack of redress for the victims.)
It’s certainly not quite the horrifying vision of a return to Web 1.0 some people paint whenever you breathe any criticism of Section 230. I do have to admit, though, that it isn’t costless. It’s apparently costly and difficult enough for Twitter to conduct harassment investigations that when the Most Hated Man in America gets someone’s phone number and puts it up telling people to threaten him the doxing tweet stays up for days. The fear of having to hire anti-abuse staff that actually do the jobs Terms of Service say they will is apparently great enough that social media companies continue to base themselves in America for the liability protection despite the many incentives to go elsewhere.
Section 230 is an enormously beneficial law for tech companies--writing on the Volokh Conspiracy, David Thompson, general counsel for ReputationDefender, calls it a “subsidy for libel.” It also serves as a subsidy for platforms that enable abuse, allowing them to benefit from the extra traffic and clicks abusive content provides without having to pay the basic cover-your-ass costs they’d have to pay in any other jurisdiction in the world.
We’ve arguably benefited from this subsidy--like any other subsidy it’s greatly accelerated growth, given us an economic advantage over our allies, and let anonymous people tweet and blog and post with wild, reckless abandon.
But all subsidies end up costing somebody somewhere. This past year we’ve done little else but talk about that cost. Books describing the problem in gory, lurid detail shoot up the bestseller lists, despite containing nothing approaching a solution. The famous cases we namedrop in thinkpieces are only the tip of the iceberg.
I would love for companies to voluntarily beef up their anti-abuse tools and policies to a point where they’re halfway effective. I don’t see it happening as long as they get that sweet subsidy that lets them reap the short-term benefits of hosting trolls while pushing off the long-term costs on random individuals.
I don’t know which proposal is best. You have Thompson’s proposal to require recordkeeping by platforms, Franks’ proposal to narrow the definition of a “platform,” the EU’s proposal for a safe harbor in return for honoring takedown requests, Zeran’s even more generous proposal allowing only takedown requests from law enforcement. Or there’s the nuclear option of restoring full publisher liability. I doubt that one will ever happen but, on days when Web 2.0 is particularly bad, I like to fantasize about it.
But the current state of discourse on US-based online platforms is, shall we say, overheated. It’s impossible to bring us closer to the rest of the world--to basic sanity--without exerting a so-called “chilling effect.” Given the tremendous power individual abusers currently have to ruin lives with impunity, some degree of “chilling” is necessary and, I would argue, inevitable.
The same people who argue that speech on the Internet is so valuable it mustn’t be regulated or restrained by any law turn around and argue that the monstrous abuse the Internet runs on--the overheated crackling runaway trash fire that is modern social media--is “just words,” “just hurt feelings.” You can’t have it both ways.
Sooner or later, people will decide they’ve had enough. I still think that a centralized regulatory regime is logistically unworkable and ethically a bad idea, but if companies have no incentive to police themselves due to liability, a fed-up public will eventually try to create one. As always in history, failure or refusal to self-regulate leads inevitably to regulation from above.
After all, what’s the alternative?
The alternative is a world where everyone is running around with a loaded gun, where any of us is a moment away from going down in history as a Nazi or becoming the permanent target of a hate mob because a mildly clever bored channer or angry ex had an endless supply of anonymous trolls they could crowdsource their “operation” to. The alternative, if platforms aren’t pushed into policing themselves by liability, is to watch the rapid escalation of abuse in just the past few years continue to spiral until we find out what the logical endpoint is.
Maybe I’ve misjudged the public and people really are devoted enough to their freeze peach to stay in the trash fire indefinitely. Maybe I’ll live to see what rock bottom really looks like.
I hope not.