The trove of hacked nude photos of female celebrities released this weekend was the capstone of a rough month for women on the Internet. In the past few weeks, there have been threats to media critic Anita Sarkeesian, violent rape GIFs proliferating on Jezebel, and trolls driving Zelda Williams off Twitter. That online abuse has tangible, real-world effects on the people experiencing it, ranging from stress and psychological trauma to reputational harm, is becoming clearer every day.
The commonly used terms “online bullying” and “harassment” suggest gender neutrality where none exists, and also lend themselves to the trivialization of what can be life-altering events. “Harassment” can include slut-shaming of teenage girls, proxy stalking, rape and death threats, revenge porn, cybermob attacks, malicious impersonation, rape videos used for extortion or shaming, and the sexual surveillance of women in public spaces. Free speech does not protect most of this activity.
The highly gendered character of most of this abuse, with primarily women being targeted, almost always by men, suggests that the spectrum of behavior could be considered a hate crime.
And now we have an all-too-timely new book, “Hate Crimes in Cyberspace,” by law professor Danielle Citron, about the differences between criminal harassment and what’s commonly referred to as “trolling.”
What was your first reaction to the news about the hacked photos?
Disclosing someone's nude images without consent is fundamentally an invasion of that person's sexual privacy. It is a destructive practice that is costly to careers, emotional well-being and physical safety. It is one way that harassers try to ruin victims' lives, and most often those victims are female. And as the recent hacks and exposure of celebrities' photos show, everyone is at risk, from the most powerful celebrity to the ordinary person -- the teacher, graduate student and business owner.
For years women have been coming forward with stories about online harassment that clearly show the real-world effects of virtual harassment, which is often casually dismissed with “Ignore the trolls.” Is there one case that illustrates particularly well the difference between trolling and cyberharassment?
My book deals with actionable harassment, not abuse that cannot be regulated (often called "trolling," a loose term). Consider one of the earliest cases, that of game developer and blogger Kathy Sierra, which involved repeated credible threats of rape and doctored photographs of her being strangled. A cybermob then posted her Social Security number and home address, along with defamatory lies about her. Whoever was responsible for those actions, even just some of them, would be treading on legal grounds -- we can regulate true threats, defamation and certain privacy disclosures such as the disclosure of SSNs (which is like publishing your bank account number). To be sure, some of the folks who doxxed Sierra were self-proclaimed trolls, but they nonetheless engaged in unprotected activity by spreading defamatory lies and publishing her SSN.
By contrast, there is the case of Zelda Williams. The person who repeatedly told her her father was ashamed of her may be called a troll, but that person is engaging in protected speech. So, too, the person who posted pictures of dead bodies. Even if that person did it repeatedly, it might not even amount to regulable intentional infliction of emotional distress, because Robin Williams' death could be said to constitute a matter of public importance, rather than a purely private matter.
In several recent cases, such as those of Caroline Criado-Perez, Adria Richards and Anita Sarkeesian, trolling and harassment were carried out by what you call a cybermob, largely made up of strangers for whom the fact of these women’s gender was not just salient but paramount. How do you define hate speech vs. crimes, particularly as they relate to gender, in your book?
What ordinarily comes to mind when people hear "hate crimes" and "hate speech" are acts and expression that send demeaning messages about a group -- not because of any particular individual traits, but purely because the targets are members of that group. Men are not typically harassed simply because they are men, the way women are harassed because they are women. So I think the idea of calling it a hate crime is a little under-inclusive, because I’m not just talking about crimes; I’m talking about civil rights violations and civil wrongs. We’re dealing with a serious problem that’s proscribable, yet we ignore it.
So much of online harassment of women, without question, is because they are women. It is sexually demeaning, it’s sexually threatening, it reduces the victims to basically their sexual organs, and sends the message that all they’re there for is to be sexually abused, used and thrown away, that they offer nothing.
One particular aspect of online harassment that you’ve spent a lot of time writing and thinking about is revenge porn and malicious online impersonation. You write regularly about how hateful and offensive speech enjoys First Amendment protections, but make a distinction between “trolling” and cyberharassment or cyberstalking. Can you elaborate on these differences?
Hate speech is fully protected speech. So if someone is saying all women are stupid, that they belong in the kitchen, that they all should be raped, that they are worth nothing, we would say, based on both firm First Amendment doctrine and free speech principles, that that is protected. But what is so fundamentally different about the type of online harassment I’m talking about is the targeted attacks on individuals. Those attacks might constitute true threats, intentional infliction of emotional distress, sexual invasions of privacy or defamation.
You cite several examples in your book, including one case in which a man pretended to be his ex-girlfriend and placed a false rape fantasy ad on Craigslist that resulted in her brutal rape.
Yes, these are crimes of extortion and solicitation. When someone is impersonated and made to appear interested in anonymous sex, and that leads to people stalking her, that’s a crime of solicitation. So the kind of speech we’re talking about is a subset of speech that, as a categorical matter, either already gets no protection or a much lower level of protection, because it isn’t higher-value speech about matters of public interest.
Many women leave online spaces to avoid hostility, some of which includes graphic depictions of violence against them or other women. When does trolling become actionable harassment?
Harassment as a whole, and the way in which it impacts individuals, fundamentally changes the arc of people’s lives -- economically, socially and politically. Trolling can turn into harassment. Say one of the members of the cybermob that went after Sierra had published her home address and called her names -- "you are an ugly whore," for instance. That alone would amount to trollish, abusive, protected speech. But if the troll then published her SSN and spread defamatory lies about her, it would have escalated into unprotected harassment.
Harassment includes true threats, privacy invasions, cruelty amounting to intentional infliction of emotional distress about purely private matters, nonconsensual nude images and reputational damage. People lose their jobs, and they can’t get new jobs, because Google searches turn up destructive information. They often have to move because they’re afraid of confrontation by strangers. They lose investments and professional opportunities. These are coercive acts. So I’m very modest, in fact. I’m not asking us to expand or change First Amendment doctrine.
Last year, when Laura Bates, Jaclyn Friedman and I undertook a campaign to make Facebook recognize misogyny on its platform, I realized that the chilling effect on women’s free speech represented by online hostility was not well recognized or understood.
We ignore the fact that there is free speech on both sides of the equation. On the one hand, we have the expressive interests of the harassers: to make rape threats, to post nude images of someone else without permission, and to spread defamation -- which is unprotected -- about purely private issues, in ways that hurt their victims' reputations. And on the other hand, we have the free-speech interests of the victims who are silenced, driven offline by threats. They withdraw from online life because they’re terrified.
The 1964 Civil Rights Act included gender, as well as race, national origin and religion, as a way in which individuals can be discriminated against, and deprived of rights and opportunities. Sometimes that discrimination takes the form of crimes or it takes the form of civil interferences, like the intentional infliction of emotional distress. But it manifests itself in depriving people of important life opportunities, because they’re a member of that group.
We have a long history of dismissing gendered harms. Think of domestic violence. We once thought of it as a man’s right to discipline his wife. It wasn’t a public problem; it was a matter of family government. We had police officers telling women to put their makeup on and make their men dinner so they wouldn't be hit as much. The court called domestic violence a “trifle.” That view persisted until the 1970s, when the women’s movement, or battered women’s movement, said enough is enough. The women’s movement of the 1960s and '70s said, look, this is not something women can prevent and cure; they can’t walk away from it, and they can’t be blamed for it. This is a public harm that we must take seriously. They named domestic violence for what it is, and explained its harms. Then they said, look, there are laws on the books; we just need to enforce them. They exist: assault and battery. We, as a society, changed our mind, and we changed the norm.
Intentional infliction of emotional distress is another example of how gendered harms have been marginalized. How has that affected the course of what’s happening online?
For decades we ignored emotional distress because [those who experience it were considered] "fragile," and by fragile we meant women. We started, in the 1940s and '50s, to recognize that intentional infliction of emotional distress was a real, concrete harm: a tangible injury we could measure, one that was serious and not frivolous.
The same is true for sexual harassment in the workplace. For so long the conversation was, hey, if women are going to work, this is just a perk that men enjoy. But women shouldn’t be forced to choose between keeping a job while enduring harassment and leaving and losing their livelihood. Feminists got judges to understand workplace harassment as an interference with someone’s equal opportunity to work, so it became a legal problem. We said, this is a problem for society; this is not the norm; we no longer think this is acceptable. And we stopped minimizing and ignoring women’s unique suffering in that way. We have to come to the same place with online harassment.
There is a great deal of debate over the role that social media and other Internet companies are playing, or should be playing, in regulating this speech. You work closely with many of these companies, and your book includes your recommendations for Internet governance. How do you see them working, ideally?
These are platforms that we use in every aspect of our lives. They are not just speech platforms; they are speech, and work, and play. I think we started off by recognizing that Internet intermediaries have incredible amounts of power and that they’re our access to an enormous amount of prosocial public discourse. They’re non-governmental actors and, as such, they can do as they see fit. Their platforms are private ... Some are doing their best to create positive cultural norms on their sites, because it’s bad for business to do otherwise, and frankly it’s immoral and destructive.
Given the fact that they can do whatever they want, I recommend that they exercise their enormous power with great care. I think it would be a mistake for them to simply adopt a strike-oriented approach, in which anything complained about is removed.
Simply removing content takes down a lot of speech unnecessarily, I think. And then you don’t help people learn.