As a follow-up to yesterday’s post regarding privacy, the 6th Circuit Court of Appeals laid bare a flaw in the Communications Decency Act of 1996, which grants websites immunity from liability for content uploaded by individual users. Apparently, it’s a license to exploit people. The case involves former Bengals cheerleader Sarah Jones and defamatory material uploaded to the gossip site thedirty.com. Jones claimed mental anguish stemming from posts about her sexual history and that of her ex-husband, sued Nik Richie, owner and publisher of The Dirty, and was awarded $338,000 by a federal court. The appeals court overturned the ruling on the grounds that Jones should not have been able to sue Richie in the first place, owing to the protections afforded him by the Communications Decency Act.
Perhaps the 6th Circuit Court of Appeals ruled appropriately according to the letter of the law, but with regard to the spirit of the law, we’ve lost our goddamn minds, and the law needs revision. Given the number of news-format sites that crowd-source content (i.e. nearly all of them), the number of sites that trade in salacious garbage (i.e. way too many of them), and the fact that everybody is fair game, it is simply insane to provide blanket immunity to website owners who profit from the misery of others. But then, Sarah Jones is an attractive cheerleader, so I guess she deserves it, right?
It’s a world gone mad.
See article on the case here.
David–
The effect of section 230 isn’t news. It has been well known since the law was passed in the ’90s, particularly after the Supreme Court killed off the other major provisions of the CDA, relating to obscenity and indecency.
The way that state defamation laws tend to work is that not only is the author of a libel responsible for it, but so is the publisher of the libel, a label which was being applied to ISPs that provided services where users could post messages, much like the comments on your stories; that would make you the publisher around here. The courts were still in the slow, arduous process of adapting to the new technologies when a fairly significant case, Stratton Oakmont v. Prodigy, came down. Basically, someone on the old Prodigy BBS, which was owned by IBM and Sears, IIRC, posted a message claiming that Stratton Oakmont, an investment firm, had engaged in criminal and fraudulent acts.
Stratton Oakmont was very concerned about their reputation, and sued, not the user, but Prodigy (the deep pocket) on the basis that Prodigy was a publisher. And the court found that Prodigy was in fact liable, in no small part because they had guidelines for what could be posted, moderators exercising editorial control, and filters for crude language, all of which are commonplace features all around the Internet today. Even filtering out comment spam might give rise to publisher liability, if the issue were to arise today.
Happily, it does not; section 230 is an absolute keystone of free speech online. It protects the operators of services, such as yourself, from most liability caused by users. In the case of a large or busy site, which can’t be policed without an investment of resources that might not be viable, it’s absolutely essential, lest content giving rise to liability be posted without the owner realizing it. The EFF likes it, the ACLU likes it, and honestly I can’t say I’ve heard of the owner of a site that permits comments saying that he doesn’t like it, at least until now.
What section 230 doesn’t do is protect the actual person who originated the libelous speech; only those further down the chain are protected. So the plaintiff in this case should’ve sued (and if the statute of limitations hasn’t run out, still can sue) whoever it is that is actually penning these messages about her. It also doesn’t protect against things like copyright infringement, which is why the DMCA safe harbor came onto the scene just a little later.
The ability to allow users to post comments, and to edit them, without suffering liability for them is exactly what the spirit of section 230 always was, and it fit into the otherwise horrible CDA pretty well, in fact. And the letter and the spirit of the law are so unambiguously clear that it’s shocking that the case even had to go as far as it did. After a number of well-known cases involving section 230, most people don’t even bother trying to go after websites now.
The chilling effect of losing section 230 would be to cause every US-based site that allows users to post content (from Twitter, Facebook, and YouTube to WordPress, Yelp, and Amazon’s reviews) to shut down immediately, or to ditch anything that users might have posted (which would range from everything to nearly everything of note) and become far less useful to everyone. It would be a disaster for free speech, and God only knows how much benign, legitimate commercial activity would shut down because it could not be insulated in a cost-effective manner from activity that might give rise to liability.
As for Stratton Oakmont, which more or less started this: they won their case against Prodigy, but it turns out that they probably should’ve lost. They engaged in a massive amount of crime and fraud. If you saw the movie ‘Wolf of Wall Street,’ that’s about Stratton Oakmont. But by going after only the ISP as publisher, which had no information upon which to know whether the claim was true or not, which didn’t really care, and which just wanted to minimize its liability, Stratton Oakmont managed to preserve a good reputation it didn’t deserve and get a nice check.
Free speech sometimes means ugly results. We have to put up with porn. We have to put up with people expressing stupid opinions which we know to be wrong. We have to tolerate the existence of advertising. We have to put up with Illinois Nazis marching down the street. The existence of deplorable yet popular sites like the one in this case is no different; it’s the price we have to pay for something better. We have to stand up for them, or else we put ourselves at risk. And it’s not as though the plaintiff here didn’t have a remedy against the actual person who was defaming her; she just chose to ignore him.
If you can’t hold politicians, the rich, and the powerful to account without writing shit about ordinary people, then you’ve not quite got it right.
Anonymous, there is no question that you are right about the law and about free speech in general, and I do appreciate your posts. Certainly, I don’t quarrel with the concept that free speech means sometimes you get offended, etc. I also did not mean to suggest that Section 230 ought to be eradicated or anything so mad as that. I do, however, believe that we’ve entered new territory that goes beyond protecting (let’s say) unpalatable speech in the name of protecting all speech, a principle I’ve long defended. We have a new paradigm in which there is a strong profit motive in potentially causing harm to an individual, who likely has fewer resources than a movie star being smeared by a supermarket tabloid. The user who initiated the harm might be anonymous or incapable of making reparations, while the site owner is free to earn whatever click-driven revenue the slanderous or salacious material generates. What concerns me is the blanket protection afforded the site owner without any room for making distinctions in what is clearly a new technology-enabled phenomenon. I wouldn’t dream of removing the protections, but perhaps in light of the fact that some average citizen could be severely harmed by a business model that didn’t exist at this scale 10 years ago, there could at least be grounds for a suit to proceed.
When people are unable to get the law to address grievances, some will take out their own form of private justice.
David–
“We have a new paradigm in which there is a strong profit motive in potentially causing harm to an individual, who likely has fewer resources than a movie star being smeared by a supermarket tabloid.”
I haven’t used the site myself, but I suspect that the profit motive doesn’t involve any specific victim so much as it involves an aggregation of victims. Further, there is some degree of security in obscurity; I would hazard a guess that most of the victims on the site are only of interest to the very same people posting about them, and maybe a few others. Absent something more, I could hear all about the private lives of total strangers and not care at all due to a lack of any connection to them; gossip is more interesting when it’s people you know. Traditionally, commercial gossip has had to involve celebrities, since a wide base of people know who they are and are interested. What the Internet has enabled, apparently, is a long tail of commercialized gossip.
“The user who initiated the harm might be anonymous or incapable of making reparations, while the site owner is free to earn whatever click-driven revenue the slanderous or salacious material generates.”
True. But merely making money is not the touchstone of liability for defamation. The traditional split has been between publishers and pass-through distributors (bookstores, newsstands, etc.).
(Although I should point out, and it’ll be relevant shortly, that these are being used as terms of art here, and are very confusing. Technically, what we’ve got are only publishers in the broadest possible sense, in that they provide works to the public. Both what we think of as publishers and bookstores fall under this aegis. Then within that category, we split it into publishers and distributors, because the broad rule of libel would otherwise act against distributors, and we’re trying to craft two different standards of liability.)
A publisher is as liable for libel as the author is. A distributor is not liable unless it has, or should have, knowledge.
A publisher is closely involved in the preparation of a book for publication. It has early access to the MS. It can ask the author to provide supporting material to satisfy it that the claims are legitimate. It knows who the author is (no one just prints an anonymous MS that shows up in the mail). It has a deep commercial interest in the work, since it probably won’t be able to find buyers for it if it’s inaccurate. And it can take all the time it wants.
A distributor is not involved in the creation or publication of the book at all; it’s just a distributor. It doesn’t have early access, it doesn’t have a chance to talk with and verify the author’s claims. It doesn’t have much of a commercial interest in the book; if it doesn’t sell, it’ll get remaindered and something else will go on the shelves. And the staff of the distributor doesn’t have time to read everything that comes in the door, research it to verify that it’s accurate, and only then put it on the shelves.
Most sites that users can post their own content to resemble distributors; it’s as if the Internet is a consignment shop for pamphlets and such. There’s no checking up, there’s no editorial oversight, there’s usually no profit which is attributable to particular works as opposed to the site as a whole.
So the way the difference between publishers and distributors plays out is that both are liable for libel, but the latter is only liable when it has or should have knowledge of the libel. Both are ultimately publishers though, and have some degree of liability.
The Zeran court found that when section 230 protected publishers, it did so in the broad sense, not the narrow one. Which means that even if the site has actual knowledge, so long as it isn’t the author, it is protected. This actually makes sense, in that otherwise you’d have authors enjoying no immunity, publishers with editorial control enjoying immunity, and passive distributors not enjoying immunity once put on notice. It would be bizarre to have more protection for more involvement in the libel!
I’d suggest reading through Zeran, it’s a leading case on this subject, and it’s like 15+ years old or something, so again, this is fairly well settled law.
“I wouldn’t dream of removing the protections, but perhaps in light of the fact that some average citizen could be severely harmed by a business model that didn’t exist at this scale 10 years ago, there could at least be grounds for a suit to proceed.”
At a minimum I know 4chan existed 10 years ago, and there can’t be many places worse than that. (Knock on wood)
Anyway, no, the law is really clear on this. Here’s the operative bit from 47 USC 230(c)(1):
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This has even been applied to protect people who wrote down slander which hadn’t been online at all. It’s a serious protection. Feel free to suggest ways around it, though I remain unconvinced that it would be good to get around it.
Anonymous, thanks for spawning an intriguing hypothetical exercise, if nothing else, and for pointing me to Zeran. No question it’s interesting stuff. I’ll grant that the statute is pretty airtight, and you’ll have to take my word for it that I care about free expression first and foremost. But since you posed the question, I like the challenge.
To frame the discussion, I would start by disagreeing slightly with your assessment of the value of gossip in the present market. Once upon a time, the subject had to command fairly wide interest, but an NFL cheerleader is already several steps below even the lowliest of Kardashians. In the current market and current state of the technology, I would argue that even relatively localized relevance can be good for a few days’ worth of high traffic to a site. As a hypothetical, let’s take a high school math teacher in Anytown USA and a disgruntled student who sucks at math and is bitter about it. She’s conveniently young and kinda hot. He can pull down some photos from social media or (speaking of 4Chan) doctor some suggestive photos and launch a smear campaign on flummery.com claiming she’s having sex with students. In the best of all possible worlds, everyone acts appropriately, school officials demand proof, and the kid is caught committing a crime. In reality, though, we know there are places where her guilt might be presumed without proof and her job, even her career, would be in jeopardy without due process.
Just a note on 4Chan & timing: I believe that 10 years ago, 4Chan was still a gamer fan site and had not yet become the cesspool that it is today. Regardless, the traffic to so many sites that trade in short-attention-span dross would simply not exist without the critical mass of mainstream social networks like Twitter and Facebook, and Facebook only launched at Harvard in 2004. As such, I think my assertion that sites akin to The Dirty have grown over the last ten years is fairly accurate.
So, if you might agree that sites of this kind have expanded over the last decade, one way in which I would argue that we’re in a new paradigm, different even from 1996, is that given the right circumstances, an average citizen can harm another average citizen with relative ease by means of a website whose bread and butter is the salacious and the tawdry. (And for what it’s worth, even mainstream sites like The Huffington Post now use titillating links in the sidebars and footers to a lot of nonsense in order to increase time on site, I’m guessing.) The point I’m trying to make is that a relatively new model exists for the kind of shenanigans described in my hypothetical teacher example, and the site owners actually have a profit motive in what might be the most egregious abuses. So, the question is whether this implies grounds to update Section 230, or whether that is simply too dangerous.
If you and I were a pair of legislators looking at the issue, I’d hope we might agree on three principles: 1) that free speech comes first; 2) that a person or entity which profits directly from real harm done to an individual might be held liable in the public interest; and 3) that society has no compelling interest in the survival of any one particular site akin to The Dirty. Its right to exist is paramount to free expression, but if it’s sued out of existence on legitimate grounds, we haven’t lost anything of value.
So, my question is whether there is a compelling public interest in defining a new class of website that sits somewhere between the broad “Interactive Computer Service” and an entity as extreme as a revenge porn site, which California appears ready to clarify as being every bit as illegal as it is morally repugnant. Certainly, it feels counterintuitive that a publication which employs journalistic standards can be potentially liable, while indemnification is granted to the owner of a site that is fundamentally automated and subject to little or no human oversight. Granted, that lack of oversight is part of the reason for the indemnification, but it is also a way of saying that it’s okay to profit off the harm done to another as long as the harm is done blindly through a machine.
I don’t know at the moment how I would propose defining this class of site, and my only interest in this exercise is opening a narrow portal through which the site owner could be held liable. Still thinking, but I welcome your thoughts.
David–
“As a hypothetical, let’s take a high school math teacher in Anytown USA and a disgruntled student who sucks at math and is bitter about it. She’s conveniently young and kinda hot. He can pull down some photos from social media or (speaking of 4Chan) doctor some suggestive photos and launch a smear campaign on flummery.com claiming she’s having sex with students. In the best of all possible worlds, everyone acts appropriately, school officials demand proof, and the kid is caught committing a crime. In reality, though, we know there are places where her guilt might be presumed without proof and her job, even her career, would be in jeopardy without due process.”
Was that a joke? It’s just that this is perhaps not the most hypothetical of hypotheticals, given that Jones, the plaintiff in this very case, was also a high school English teacher who wound up being convicted of having had sex with one of her students, a minor. But we can ignore that, as the defamation against her was as to a separate matter, and occurred beforehand.
Anyway, I think that the issue here then is not with the underlying offense, but with a lack of due process. Most people are employed at will and can be fired without any reason at all, which amounts to being able to be fired for any reason at all. Even though some reasons are prohibited, it can be difficult to show that that is the reason at issue. While of course it’s of no help for mere reputation (people can and do form long-lasting opinions about others instantly and often with little thought, and there’s not much we can do about that), in a more formal context, giving people the chance to explain, to offer counter-evidence, and to have a full hearing is useful not only to gather more evidence but also to let everyone calm down so that decisions can be made coolly and rationally. I’d certainly welcome such a wholesale change to US labor law.
“Just a note on 4Chan & timing: I believe that 10 years ago, 4Chan was still a gamer fan site and had not yet become the cesspool that it is today.”
A moment’s quick googling produced this: http://tanasinn.info/wiki/Complete_History_of_4chan
I have no idea if it’s accurate, but it certainly seems thorough. Anyway, it seems that the idea for the site came along on September 29, 2003, and the domain was registered that day. The site actually opened on October 1. Here are the relevant entries:
“September 29 – moot registers 4chan.net, just because it looked like 2chan.net (Futaba Channel) which was an ADTRW pastime. Originally, his only intention was to own an e-mail address @4chan.net, but he quickly gets thinking.
regging 4CHAN.net
FOUR CHAN
brace for faggotry
September 30: Moot announces 4chan.net in Something Awful’s ADTRW forum. Thread is titled ‘4chan.net – English 2chan.net!’
October 1 – 4chan is founded by moot, a member of the Something Awful forums, intended to be used as an English version of 2chan, a Japanese imageboard created in 2001 out of an extremely popular Japanese BBS called 2channel. moot creates /b/ (Anime/Random) and makes a topic at Something Awful and world2ch about the website, which is received extremely well. moot also holds a contest to decide what 4chan’s logo should be. In three hours of creating the contest, around 50 banners are submitted, after which moot decides to simply make it so that it would cycle through the best banners randomly every time a page was loaded.
October 2 – 4chan’s hosting company receives an e-mail complaining about ‘lolikon and guro posted in /b/,’ to which moot writes back that neither of the two is illegal. Later in the day, moot creates a second board, /h/ (Hentai). The email was sent by Shii/Menchi because the people of world2ch felt 4chan ripped them off (Which is inaccurate, since world2ch was a sister site to 2ch and 4chan was a sister site to Futaba/2chan) moot later befriended them. The rules page describes /b/ as a test board which will later become an anime board. moot describes /b/ in a news post as ‘a retard bin’.”
I think we can probably just say that it went from 0 to Horrible in record time.
“If you and I were a pair of legislators looking at the issue, I’d hope we might agree on three principles: 1) that free speech comes first; 2) that a person or entity which profits directly from real harm done to an individual might be held liable in the public interest; and 3) that society has no compelling interest in the survival of any one particular site akin to The Dirty. Its right to exist is paramount to free expression, but if it’s sued out of existence on legitimate grounds, we haven’t lost anything of value.”
I’m not sure that I’d agree with your second point (and the way in which it interacts with the third), but then I have long flirted with First Amendment literalism.
In a traditional public forum, like a sidewalk, or a park, people can speak freely to one another, often do, and we all seem to celebrate this as a good thing. While run by private entities, online sites where users can post material serve a similar function.
Imagine that someone speaks slander, or distributes libel while standing in Boston Common or the National Mall. We would not for a moment hold the venue liable for the actions of the person using it. Not even if the venue had in some way aided the individual in question (e.g. by being a place where lots of people go, therefore being a good place to distribute information from, or even if the venue specifically advertised ‘If you’ve got something to say, come say it here’), nor if we ignored issues of sovereign immunity. If you’d see websites shut down for failing to thoroughly and perfectly police their users, why would you not also want public venues shut down where equally defamatory material can be spread?
“and an entity as extreme as a revenge porn site, which California appears ready to clarify as being every bit as illegal as it is morally repugnant.”
California is not the final arbiter of whether it’s illegal or not.
Assume that a particular image constituting revenge porn is not a selfie (which would give rise to a copyright issue), nor of a minor (which would give rise to a child pornography issue), nor has been altered (which means that it would have a defense of truth against defamation), nor is obscene (which has a specific legal definition too long to be worth getting into here, but I assure you that not all porn qualifies as obscene), nor uses the person’s image for commercial purposes (which means that there’s no publicity rights issue).
This leaves what? Tort of disclosure, maybe? That’s like betting on the boniest horse to win the race. It might work, but you want something better. That might be hard. In US v. Stevens, just a few years ago, the Supreme Court found unconstitutional a law that criminalized the creation, sale, and possession of so-called crush videos, which are apparently where people torture and kill animals on screen (e.g. stepping on them), usually for the benefit of people who get off on that kind of thing.
“Certainly, it feels counterintuitive that a publication which employs journalistic standards can be potentially liable, while indemnification is granted to the owner of a site that is fundamentally automated and subject to little or no human oversight.”
Well, section 230 is a blanket protection for the online sphere, except for the person who originates the defamation (and some other categories unrelated to defamation). While it protects sites that don’t police themselves, it was actually originally intended to also protect sites that did police themselves (inadequately, I guess). The Communications Decency Act was an attempt to ban indecency from the net, and part of that was encouraging sites to self-censor, protecting them from suit where they had taken action. The Court overturned the really bad parts, but section 230 escaped. Anyway, the point is that a site that did employ journalistic standards would be equally protected. And indeed, many sites do have some degree of moderation, and section 230 helps them too.
“my only interest in this exercise is opening a narrow portal through which the site owner could be held liable”
I don’t think that’s a good approach to take; you’re prioritizing the punishment, not the rationale.
Instead, we should look for the solution that best protects people from suffering harm due to defamation. A culture of incredulity would work well without limiting speech. Obviously it’d be difficult to accomplish, and people should choose for themselves, but it would solve the problem. Picking a remedy and trying to cram it in may be easier, but I think that a higher-level approach to the underlying issue is better, if harder and slower.
A few quick responses while I have time. The first is that you spent a fair bit of real estate proving the exact moment when 4Chan became what it is, but in the effort you skipped my main point: that the opportunity for sites like The Dirty has grown over the last decade. Second, I get the Boston Common/website analogy, with the exception that Boston Common doesn’t earn any money from someone’s slander. I would compare a website to a public space more like a mall, which feels public but is actually private. If I put racist posters on the mall wall, the mall owner isn’t responsible, unless he leaves the posters up there and then goes so far as to profit from the increased traffic of local haters. Third, if revenge porn isn’t established as a crime in this country, I may defect, but that’s another matter. I suspect the most serious offense by at least one site owner is the attempt to extort money from victims in return for removing the photos.