Given that, Rutherford asked the group, “Do you see any other areas where those kinds of exceptions to the protections under 230 should be examined?”
Several seconds of silence followed. Facebook’s Monika Bickert stared blankly across the dais. YouTube’s Juniper Downs managed an uncomfortable, closed-lip grin. Twitter’s Nick Pickles avoided eye contact by scribbling who-knows-what on his notepad.
Rutherford continued, offering some options. “How about sedition?” he asked. Still, nothing but the sound of keyboards clacking filled the wood-paneled room.
It was not the first time Republicans on the committee questioned the merits of Section 230, which was passed as part of the Communications Decency Act in 1996. Throughout the hours-long hearing on social media moderation practices, during which the three tech companies were repeatedly accused of censoring conservative voices, Republican lawmakers made one thing clear: Decades ago, Congress gave tech companies sweeping power to decide what does and doesn’t belong on their platforms, and now they want that power back. But according to legal scholars who study Section 230, the arguments being lodged by Congress are based on a deeply flawed understanding of how that law works in the first place.
In his questions to the panel, Republican congressman Matt Gaetz asked whether tech companies can claim immunity from liability for other people’s posts under Section 230 while also claiming a publisher’s First Amendment right to restrict content on their platforms. “When you avail yourself to the protections of Section 230, do you necessarily surrender your rights to be a publisher or speaker?” Gaetz asked. “The way I read that statute now, it’s pretty binary. It says you have to be one or the other.”
That interpretation is flat-out wrong, says Eric Goldman, a leading Section 230 scholar at Santa Clara University School of Law. “That’s such a gross misreading of Section 230 it breaks my heart,” Goldman says. The reason Section 230 came into existence in the first place, he says, was to encourage tech companies in the early days of the web to remove objectionable content, with the promise that they wouldn’t be held liable if they happened to screw up in the process.
“The concern was if Section 230 didn’t protect those editorial decisions, services would choose not to do that socially valuable work,” Goldman says.
But Congress doesn’t need to take his word for it; it’s written right there in the law it passed. Subsection (c) of Section 230 promises “protection for ‘Good Samaritan’ blocking and screening of offensive material.” It reads, in relevant part:
“No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Nowhere in the statute does it say that tech companies are prohibited from making editorial decisions if they want Section 230 protections. Quite the opposite. “It urges companies to filter and to block offensive content,” says Danielle Citron, a law professor at the University of Maryland and author of the book Hate Crimes in Cyberspace. “Congress wanted to let companies filter dirty words and shield kids, without being held responsible for doing it incompletely. The idea that 230 requires neutrality is to fundamentally misunderstand it.”
The assertion that companies protected under Section 230 are no longer protected by the First Amendment is similarly misleading, Goldman explains. The First Amendment prohibits the government, not private actors, from restricting speech. As private companies and as publishers, tech companies enjoy that protection, too. Twitter is entitled to suspend an account for spewing neo-Nazi hate speech, just as a neo-Nazi site like Stormfront is entitled to ban commenters who preach equality. All the First Amendment says is that the government can’t intervene in those decisions. Changing that would require Congress to essentially recast these platforms as public forums, no different from a town square, where certain types of discrimination are prohibited. Though some high-profile court cases have flirted with the idea, none has ruled on it definitively yet.
“Private entities can engage in censorship. We call that editorial discretion,” Goldman says. On the other hand, he adds, “When the government tells publishers what they can and can’t publish, that’s called censorship.” In a way, by threatening to crack down on tech platforms that don’t moderate content to the government’s liking, Congress risks committing the very censorship it’s accusing technology companies of doing.
But Gaetz is right about one thing. “I just think it’s confusing when you try to have it both ways,” he said, “when you say, ‘We have these liability protections, but at the same time, we have the right to throttle content.'”
And it is confusing from a legal standpoint. In most other fields of law, Goldman says, if you have control, you also have liability. But Section 230 says a service can exercise discretion and still not be liable. That’s what makes it counterintuitive. It’s also what makes it effective. “Congress doesn’t understand how brilliantly they acted 22 years ago,” Goldman says.
Congress could, of course, amend the law, as Rutherford proposes, to carve out circumstances under which tech companies would face legal liability. But as it stands today, the seemingly incongruous argument the tech industry’s making—that it’s both a publisher that can decide what to publish and a platform that can’t be held liable for what it publishes—is more of a public relations problem for Silicon Valley than a legal one.
Facebook, Google, and Twitter have certainly not made that argument easy to follow. Earlier this year, during his own hearing before the Senate, Facebook CEO Mark Zuckerberg said Facebook is a tech company, not a publisher. “We’re responsible for the content, but we don’t produce the content,” he said. Other tech companies have argued much the same thing. That’s led to what Goldman calls a “talking out both sides of their mouth problem,” and complicated the public’s understanding of the issue.
Still, not every member of Congress seemed as confused on Tuesday. “There’s this thing called the First Amendment. We can’t regulate content,” said Democratic congressman Ted Lieu. “The only thing worse than an Alex Jones video is the government trying to tell Google […] to prevent people from watching an Alex Jones video.”
Some experts, like Citron, believe there are ways Section 230 could be changed responsibly. It could, for instance, apply only to companies that make a genuine good-faith effort to police their platforms; under that standard, companies that make little to no effort could be held liable. Even then, though, Citron says the tech companies that testified Tuesday, while by no means perfect, would be considered “paragons of responsibility.” They don’t always get it right, but they do employ thousands of content moderators who follow dense, if imperfect, content guidelines to make those decisions.
And yet, other Section 230 advocates like Goldman fear that the recent criticism of the law, coupled with a widespread misunderstanding of it in government, could lead to lasting changes that would effectively gut Section 230 and change the internet as we know it for the worse. “I’m terrified because right now, it’s open season for bad policy ideas,” Goldman says. “Undermining Section 230 is a terrible policy idea, so that means it’s apparently fair game.”