With all due respect to Bella Silverstein, her take on Facebook’s responsibility for last week’s terror attack in New Zealand is spectacularly bad, for several reasons I’ll enumerate here.

She lays a foundation that Facebook is somehow an accessory to the atrocity committed and insinuates the platform was a knowing and willing participant in the act of a single disturbed individual. The solution, in her mind, is for Facebook to police all content posted to the site to ensure none of it goes against what she calls the public good. While her arguments are directed at Facebook, they can be applied wholesale to any company that operates a website. I find many of Facebook’s actions over the years indefensible, and I won’t attempt to defend them here. Instead, I’m going to defend the core functionality of Facebook and other services like it: YouTube, Twitter, Instagram, et al.

The argument that Facebook is a publisher is wholly incorrect. Facebook is a platform that allows nearly frictionless sharing of information. It operates under Section 230 of the Communications Decency Act, a law passed in 1996, which “provides immunity from liability for providers and users of an ‘interactive computer service’ who publish information provided by third-party users.” Essentially, it shields services from liability for the actions of their users. Without it, lawsuits would be filed against the service whenever someone posted defamatory material, uploaded a copyrighted song, or engaged in a myriad of other activities.

Online services have to walk a delicate path with the content found on their platforms. Police it too much, and they risk losing their CDA 230 immunity. Police it too little, and they risk upsetting users who encounter content they find abhorrent. For this legal reason alone, Facebook simply cannot editorialize the content on its platform.

Facebook is profitable because many people use it, and many people use it because of the frictionless way it lets them share various aspects of their lives. Ms. Silverstein argues that all posted content should be held for review by a person. That removes the “frictionless” aspect of the platform. If someone had to wait to post a picture of their food on Instagram, or if your aunt couldn’t post those Minions memes every 45 seconds, they would simply stop using it and move to another platform that facilitated sharing the way we have grown accustomed to.

Never mind the resources involved in reviewing all of the content posted. These platforms have mechanisms to report content, and through a combination of automation and human intervention they review those reports. Google, YouTube, Twitter, Facebook and other platforms employ tens of thousands of people to review content. Even if each of those human reviewers were paid minimum wage, it would still be an expensive endeavor. These teams currently review only what is reported, and millions of hours are spent on the process every week. If they instead had to review every piece of content before it appeared on the sites, these services would go bankrupt overnight.
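
To put rough numbers behind that claim, here is a minimal back-of-envelope sketch in Python. Every figure in it is an illustrative assumption of mine, not a reported platform statistic.

    # Back-of-envelope cost of reviewing every post before publication.
    # All figures are illustrative assumptions, not reported platform numbers.
    posts_per_day = 1_000_000_000      # assumed daily uploads on a large platform
    seconds_per_review = 30            # assumed average review time per post
    hourly_wage = 7.25                 # U.S. federal minimum wage, in dollars

    review_hours_per_day = posts_per_day * seconds_per_review / 3600
    reviewers_needed = review_hours_per_day / 8    # assuming 8-hour shifts
    daily_labor_cost = review_hours_per_day * hourly_wage

    print(f"reviewer-hours per day: {review_hours_per_day:,.0f}")
    print(f"full-time reviewers needed: {reviewers_needed:,.0f}")
    print(f"daily labor cost: ${daily_labor_cost:,.0f}")

Under those assumptions, the arithmetic works out to roughly a million full-time reviewers and tens of millions of dollars in wages every day, before benefits, management, or tooling.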

These human review teams also have remarkably high burnout rates and experience high turnover; the figures I’ve heard put the average tenure in these positions between six weeks and three months. Reviewers are forced to view content we almost universally agree is abhorrent, and it takes a severe psychological toll. Many former reviewers suffer PTSD for years after they have left the position. I won’t go further than to say these jobs are less than desirable, so finding millions of people to staff these positions on an ongoing basis is an impossibility.

The core of Ms. Silverstein’s argument is that had these review teams been in place, the live video stream of the New Zealand shooting wouldn’t have happened. Technologically speaking, this is false. It was a live video; there was no way for someone to review it beforehand and know what the content would be. There isn’t a control room like you would find at a TV station, with a wall of screens monitoring every video simultaneously. And to circumvent a review process, all the shooter had to do was describe the video as “live stream of me playing Super Mario Bros.” and it likely would have been waved through.

I cannot conceive of a system that allows live videos to remain truly “live” while also being reviewed by a human and staying cost effective. There simply isn’t a way to do it. However, every live video can be reported to the review team, so users who find the content objectionable can subject it to review. This is still a costly endeavor, but it is far more reasonable and doesn’t inhibit the frictionless sharing the platform depends on. Users who repeatedly abuse platforms, and have multiple posts removed for violating the website’s terms of service, are removed from the platform entirely. There is no evidence I am aware of that the perpetrator of last week’s attack ran afoul of the terms prior to the event, and he was therefore allowed to post.
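
To make that reporting flow concrete, here is a minimal sketch in Python of how a report-driven review queue and a strike-based removal policy might fit together. The thresholds, names, and structure are hypothetical, not a description of how any particular platform actually works.

    from collections import defaultdict

    REPORTS_BEFORE_REVIEW = 3    # hypothetical: reports needed before a human looks
    STRIKES_BEFORE_REMOVAL = 3   # hypothetical: confirmed violations before removal

    report_counts = defaultdict(int)   # post_id -> number of user reports
    strikes = defaultdict(int)         # user_id -> confirmed violations
    review_queue = []                  # posts awaiting human review

    def report_post(post_id: str) -> None:
        """A user flags a post; queue it for human review once enough reports arrive."""
        report_counts[post_id] += 1
        if report_counts[post_id] == REPORTS_BEFORE_REVIEW:
            review_queue.append(post_id)

    def resolve_report(post_id: str, author_id: str, violates_terms: bool) -> None:
        """Record a human reviewer's decision and remove repeat offenders."""
        if violates_terms:
            strikes[author_id] += 1
            if strikes[author_id] >= STRIKES_BEFORE_REMOVAL:
                print(f"removing account {author_id} for repeated violations")

The point of the sketch is that human attention is spent only where users have already flagged a problem, which keeps the cost proportional to reported content rather than to everything posted.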

Reportedly, the original video was viewed fewer than 4,000 times. Platforms are currently doing their best to remove the video whenever it is reposted to their sites. Technologically speaking, it is far easier for an automated system to remove a known video containing objectionable content than to catch a video it has never seen before. These systems are heuristic to an extent, and while machine learning can identify objects and actions, it cannot currently discern the context or intent of the content itself. Even the best systems we currently have would not have been able to differentiate between the terrorist’s video and a video game, TV show, or movie with similar content.
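
The distinction between known and never-before-seen content is worth illustrating. Once a platform has fingerprinted an offending video, blocking exact re-uploads is a cheap lookup; judging novel content is an entirely different problem. The Python below is only a sketch of the idea: real matching systems use perceptual hashes that survive re-encoding and cropping, whereas the plain cryptographic hash here only catches byte-identical copies.

    import hashlib

    # Fingerprints of videos already judged to violate policy.
    known_bad_hashes = set()

    def fingerprint(video_bytes: bytes) -> str:
        # A production system would use a perceptual hash robust to re-encoding;
        # SHA-256 only matches byte-identical copies.
        return hashlib.sha256(video_bytes).hexdigest()

    def register_known_bad(video_bytes: bytes) -> None:
        known_bad_hashes.add(fingerprint(video_bytes))

    def should_block_upload(video_bytes: bytes) -> bool:
        # Cheap check against known material; novel content still needs review.
        return fingerprint(video_bytes) in known_bad_hashes

Nothing in a lookup like this helps with the very first upload, which is exactly the case Ms. Silverstein wants solved.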

In the face of tragedy, we hear calls for someone to do something, frequently in the form of new laws or amendments to existing ones. The call for Facebook to “do something” ignores the law, ignores the economic costs, ignores the human costs, ignores the technological feasibility, and demands a fix anyway. As a software engineer who has worked on these platforms, I hear the calls for us to do something. They boil down to “you made it, you can fix it,” which is a fallacy. Yet we are frequently told the fix is easy for nerds; we just need to nerd harder.

I engineer products that extend and amplify humanity. I’ll give you a better platform when you give me better humans.

Brett Haddock is a Canyon Country resident and a former Santa Clarita City Council candidate.

The Santa Clarita Valley Proclaimer’s opinion section does not represent the official opinions of Radio Free Santa Clarita, its board and its supporters.

2 COMMENTS

  1. Bella Silverstein Posted on March 20, 2019 at 11:48 am

    Brett — I respect your opinion and would like to see you on our city council. I also respect your coding ability as a former software engineer for eBay. Facebook was absolutely not a knowing and willing participant in New Zealand’s terrorist attack. No one has implied that. I apologize if I made that unclear.

    You’re right: the Communications Decency Act (CDA) has been interpreted to say that platforms like Facebook are not to be considered publishers, and hence are not liable for what their users post.

    The CDA was enacted in 1996. A lot has changed since then. Back then we were talking about pornography and free speech versus obscene speech, and the right to sue or not be sued for libel.

    Now we’re discussing mass murder. There’s a difference. Laws must change.

    Legal experts agree. The Attorneys General of almost every state in the country asked Congress to remove the immunity provision of the CDA in 2013. They’re still waiting for Congress to act. I suggest they not hold their breath. In the meantime, we can all sit back and expect more mass shootings. Especially since the killers have a killer platform.

    The Sunday New York Times has a circulation of 1,087,500. Facebook reaches 1.74 billion people. There’s never been a platform like this. It may be futile to hope that Mark Zuckerberg or Sheryl Sandberg or Congress will do the right thing. But that does not exempt us from trying. As Jacinda Ardern has shown, when people are murdered en masse, it’s time to act. I know what I would do.

  2. Brett Haddock Posted on March 20, 2019 at 4:22 pm

    Well, that’s kind of my point: it’s untenable to change the platforms in response to these things. Removing CDA 230 would effectively kill the internet entirely. Without the safe harbor provision, services won’t accept any user content, and therefore won’t exist. CDA 230 is absolutely vital to the internet as we know it.

    Beyond that, it’s not possible for them to review and approve everything that is posted to their sites. Equating Facebook to a publisher is simply wrong.

    It isn’t Facebook’s fault the video was streamed and continues to be uploaded, but they’re doing their best to take it down when they find it.

    This would be like blaming Honda for building a vehicle capable of driving bank robbers away from a crime scene. Should Honda manually approve every use of the Civic? No, it isn’t possible. They would have to employ hundreds of thousands of people, and the cost of the car would rise accordingly.

    Aside from all that, if Facebook or other platforms did actually implement a system to approve content, users would move elsewhere. It would be a game of whac-a-mole.

    Calling on platforms to make changes skirts the core issue entirely. We need to find a way to stop madmen from committing atrocities. The censorial route ignores that issue and only buries our heads in the sand. The platforms aren’t at fault; the current climate of rhetoric that enables, emboldens, and creates martyrs of these people is.

    Facebook didn’t create this shooter, society did.

Comments are closed.
