With all due respect to Bella Silverstein, her take on Facebook’s responsibility for last week’s terror attack in New Zealand is spectacularly bad, for several reasons I’ll enumerate here.
She lays a foundation that Facebook is somehow an accessory to the atrocity committed and insinuates the platform was a knowing and willing participant in the act of a single disturbed individual. The solution, in her mind, is for Facebook to police all content posted to the site to ensure none of it goes against what she calls the public good. While her arguments are directed at Facebook, they can be applied wholesale to any company that operates a website. To be clear, I find many of Facebook’s actions over the years indefensible, and I won’t defend them here. Instead, I’m going to defend the core functionality of Facebook and other services like it: YouTube, Twitter, Instagram, et al.
The argument that Facebook is a publisher is wholly incorrect. Facebook is a platform that allows the nearly frictionless sharing of information. It operates under Section 230 of the Communications Decency Act, a law passed in 1996. To quote, the law “provides immunity from liability for providers and users of an ‘interactive computer service’ who publish information provided by third-party users.” Essentially, it indemnifies services against penalties for the actions of their users. Without it, lawsuits would be filed against the service whenever someone posted defamatory material, uploaded a copyrighted song, or engaged in a myriad of other activities.
Online services have to walk a delicate path with the content found on their platforms. Police it too much, and they risk losing their CDA 230 immunity. Police it too little, and they risk alienating users who find the content abhorrent. For the legal reason alone, Facebook simply cannot editorialize the content on its platform.
Facebook is profitable because many people use it. Many people use it because of the frictionless way it lets them share various aspects of their lives. Ms. Silverstein argues that all posted content should be held for review by a person. This removes the “frictionless” aspect of the platform. If someone had to wait to post a picture of their food on Instagram, or if your aunt couldn’t post those Minions memes every 45 seconds, they would simply stop using the platform and move to another one that facilitated sharing in the way we have grown accustomed to.
Never mind the resources involved in reviewing all of the content posted. These platforms have a mechanism to report content, and through a mix of automation and human intervention they review those reports. Google, YouTube, Twitter, Facebook, and other platforms have tens of thousands of employees reviewing content. Even if each of their human reviewers were paid minimum wage, it would still be an expensive endeavor. These teams currently review only what is reported, and millions of hours are spent on the process every week. If they instead had to review content before it was allowed on the sites, these services would go bankrupt overnight.
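To put a rough number on that, here is a back-of-the-envelope calculation. The headcount and schedule are my own assumptions (the figures above only say “tens of thousands” of reviewers at minimum wage), so treat the result as illustrative:

```python
# Hypothetical payroll for a human review workforce.
# Assumed figures: 30,000 reviewers (the article says "tens of
# thousands"), 2019 US federal minimum wage, full-time schedules.
reviewers = 30_000        # assumed headcount
wage = 7.25               # USD per hour, federal minimum wage
hours_per_year = 2_080    # 40 hours/week * 52 weeks

annual_cost = reviewers * wage * hours_per_year
print(f"${annual_cost:,.0f} per year")  # prints "$452,400,000 per year"
```

Nearly half a billion dollars a year, on wages alone, at the lowest legal pay, and that is just to handle what gets reported. Reviewing everything before it posts would multiply this many times over.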
These human review teams also have remarkably high burnout and turnover rates. The average tenures I’ve heard cited range from six weeks to three months. Reviewers are forced to view content we almost universally agree is abhorrent, and it takes a severe psychological toll; many former reviewers suffer PTSD for years after leaving the position. I won’t go further than to say these jobs are less than desirable, so finding millions of people to staff them on an ongoing basis is an impossibility.
The core of Ms. Silverstein’s argument is that had these review teams been in place, the live video stream of the New Zealand shooting wouldn’t have happened. Technologically speaking, this is false. It was a live video; there was no way for someone to review it beforehand and know what the content would be. There isn’t a control room like you would find at a TV station, with a wall of screens monitoring all video simultaneously. And to circumvent a review process, all the shooter had to do was describe the video as “live stream of me playing Super Mario Bros.” and it likely would have been waved through.
I cannot conceive of a system that allows live videos to remain “live” while also being reviewed by a human, all cost-effectively. There simply isn’t a way to do it. However, each live video can be reported to the review team, so users who find the content objectionable can subject it to review. This is still a costly endeavor, but a far more reasonable one, and it doesn’t inhibit the frictionless sharing the platform depends on. Users who repeatedly abuse platforms, with multiple posts removed for violating the website’s terms of service, are removed from the platform entirely. There is no evidence I am aware of that the perpetrator of last week’s attack ran afoul of the terms prior to the event, and he was therefore allowed to post.
Reportedly, the original video was viewed fewer than 4,000 times. Platforms are currently doing their best to remove the video as it is reposted to their sites. Technologically speaking, it is far easier for an automated system to remove a known video containing objectionable content than a video it has never seen before. These systems are heuristic to an extent, and while machine learning/AI can identify objects and actions, it cannot currently discern the meaning of the content itself. Even the best systems we have today could not have differentiated the terrorist’s video from a video game, TV show, or movie with similar content.
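The distinction between “known” and “never seen” content comes down to fingerprinting. Here is a deliberately toy sketch of the idea, assuming frames arrive as 8x8 grayscale pixel grids; production systems use far more robust perceptual hashes, but the mechanics are the same: fingerprint flagged content once, then compare every new upload against the list of fingerprints.

```python
# Toy "known content" matcher. Assumes frames are 8x8 grids of
# grayscale values (0-255). Real perceptual hashing is much more
# sophisticated; this only illustrates the mechanism.

def average_hash(frame):
    """Map an 8x8 grayscale frame to a 64-bit fingerprint:
    each bit records whether a pixel is above the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known(frame, known_hashes, threshold=5):
    """True if the frame is close to any previously flagged fingerprint."""
    h = average_hash(frame)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

Lightly edited re-uploads (cropped, recompressed, watermarked) flip only a few bits, so matching uses a distance threshold rather than exact equality. A genuinely new video, by contrast, has no fingerprint on file at all, which is exactly why a first upload can slip through while reposts get caught.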
In the face of tragedy, we hear calls for someone to do something. The something is frequently new laws, or amendments to existing ones. The call for Facebook to “do something” ignores the law, ignores the economic costs, ignores the human costs, ignores the technological feasibility, and demands a fix. As a software engineer who has worked on these platforms, I hear the calls for us to do something. They boil down to “you made it, you can fix it,” which is a fallacy. Yet we are frequently told the fix is easy for nerds; we just need to nerd harder.
I engineer products that extend and amplify humanity. I’ll give you a better platform when you give me better humans.
Brett Haddock is a Canyon Country resident and a former Santa Clarita City Council candidate.
The Santa Clarita Valley Proclaimer’s opinion section does not represent the official opinions of Radio Free Santa Clarita, its board and its supporters.