September 18, 2019
Facebook has unveiled the charter for its ‘supreme court,’ a supposedly independent content moderation board that will take money from, and be appointed by, Facebook itself – while making binding decisions. What could go wrong?
Facebook has released preliminary plans for an “Oversight Board” tasked with reviewing content disputes. According to a white paper released Tuesday, the 40-member body, previously referred to as Facebook’s “supreme court,” will have the authority to make binding decisions on cases brought to it by users or by the social media behemoth itself. The paper stresses that the new board will be completely independent of Facebook – by popular request.
The company has clearly taken pains to make this new construct look independent, the sort of place a user might be able to go to get justice after being deplatformed by an algorithm incapable of understanding sarcasm or context. But board members will be paid out of a trust funded by Facebook and managed by trustees appointed by Facebook, while the initial board members will also be appointed by Facebook.
“We agreed with feedback that Facebook alone should not name the entire board,” the release states, proceeding to outline how Facebook will select “a small group of initial members,” who will then fill out the rest of the board. The trustees – also appointed by Facebook – will make the formal appointments of members, who will serve three-year terms.
Facebook insists it is “committed to selecting a diverse and qualified group” – no current or former Facebook employees or spouses thereof, current government officials or lobbyists (former ones are apparently OK), high-ranking officials within political parties (low-ranking is apparently cool), or significant shareholders of Facebook need apply. A law firm will be employed to vet candidates for conflicts of interest, but given Facebook’s apparent inability to recognize the conflict of interest inherent in paying “independent” board members to make binding content decisions, it’s hard to tell what would qualify as a conflict.
How will Facebook decide which cases get the democracy treatment? Cases with significant real-world impact – meaning they affect a large number of people, threaten “someone else’s voice, safety, privacy, or dignity,” or have sparked public debate – and that are difficult to parse under existing policy will be heard first. “For now,” only Facebook-initiated cases will be heard by the board; Facebook users will be able to launch their own appeals by mid-2020. Is the company merely reaching for an “independent” rubber stamp to justify some of its more controversial decisions as the antitrust sharks start circling? Decisions will not only be binding, but also applicable to other cases not being heard, if they’re deemed similar enough – potentially opening a Pandora’s box of far-reaching censorship.
In a letter accompanying the white paper, Facebook CEO Mark Zuckerberg claims the company’s moderators take into account “authenticity, safety, privacy, and dignity – guided by international human rights standards” when they make a decision to take down content. Given that the company’s own lawyers have questioned the very existence of users’ privacy, what does this bode for the other “values,” let alone international human rights standards?
Perhaps most ominously, Zuckerberg seems to have bigger things in mind for his Oversight Board than merely weighing in on Facebook content moderation decisions. “We expect the board will only hear a small number of cases at first, but over time we hope it will expand its scope and potentially include more companies across the industry as well” (emphasis added). Not exactly a throwaway line from the man who said he wanted Facebook to become an internet driver’s license. The private-sector social credit score may be closer than we think – and Zuckerberg would very much like to be the scorekeeper.