A Week When Social Media Showed Its True Face



Last week Silicon Valley reminded the world just how much control it now wields over the national discourse. On Tuesday, Twitter apologized for one presidential tweet and labeled two others as “false.” On Wednesday, The Wall Street Journal published an internal 2018 Facebook study acknowledging that the platform promoted national “divisiveness.” A day later President Trump signed an executive order targeting social platforms’ liability immunity. Undeterred, on Friday Twitter hid an official presidential tweet for “glorifying violence,” and Facebook warned the White House that it too could impose limits on government speech regarding “state use of force.”

What does these social platforms’ newfound willingness to curb government speech suggest about the future of democracy?

In the past, social media companies did little to limit Trump’s use of their platforms. That has now changed. While news outlets have long fact-checked politicians, their ratings were available only to readers of those outlets. In contrast, Twitter’s warning label is displayed atop official U.S. government tweets themselves.

In response to Tuesday’s actions, Trump codified into an executive order the arguments long made by critics of social media companies: that by virtue of deciding what kinds of speech they allow on their platforms and actively removing, or shading, speech with which they disagree, the platforms are little different than news entities that enforce editorial guidelines on their opinion sections.

When the president then warned in a tweet about protests in Minneapolis that looting would lead to shooting, Twitter both hid the message from view and limited its distribution across its platform as a violation of its acceptable speech policies. In contrast, Facebook left the same presidential announcement up on its platform, justifying its decision by noting that the post amounted to an official announcement of “state use of force.” Yet founder and CEO Mark Zuckerberg added that “today’s situation raises important questions about what potential limits of that discussion should be” and that “if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician. We have been in touch with the White House today to explain these policies.”

What are we to make of this, given that Twitter and Facebook have become the de facto platforms through which many government officials announce policy decisions and actions? If those platforms threatened to permanently ban any politician whose actions they disliked, cutting off their most important way of reaching voters, would that politician self-censor?

Coincidentally, the Wall Street Journal’s peek this week at Facebook’s decision making over the past several years spotlighted a 2018 internal report acknowledging that “[o]ur algorithms exploit the human brain’s attraction to divisiveness. … If left unchecked [it will show users] more and more divisive content in an effort to gain user attention & increase time on the platform.”

Time and again, the Journal’s investigation shows, the company confronted questions existential to the functioning of democracy, from controlling whose voices are heard to determining whether proposed features might disproportionately silence certain voices, such as those of conservatives.

Despite their enormous influence on the public discourse, little of this internal deliberation is ever seen by the public or policymakers and thus subjected to a larger, societal debate. Indeed, the company confirmed that it would not be releasing the internal research cited by the Journal.

Facebook’s lack of transparency over the years regarding its research into and deliberations about how its platform affects the functioning of society suggests it does not believe it bears a moral responsibility to the public or policymakers to shed greater light on its actions. As a private company it is, in fact, under no legal obligation to do so. Yet, its outsized role in society’s free exchange of ideas underscores the critical importance that we, as a nation, better understand these platforms’ influence on us.

It isn’t just social media platforms that are increasingly curbing free speech. Nearly every web publishing platform today now enforces some kind of acceptable speech guidelines and removes violating content.

In the end, if Congress wanted to take concrete action to regulate social media platforms, one of the most meaningful steps it could take would be to mandate that these companies reveal their own research into their impact on society and why they’ve made the decisions they have that affect democracy itself.

RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.
