What’s Behind The Trump-Twitter Clash?
by Michael Posner
Late yesterday, President Trump escalated his war against the social media companies, Twitter in particular, issuing an executive order that seeks to limit their legal protections against liability for content posted on their sites. The irony of his action, as others have pointed out, is that the President himself has been one of the greatest beneficiaries of the current state of these social media platforms. This is a space where harmful content, including provably false disinformation, especially in the political realm, continues to flow freely. With his 84 million followers on Twitter, the President has used the platform as a megaphone for his divisive brand of politics, rooted in his penchant for stoking hatred and fear to stir support among his most ardent constituents. Unconstrained by facts, he routinely wades in and often creates controversies—our first president to govern by Tweet.
The most prominent social media companies, Facebook, YouTube, and Twitter, have been enablers in this grim spectacle. Facebook and YouTube in particular have been buoyed by unimaginable global growth, record stock prices, and multi-billion-dollar ad revenues, even as they have sought to downplay their governance responsibilities for what appears on their sites. They continue to argue, as Facebook’s chief Mark Zuckerberg did again just the other day, that they are champions of free speech, not “arbiters of the truth.”
Though in the last few days Twitter has assumed greater responsibility, these companies have traditionally wanted us to view them as plumbers, running pipes with little control over, or responsibility for, what flows through them. On that view, they reject the notion that they are publishers, relying on Section 230 of the Communications Decency Act of 1996, the law the President now seeks to modify, to avoid the legal liabilities that traditional publishers bear.
The true status of these companies falls somewhere in between plumber and publisher, and we now need to develop a third paradigm, one that acknowledges and addresses their unique character. Three core principles need to guide the development of this new model.
The first is that government regulation of content is dangerous and wrong. As James Madison and the Founders rightly understood, our democracy depends on our capacity to freely express criticism of government actions. As Madison wrote, “A popular Government, without popular information, or the means of acquiring it, is but a Prologue to a Farce or a Tragedy; or, perhaps both.” The impetus for yesterday’s executive order was Twitter’s notice suggesting that the President was incorrect in claiming that voting by mail inevitably leads to massive election fraud. This type of speech by Twitter is exactly what Madison and the Founders sought to protect when they drafted the First Amendment.
Second, the social media platforms themselves need to assume greater responsibility for what is on their sites, especially for political content. To their credit, these companies have carved out certain areas where they are now much more vigilant about taking down false or harmful content, or at a minimum demoting it. In the last few months they have taken down mis- and disinformation relating to the coronavirus, such as posts promoting fake cures and prevention techniques. They also recognize a heightened responsibility to remove hate speech, child pornography, and bullying as part of enforcing their internal community standards.

When the President tweeted that demonstrators in Minnesota might be shot if looting began, Twitter quickly posted a warning that the President’s tweet violated the company’s policy against glorifying violence. Twitter’s notice challenging the President’s tweet about voting by mail leading to election fraud, however, fell into a different category of special concern: falsehoods relating to the conduct of elections.

Despite these laudable recent efforts, Twitter and the other platforms need to go further. Last fall, in a report entitled “Disinformation and the 2020 Election,” the NYU Stern Center for Business and Human Rights recommended that the platforms take down provably false content. As my colleague Paul Barrett wrote, “The highest priority should be removing provably false content that affects politics or democratic institutions.” As we head into the final months before this fall’s election, the need for the social media companies to address political disinformation grows ever more pressing.
Finally, we all need to recognize that governance of the internet is a work in progress, involving an industry still in its adolescence. We sometimes forget that Facebook was launched in 2004, YouTube in 2005, and Twitter in 2006. The exponential growth of these services attests to their brilliance in harnessing the technology and to the vital role they can and often do play as sources of information, education, and entertainment, as well as venues for commerce and political engagement. But having seen this potential, we now need a more thoughtful conversation, outside the realm of partisan politics, to better define the roles of government, the platforms, and all of us as consumers in developing a healthier and less divisive online environment. To the extent the platforms are willing to take on greater responsibility, the pressure for government to step in will diminish. Today the internet is dividing us, not healing us, and the status quo will not hold. Nothing less than our democracy depends on taking corrective action to address the harmful content that is tearing our society apart.