Facebook algorithms promote divisive content, but company decided not to act

Facebook's algorithms are partly hand-coded and partly driven by machine learning, which is why the company had to carry out internal research to learn that they were effectively promoting divisive content to users…

The primary job of the algorithms is to maximize user engagement, so they highlight content that achieves this and reduce the visibility of content that doesn’t.
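
To see why an engagement objective can amplify divisive material, here is a minimal, hypothetical sketch of engagement-based ranking. It is not Facebook's actual code; the Post fields, weights, and scoring are assumptions chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    # Hypothetical engagement signals; real feed-ranking features are far richer.
    author: str
    text: str
    comments: int
    shares: int
    reactions: int

def engagement_score(post: Post) -> float:
    # Toy proxy for engagement: weight active responses (comments, shares)
    # more heavily than passive ones (reactions). Weights are assumptions.
    return 3.0 * post.comments + 2.0 * post.shares + 1.0 * post.reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort so the highest-engagement posts surface first. With no penalty
    # for divisiveness, whatever provokes the most responses wins placement.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", "calm local news update", comments=2, shares=1, reactions=40),
        Post("b", "outrage-bait hot take", comments=80, shares=30, reactions=25),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):7.1f}  {post.text}")
```

Because sensational posts tend to generate more comments and shares than calm ones, a ranker like this surfaces them more often, which is the feedback loop the 2018 presentation warned about.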

An internal investigation found that one unintended consequence of this was that users were increasingly served sensational and polarizing content, because that was what drove them to respond. The Wall Street Journal reports that senior executives at the social network were asked to take action to limit the visibility of divisive content, but chose not to do so.

A Facebook team had a blunt message for senior executives. The company’s algorithms weren’t bringing people together. They were driving people apart.

‘Our algorithms exploit the human brain’s attraction to divisiveness,’ read a slide from a 2018 presentation. ‘If left unchecked,’ it warned, Facebook would feed users ‘more and more divisive content in an effort to gain user attention & increase time on the platform’ […]

The high number of extremist groups was concerning, [an earlier] presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth, [finding] that ‘64% of all extremist group joins are due to our recommendation tools’ and that most of the activity came from the platform’s ‘Groups You Should Join’ and ‘Discover’ algorithms: ‘Our recommendation systems grow the problem’ […]

Facebook had kicked off an internal effort to understand how its platform shaped user behavior and how the company might address potential harms. Chief Executive Mark Zuckerberg had in public and private expressed concern about ‘sensationalism and polarization.’

But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.

The report says there were two reasons for this, in addition to the obvious one of not wanting to reduce eyeball time for ads.

First, the company took the view that it should not interfere with free speech, even when intervening would be in users' interests, because doing so would be "paternalistic." CEO Mark Zuckerberg is said to be a particularly vigorous proponent of this argument.

He argues the platform is in fact a guardian of free speech, even when the content is objectionable — a position that drove Facebook’s decision not to fact-check political advertising ahead of the 2020 election.

Second, the company worried that any action to limit divisiveness might be perceived as politically motivated.

Some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.

The whole piece is worth reading.