From the Wall Street Journal (via John Gruber), “Facebook Executives Shut Down Efforts to Make the Site Less Divisive”:
A 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found [that] “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms.
Gruber noted:
In the old days, on, say, Usenet, there were plenty of groups for extremists. There were private email lists for extremists. But there was no recommendation algorithm promoting those groups.
This crystallized in my mind the extent to which recommendation algorithms are central to both the successes and failures of social media. “Success” in terms of reach: the algorithms pick the most addictive posts to keep people hooked on the site, driving massive engagement. “Failure” in terms of the human cost: those same posts are often also divisive, distressing, and untrue. The algorithms are widely and directly boosting extremely problematic content!
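To make the incentive concrete, here’s a minimal sketch (mine, not any platform’s actual code) of a feed ranked purely by predicted engagement. The `Post` fields are entirely hypothetical stand-ins for the much richer features a real ranker would model:

```python
from dataclasses import dataclass

# Hypothetical post representation; real rankers use far richer features.
@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # modeled likelihood of clicks/comments/shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # A pure engagement objective: whatever is most clickable rises
    # to the top, regardless of any other quality of the content.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# Example: the most provocative post wins the top slot.
feed = rank_feed([
    Post("calm-update", 0.10),
    Post("outrage-bait", 0.85),
    Post("family-photo", 0.30),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'family-photo', 'calm-update']
```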
And this dynamic isn’t unique to social media: human editors at tabloids and cable news have long run similar “recommendation algorithms” in their heads, picking stories and headlines to keep people watching.
How do we make alternate recommendation algorithms available that optimize for other qualities, such as well-being, empathy, and trust?
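One hopeful observation: the ranking objective is just a scoring function, so in principle it can be re-weighted. Here’s a minimal sketch of the same kind of ranker blending engagement with other signals; `predicted_well_being` and `predicted_divisiveness` are hypothetical quantities that don’t correspond to any real platform API, and reliably estimating them is the hard, unsolved part:

```python
from dataclasses import dataclass

# Hypothetical scores; producing them reliably is the real open problem.
@dataclass
class Post:
    post_id: str
    predicted_engagement: float
    predicted_well_being: float    # e.g. modeled effect on the reader
    predicted_divisiveness: float  # e.g. a toxicity/polarization classifier score

def rank_feed(posts: list[Post],
              w_engage: float = 0.3,
              w_well_being: float = 0.5,
              w_divisive: float = 0.6) -> list[Post]:
    # Blend engagement with other qualities instead of maximizing
    # engagement alone; the weights encode what the feed optimizes for.
    def score(p: Post) -> float:
        return (w_engage * p.predicted_engagement
                + w_well_being * p.predicted_well_being
                - w_divisive * p.predicted_divisiveness)
    return sorted(posts, key=score, reverse=True)
```

The point isn’t these particular weights; it’s that “what the feed optimizes for” is an explicit, changeable choice rather than a law of nature.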