In this week’s Roundup: Threads won’t amplify news content during the Israel-Hamas conflict, why NPR’s decision to ditch X has had less of an impact than you might think, and why the prospects of generative AI may cool in 2024.

News

Meta’s latest social platform, Threads, will not be boosting news content during Israel’s conflict with Hamas, writes TechCrunch’s Sarah Perez.

According to Adam Mosseri, Meta’s Head of Instagram, amplifying news “would be too risky given the maturity of the platform, the downsides of over-promising, and the stakes.” The news comes after other social media platforms have been flooded with misinformation and violent content.

Mosseri’s wariness about news content extends to Instagram, too. “Having worked on Facebook for a long time,” he said, “and leaning in really hard there, we want to be really careful not to over-promise and under-deliver.”

Utah has become the third US state to initiate legal proceedings against TikTok. As Kim Bojórquez reports for Axios, the lawsuit alleges, among other things, that the app’s design and algorithms violate the state’s consumer protection laws by inducing children to keep using it.

The exploitation of behavioral psychology to increase the “addictiveness” of social media platforms is well known. Speaking in 2017, Sean Parker, Facebook’s first president, recalled the thinking behind the platform’s development: “The thought process was,” he said, “all about, ‘How do we consume as much of your time and conscious attention as possible?’ And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever, and that’s going to get you to contribute more content, and that’s going to get you more likes and comments. It’s a social validation feedback loop. … You’re exploiting a vulnerability in human psychology.”

Whether Utah will pursue similar claims against other social media companies remains to be seen.

Analysis

Following its erroneous designation as “U.S. state-affiliated media,” NPR ditched X in the middle of April. Now, six months later, says Gabe Bullard at Nieman Reports, an internal memo has been circulated outlining the effect of the decision on traffic: none. More precisely, the organization’s departure from X led to a drop of just one percentage point.

This shouldn’t be surprising. Our Social Media Index, a tool that tracks the share of publishers’ referral traffic coming from various social media platforms, has X generating around 0.65% of all publisher traffic.

But for organizations like NPR, X was never just about the clicks. Rather, it was a way to generate other forms of engagement and be part of the conversation.

Bullard’s article looks at how NPR has adapted its social strategy on other networks to compensate.

It was only a matter of time before the staff cuts to X’s anti-disinformation team resulted in the inevitable. Unfortunately, it took just two weeks. The conflict currently raging in southern Israel and Gaza has led to a flood of misinformation on the platform, according to Charlie Warzel at The Atlantic.

Indeed, in a later-deleted post on Monday, Elon Musk recommended two accounts to his millions of followers, both of which had spread fake stories about an explosion near the Pentagon in May, and one of which was later found to have posted antisemitic content.

The spread of misinformation on X was entirely predictable, but the dearth of social platforms that seem up to the task of replacing what Twitter offered to users is symptomatic of a wider malaise within the industry, contends Warzel.

AI

An overheated generative AI market is set for a major slowdown next year, reports CNBC’s Ryan Browne.

Produced by analysts at CCS Insight, the report predicts that the cost and complexity of the technology will dent some of the early hype.

The report also foresees problems for the regulatory measures working their way through the European Parliament. The EU’s AI Act will be the first comprehensive legislation governing the use and development of AI, but the report argues that the sheer pace of change in the technology will force multiple revisions to keep the rules current.

Another potential drag on the unencumbered growth of AI may well be heightened awareness of its environmental impact.

A study from the Netherlands, reported by Zoe Kleinman and Chris Vallance for the BBC, found that in a worst-case scenario, the annual electricity consumed in producing and running AI systems could match that of a small country, or around 0.5% of the world’s yearly electricity consumption.
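For a rough sense of scale, here is a minimal back-of-envelope sketch of how a “small country” lines up with the 0.5% figure. The numbers used — about 115 TWh per year for a country like the Netherlands and about 25,000 TWh of global electricity consumption — are illustrative approximations, not figures taken from the study or the BBC’s report.

```python
# Back-of-envelope check of the "small country ~ 0.5%" comparison.
# Both figures are rough, illustrative assumptions, not values from the study.

WORLD_ELECTRICITY_TWH = 25_000   # approximate global annual electricity consumption, in TWh
SMALL_COUNTRY_TWH = 115          # approximate annual electricity use of a country like the Netherlands, in TWh

share = SMALL_COUNTRY_TWH / WORLD_ELECTRICITY_TWH
print(f"A Netherlands-sized electricity demand is about {share:.1%} of global consumption")
# Prints roughly 0.5%, consistent with the worst-case figure reported for AI.
```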
