In this week’s Roundup: will Elon sink Twitter? Redundancies at Meta, the problem with the software that moderates video content online, and what happens when America and China fight for control of AI.
Well, that didn’t take long, did it? The remaining staff at Twitter are being urged to “seek whistleblower protection” if they have qualms about what is being asked of them, according to this article by Alex Heath at The Verge.
Even by the lowest of low standards, Musk’s short tenure at the helm of Twitter has been an absolute dumpster fire. Users have left the platform, advertisers are staying away, charging for blue ticks has turned out not to be the greatest idea, and Musk himself has revealed a “massive” revenue loss at the company – which isn’t profitable anyway.
Now it has emerged that Musk’s changes to the platform have not passed through the company’s own data governance procedures, thereby potentially placing the company at odds with the Federal Trade Commission.
And it’s not just Twitter that’s having problems. Mark Zuckerberg has announced that 11,000 people are to lose their jobs at Meta, as per this article from The Guardian’s Alex Hern, in the largest restructuring the company has ever undergone.
The layoffs, which amount to 13% of the company’s entire workforce, come after Zuckerberg bet big that online habits developed during the COVID-19 pandemic would outlast it. Unfortunately, with online activity back to pre-pandemic levels and the Metaverse continuing to drain the company’s resources, Meta finds itself significantly over-extended.
Nonetheless, Zuckerberg appears committed to the development of the Metaverse, describing the project as a “high-priority growth area.”
It turns out that firing half of your workforce while trying to roll out every new feature you can think of isn’t a great idea.
And it has taken just a few days to show how bad an idea it is. Not only has Twitter had to scramble to un-fire some of the people it fired, but the platform is starting to show signs of wear and tear – and this, says the MIT Tech Review’s Chris Stokel-Walker, could be a sign of much worse things to come.
If the recent past has taught us anything, it’s that content moderation on social media ain’t up to much.
Now this could be for a couple of reasons: a) companies make an active decision to promote material that keeps people engaged for longer, which sometimes happens to be harmful content; or b) understanding the edge cases of harmful content is beyond the ability of the AI systems designed to catch it.
This is the problem outlined by Arthur Holland Michel in The Atlantic. As video becomes the focus for more social media platforms, developing AI sophisticated enough to understand the nuances of human expression becomes even more pressing. And as things stand, we’re nowhere near.
A few months ago, we noted that Meta had developed a tool to translate between 200 different languages. Now Google has, in theory at least, gone one step further, announcing plans for a language model capable of supporting 1,000. That’s according to this article from James Vincent at The Verge.
“Supporting,” here, means that the language model will have many different uses beyond translation.
As vice president of research at Google AI (and Echobox technical advisor) Zoubin Ghahramani explains, “One of the really interesting things about large language models and language research in general is that they can do lots and lots of different tasks, […] the same language model can turn commands for a robot into code; it can solve maths problems; it can do translation. The really interesting things about language models is they’re becoming repositories of a lot of knowledge, and by probing them in different ways you can get to different bits of useful functionality.”
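To make Ghahramani’s point a little more concrete: multi-task models in the T5 family (a real, published example of this design) are steered towards different tasks purely by a text prefix on the input – the weights never change, only the prompt does. The minimal sketch below shows the idea; the `translate` and `summarize` prefixes follow T5’s actual conventions, while the `solve` prefix is a made-up placeholder for the maths-problem case Ghahramani mentions.

```python
# Sketch: "probing" one multi-task language model in different ways.
# A T5-style model routes tasks via a text prefix on the input; the
# same weights handle translation, summarisation, and more.

def build_prompt(task: str, text: str) -> str:
    """Wrap raw input text in a task prefix for a multi-task model."""
    prefixes = {
        "translate": "translate English to German: ",  # real T5 prefix
        "summarize": "summarize: ",                    # real T5 prefix
        "solve": "solve: ",  # hypothetical prefix, for illustration only
    }
    if task not in prefixes:
        raise ValueError(f"unknown task: {task}")
    return prefixes[task] + text

# The same model, probed three different ways:
print(build_prompt("translate", "Hello, world"))
print(build_prompt("summarize", "A long news article..."))
print(build_prompt("solve", "What is 12 * 7?"))
```

In a real pipeline each of these strings would be fed to the same model checkpoint, which is what makes such models, in Ghahramani’s phrase, “repositories of a lot of knowledge.”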
What happens when old-school realpolitik meets 21st-century technology? As Protocol’s Kate Kaye has it… it’s complicated.
A borderless, collaborative approach to AI has led to developments in the technology taking place at great speed. But with the US and China struggling for competitive advantage, this open-source approach could be endangered.
Thing is, in today’s ever-more interconnected world, taking your own tech “in-house,” as it were, is easier said than done. Open-source systems are the basis of many of AI’s biggest advances, with teams from some of the world’s largest corporations frequently working with one another and building on each other’s technologies.
In the end, trying to regulate open-source AI might be like trying to regulate the tide.