In this week’s roundup: what’s the difference between you and a Brazilian soccer star according to Facebook? Why is boring bad business for the metaverse? And what does the latest US-EU Trade and Technology Council meeting mean for AI? Find out below.
In another setback for Meta’s ad business, the EU will stop the company from using personal data to target ads without explicit consent, according to Al Jazeera.
The ruling bans Meta’s apps from using terms of service agreements to justify the collection of personal data for targeted adverts based on behavior within their own apps. Currently, Meta only allows users to opt in or out of ads targeted on the basis of a user’s general internet activity.
And that’s not the only bad news in store for Meta’s businesses. Dan Milmo and Alex Hern of The Guardian report that Facebook’s “Supreme Court” (its oversight board) has found that the company’s “cross-check” system appears more aligned with its business interests than promoting freedom of speech or human rights.
While content moderation for the rest of us is performed by machines, cross-check is Facebook’s moderation policy for certain high-profile users: posts that might otherwise be taken down are instead reviewed by humans against the full range of Facebook’s moderation policies, and are allowed to remain in place while that review is pending.
Facebook has now admitted that some of the entities on the cross-check list were placed there because they generated a lot of income for the company, and that some were granted special treatment, such as their accounts staying active after flagrant breaches of the rules.
Meta’s metaverse project, as we all know, is an enormous money drain on the company, with $100bn pumped into it already. The results are… underwhelming.
So what’s actually going on here? As The Guardian’s Steve Rose makes clear, Meta’s problem isn’t necessarily a lack of interest in the concept of a metaverse, per se. In fact, some companies are already attracting millions of users to their own metaverse every month – Minecraft, for example, or Roblox (me neither).
The problem with Meta’s metaverse seems to be that it’s just a bit dull, and whether the enterprise has legs or not will depend, to a great extent, on whether Meta can solve that problem.
The attentive among you may have noticed that Twitter is having a few difficulties at the moment. But for those companies that figure Twitter’s pain as their gain, the road ahead isn’t straightforward. For all of its problems, Twitter still has a large, dedicated user base that isn’t about to quit any time soon, and without a critical mass of users on other upstart platforms, wide-scale desertion of Twitter looks a long way off.
Some of these would-be competitors, write Mitchell Clark and Emma Roth of The Verge, are focused primarily on the quality of discourse on their platforms as their main USP. Some are merely Twitter clones with supposedly better moderation policies and less toxic leadership. And some, like Mastodon, are decentralized and run by their users. All of them, however, will struggle to match the scale of Twitter without offering as simple and streamlined an experience as it does.
Everyone’s talking about ChatGPT, yet another chatbot released to the public, and by the sound of things it’s decent. Not only does the thing work, but it’s supposed to be pretty hard to make it write the kinds of things you’d now expect to hear from Ye, which is a blessing.
Except that might not be strictly true. The company behind the chatbot, OpenAI, says that although unspecified actions have been taken to stop the AI producing harmful responses, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” which is presumably a nice way of saying that if you push it, it might come out with some stuff about the temperature of burning jet fuel. How right they were.
According to this article by Sam Biddle in The Intercept, it’s not that hard to make the chatbot come up with some pretty terrifying things.
The US-EU Trade and Technology Council ended a few days ago, resulting in a joint statement opposing the use of AI systems that are incompatible with human rights.
In this article for the Brookings Institution, Benjamin Cedric Larsen of the World Economic Forum writes that this implicit criticism of the Chinese and Russian states’ use of coercive technologies could be the start of a process of “technological decoupling”, whereby states look to establish technological self-sufficiency.
This bifurcation, based on the relative importance of human rights, could reduce the interoperability of AI systems, and make the alignment of regulatory approaches more difficult.