In this week’s Roundup: Meta’s Canadian news ban doesn’t seem to be affecting user numbers. Is YouTube collecting kids’ data? And could Google’s new watermark be the solution to deepfakes?
News
A report from Reuters suggests that Facebook usage in Canada has been unaffected by the recent news ban implemented by Meta.
Katie Paul and Steve Scherer write that “Daily active users of Facebook and time spent on the app in Canada have stayed roughly unchanged.”
This is despite the apparent popularity of news content on the platform. Data released by Meta shows that 13 of the 20 most-viewed domains in the US are news publishers.
Changes to the terms and conditions for X Premium users will allow the company to collect biometric, employment, and education data, writes Chris Vallance at the BBC.
X claims the biometric data will be used to verify a user’s identity, should they opt in: users will have the option to submit a selfie or official photo ID. The collection of employment and education data, meanwhile, is intended to help X branch out into recruitment services.
Analysis
We all know that ad revenue at X has taken quite the hit since Elon Musk took over. This is because businesses are understandably wary of advertising on a platform whose content moderation policies might mean their ads appear next to particularly distasteful material.
Recently, news broke that Musk’s next project at X was to disable the “block” feature. Understandably, some people have concerns.
In this article for The Conversation, Denitsa Dineva, a lecturer at Cardiff University, outlines how, without recourse to blocking, business accounts may be forced into a moderation role.
In 2019, Google reached a settlement with the FTC that required YouTube to stop collecting data from children for targeted ads. The result was the “made for kids” tag that creators are required to tick when uploading content. Theoretically, tagging a video as “made for kids” disables any tracking. But recent reports have cast doubt on this, exposing Alphabet to potential legal trouble.
Sara Morrison at Vox reports that two separate pieces of research suggest the company is in fact collecting data from children viewing “made for kids” videos, and using this data for targeted ads.
Google has denied the reports’ claims, but, if they turn out to be true, could face fines that run into the billions of dollars.
AI
Google thinks it has an answer to the problem posed by deepfakes: watermarks. Wikipedia reliably informs me that watermarks have been around since the 13th century, but Google’s version gives them a bit of a glow-up.
The Verge’s David Pierce writes that the mark is invisible and can’t be cropped out, meaning that it is hopefully pretty resistant to tampering.
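Google hasn’t published the details of how its watermark actually works, but the general idea of hiding an imperceptible signal in an image’s pixels is decades old and easy to illustrate. Below is a minimal toy sketch in Python (NumPy only) of classic least-significant-bit watermarking, keyed by a shared secret seed. To be clear: this is not Google’s method. Unlike Google’s mark, this toy version would not survive cropping or re-encoding, and every name and value in it is made up for illustration.

```python
# Toy illustration of invisible image watermarking via least-significant-bit
# (LSB) embedding. NOT Google's method: Google's watermark is undisclosed and
# reportedly survives cropping and editing, which this sketch does not.
import numpy as np

SECRET_SEED = 42  # hypothetical shared secret between embedder and detector

def embed_watermark(image: np.ndarray, payload: str) -> np.ndarray:
    """Hide `payload` in the least significant bits of secretly chosen pixels."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy, safe to modify
    rng = np.random.default_rng(SECRET_SEED)
    positions = rng.choice(flat.size, size=bits.size, replace=False)
    flat[positions] = (flat[positions] & 0xFE) | bits  # overwrite each LSB
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_chars: int) -> str:
    """Recover an n_chars payload by regenerating the same pixel positions."""
    flat = image.flatten()
    rng = np.random.default_rng(SECRET_SEED)
    positions = rng.choice(flat.size, size=n_chars * 8, replace=False)
    return np.packbits(flat[positions] & 1).tobytes().decode()

# Demo on a random "image": each selected pixel changes by at most 1, so the
# mark is imperceptible, yet it is reliably recoverable with the seed.
img = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
marked = embed_watermark(img, "AI-generated")
assert extract_watermark(marked, len("AI-generated")) == "AI-generated"
```

The secret seed is what makes even this toy version mildly tamper-resistant: without it, an attacker doesn’t know which pixels carry the payload. Robust schemes like Google’s presumably go much further, spreading the signal across the whole image rather than individual bits.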
Meanwhile, the race to produce tech that mitigates the risk of fraudulent AI-generated images is in full swing, with Adobe, Meta and OpenAI just a few of the names working on their own separate projects.
Consider Naruto the selfie-taking macaque. In 2015, PETA took British photographer David J. Slater and the publishing company Blurb to court over Naruto’s selfie, claiming that the macaque should be the rightful owner of the image’s copyright. In that case, the judge ruled that animals aren’t eligible to claim copyright, with that honor going only to people and legal entities.
Now, consider Stephen Thaler. An inventor and AI researcher, Thaler has created multiple AIs that have themselves produced artworks and inventions. He is also behind multiple lawsuits aiming to have the copyrights for this work granted to these AI systems.
The basis for these lawsuits, as Wired’s Will Bedingfield writes, is Thaler’s contention that these AIs are sentient. If correct, this would mean that they fulfill the key “authorship” requirement that many copyright regimes demand.