In this week’s Roundup: the latest on Elon Musk’s Twitter trial, a Supreme Court case that could rewrite the rules of the internet, and why your voice could say more about you than you think.
News
You really couldn’t make this stuff up. The BBC reports that the Elon Musk/Twitter feud has become so toxic that even agreeing to do what you are being sued to do is unacceptable to the Twitter board.
Musk has now purportedly agreed to close his purchase of Twitter at the original price, but such is the level of distrust between the two parties that Twitter are still insisting on taking the matter to trial. In court filings, the company’s lawyers argue that halting the legal proceedings would simply enable more mischief and chicanery from Musk.
Musk, for his part, has other irons in the fire. Only a few days ago he was forced to deny reports that he’d discussed both the use of nuclear weapons and the terms of a negotiated settlement with Vladimir Putin.
More changes to Facebook have been announced. According to Andrew Paul at Popular Science, the platform will now add “show more” and “show less” buttons that will theoretically enable the Feed algorithm to get a better understanding of what you like and, by extension, what will keep you clicking and tapping your little fingers around the platform for longer.
It’s hardly a new idea and, as the article notes, there’s no guarantee that the algorithm will actually heed these signals. But it’s another step in Facebook’s growing reliance on algorithmic recommendation.
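For the curious, here’s a rough sketch of how that kind of explicit feedback could plug into a feed-ranking algorithm: each “show more” or “show less” click nudges a per-topic weight, which in turn scales how candidate posts are scored. Everything here – the names, the weights, the scoring formula – is an assumption for illustration; Facebook’s actual system is vastly more complex and undisclosed.

```python
# Illustrative sketch only -- not Facebook's API or ranking model.
# Explicit feedback nudges a per-topic weight used to scale post scores.

FEEDBACK_BOOST = {"show_more": 1.5, "show_less": 0.2}

def rank_feed(posts, topic_weights):
    """Order candidate posts by predicted engagement scaled by topic weight."""
    return sorted(
        posts,
        key=lambda p: p["predicted_engagement"] * topic_weights.get(p["topic"], 1.0),
        reverse=True,
    )

def record_feedback(topic_weights, topic, signal):
    """Fold an explicit signal into the topic weight, bounded to avoid runaway."""
    new_weight = topic_weights.get(topic, 1.0) * FEEDBACK_BOOST[signal]
    topic_weights[topic] = min(max(new_weight, 0.05), 10.0)

weights = {}
record_feedback(weights, "cooking", "show_more")   # user clicked "show more"
record_feedback(weights, "politics", "show_less")  # user clicked "show less"

posts = [
    {"topic": "cooking", "predicted_engagement": 0.4},
    {"topic": "politics", "predicted_engagement": 0.7},
]
print(rank_feed(posts, weights))  # cooking (0.6) now outranks politics (0.14)
```

The point the article makes holds even in this toy version: the button only adjusts one input among many, so there’s nothing forcing the final ranking to respect your stated preference once other signals are in the mix.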
Analysis
Another day, another set of proposed regulations to rein in the powers of Big Tech… kinda.
The impressively titled AI Bill of Rights proposed by the Biden Administration won’t, in actuality, be legally binding on private corporations, according to this article by Khari Johnson in Wired. Instead, the Bill is a white paper that may guide how the federal government deploys such technology in the future.
As we’ve covered here before, significant legislation concerning Tech and AI is in the works in both the US and the EU – seemingly, this ain’t one.
“A case that could rewrite the rules of the internet” is perhaps a touch hyperbolic, but you get the picture. The US Supreme Court has agreed to hear a case that, if the plaintiffs succeed, could make tech companies liable not for the content which they host, but for the content which their algorithms promote.
The case has been brought by the family of Nohemi Gonzalez, who was killed in the 2015 terrorist attacks in Paris. The suit claims that Google, as owners of YouTube, were negligent in developing and deploying an algorithm which served extremist videos to people who were unlikely to have seen them otherwise.
The success of the case will depend on how the court interprets Section 230 of the Communications Decency Act, which grants tech companies immunity from liability for the content their users post. For the Gonzalezes, there is a clear distinction to be made between hosting content and actively promoting it, although this is a distinction which two lower courts have been unwilling to make.
AI
AI may soon be used to analyze the sound of your voice to diagnose illnesses, as per this article from NPR’s Carmen Molina Acosta and Lise Weiner.
The research, funded by the National Institutes of Health, seeks to understand how different diseases or ailments – a stroke, say, or Parkinson’s disease – can be detected in minute changes to speech.
The hope is that this research will ultimately be implemented as an app that can flag signs of illness and prompt people to seek medical help.
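To make the idea concrete, here’s a minimal sketch of the kind of pipeline voice-biomarker research typically relies on: turn a recording into a fixed-length set of acoustic features (pitch, timbre, and their variability) and train a classifier on labelled clinical samples. The specific features and model below are assumptions chosen for clarity, not the NIH project’s actual method.

```python
# Illustrative voice-biomarker pipeline -- assumptions throughout,
# not the NIH-funded project's actual approach.

import numpy as np
import librosa                      # audio loading and feature extraction
from sklearn.linear_model import LogisticRegression

def voice_features(path):
    """Summarize a recording as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre/articulation
    pitch = librosa.yin(y, fmin=60, fmax=400, sr=sr)    # fundamental frequency
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),            # average + variability
        [np.nanmean(pitch), np.nanstd(pitch)],          # pitch level + tremor-like variation
    ])

# Training would need labelled clinical recordings (hypothetical names below):
# X = np.stack([voice_features(f) for f in recordings])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
# risk = clf.predict_proba(voice_features("new_sample.wav").reshape(1, -1))
```

The intuition is that conditions like Parkinson’s or a stroke subtly alter articulation and pitch stability, so the “minute changes to speech” the researchers describe show up as shifts in exactly these kinds of statistics.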

Politicians in the UK entered the uncanny valley recently, as AI-powered robot artist Ai-Da appeared before the Communications and Digital Committee.
Answering questions supplied before the hearing, Ai-Da – named after the mathematician and computer pioneer Ada Lovelace – discussed her artistic process and the future of the discipline.