In this week’s Roundup: Elon takes Media Matters to court, revelations from the DOJ’s antitrust case against Google and just why did Sam Altman get the sack from OpenAI?
Joining the Center for Countering Digital Hate on Elon Musk’s naughty list this Christmas is media watchdog Media Matters, reports the BBC’s Max Matza.
Musk has filed a lawsuit claiming that the organization has “manufactured side-by-side images” showing advertisements next to bigoted and hateful posts, all in an attempt to drive advertisers from the platform.
Needless to say, Media Matters contends that the suit is without merit.
Meta’s president of global affairs, Nick Clegg, has announced that the company will allow a select group of researchers access to its new Content Library and API to promote transparency.
According to MIT Tech Review’s Tate Ryan-Mosley, the tool provides “near-real-time data about pages, posts, groups, and events on Facebook and creator and business accounts on Instagram, as well as the associated numbers of reactions, shares, comments, and post view counts.”
This data is all publicly available, but the tool will make it easier for researchers to analyze it.
The DOJ’s antitrust lawsuit against Google is nearing its resolution. With that in mind, an article by Vox’s Sara Morrison outlines some of the things we’ve learned from the trial.
For instance, it emerged that in 2021, Google paid $26.3 billion to be the default search for companies like Apple, Verizon and Samsung. Also, Google “tweaked search ad auctions in ways that may increase prices to advertisers by 5 or even 10 percent so that Google could meet its revenue goals.”
If all this firing and rehiring is testament to anything (apart from OpenAI’s messy organizational structure) it’s the chaos that can reign in an industry without clear regulations in place or any meaningful transparency. As Blake Montgomery of The Guardian writes, “the development of cutting-edge AI rests in the hands of a small, secretive cadre that operates behind closed doors.”
So, why did Sam Altman get canned in the first place?
The exact reasons for his initial departure haven’t been made completely clear, though a lack of candor was cited. In the following days, however, reports emerged that researchers had raised the alarm over powerful new technology in development at the company shortly before Altman was fired. Anna Tong, Jeffrey Dastin and Krystal Hu of Reuters write that some staff had concerns about the potential impact of the technology, which seems to have acquired the ability to do math.
This is more impressive (and potentially sinister) than it sounds, apparently, which led to ethical concerns.
In a similar vein, Katyanna Quach at The Register reports that Meta has decided to do away with its AI ethics team.
The Responsible AI team was created in 2019 before being folded into the company’s Social Impact unit. Now, it’s getting the chop.
A spokesman for the company was at pains to clarify that, obviously, binning off the AI ethics team didn’t mean Meta wasn’t interested in AI ethics.