Digital Bridge: Debunking AI hype — Ex ante competition — Twitter’s data problem

Published: Thursday, 01 June 2023 11:41

POLITICO’s weekly transatlantic tech newsletter for global technology elites and political influencers.




By MARK SCOTT




VÄLKOMMEN TILL DIGITAL BRIDGE. This week’s newsletter comes from Luleå, Sweden, where U.S. and EU officials just wrapped up the latest Trade and Technology Council summit. I’m Mark Scott, POLITICO’s chief technology correspondent, and I bring you clear evidence of mass collusion between Google’s rivals to take on the search giant’s dominance.


There’s a little for everyone this week. Buckle up:


— Amid the hype around artificial intelligence, let’s acknowledge what many are afraid to say: Most of us don’t know what we’re talking about.


— The competition world wants to rewrite rules for the digital world to get ahead of abuses. That’s not exactly going to plan.


— Twitter’s latest omnishambles: strong-arming outside researchers over how they access the tech giant’s treasure trove of data.


DON’T GET SUCKERED BY THE AI HYPE MACHINE


I HAVE A CONFESSION TO MAKE. I am not an expert in artificial intelligence. Thing is, that’s also true for almost everyone who has come out of the woodwork to expound on the benefits or dangers of a technology whose history goes back decades, but which has captured the public’s imagination ever since OpenAI’s ChatGPT (the company itself has been around for almost a decade) started to grab people’s attention last fall.


In the last six months, we’ve seen multiple open letters calling AI a greater threat than climate change; politicians falling over themselves to be seen to be doing something (anything!) about a technology they barely understand; and the rest of us struggling to work out whether artificial intelligence — and so-called generative AI in particular — is here to save us or to accelerate what climate scientists have been warning about for years: that we’re heading off a cliff without any brakes.


Within this debate, two camps are emerging: the long-termists versus the short-termists (bear with me). On one side are those making public calls (looking at you, OpenAI’s Sam Altman) for new global regulatory agencies and wholesale AI legislation to curb the risks that so-called general AI, some future all-knowing technology akin to a human brain, may one day pose. For these tech futurists, that existential threat — a future AI system taking over the world — is fundamentally more important than the short-term issues around data bias, algorithmic transparency and a litany of other wonky topics.


Not surprisingly, the other camp disagrees. For them, there are enough AI uses already causing harm (automated decision-making gone wrong; skewed datasets that hurt minority groups; the concentration of power within a few AI giants that have the resources to compete globally) that need to be tackled — and tackled now. Why worry about Skynet, they argue, if people’s social security benefits or housing allowances are getting screwed by opaque automated systems that are neither accountable nor understandable?


In truth, the task confronting politicians and policymakers is to find a middle road between the two camps. Call it the “trees and woods” theory, in which officials must combat the existing problems bubbling up around AI while also keeping an eye on the long-term systemic risks this emerging technology may represent over the next 10 years. It’s not an enviable place to be. That’s especially true when lawmakers, many with little to no understanding of what goes into a large language model or how algorithmic audits actually operate, are already being bombarded by companies urging regulation (note: they only want pro-business regulation).


Luckily, this is not completely new. Again, I’m no AI expert (anyone who says they are should be taken with a massive grain of salt). But there’s existing regulatory and technical know-how, from data protection standards, pharmaceutical regulation, banking rules and a litany of other industries, that could be brought to bear. Governments are desperate to be seen as responding to a technology that often feels more like science fiction than the mundane questions it actually raises: what data goes into AI models, how you test new systems in a controllable manner, and what global oversight looks like for a cross-border problem.


My personal opinion: It’s better to focus on the short-term concerns than to get lost in long-term scaremongering about a technology that will likely look a lot different in three years, let alone two decades. Case in point: Current AI models, because of the skewed data they were trained on, favor me, a white man living in a Western country, over those from minority groups. That would be a clear place to start if we’re looking to reduce harms by making such systems more equitable. I get that it’s more interesting to focus on the once-in-a-generation threat that AI may represent. But for policymakers, the job is to look past the hype and focus on effecting change in the here and now.


EX ANTE REGULATION: THE PROBLEM WITH PREDICTING THE FUTURE


IN HIPSTER ANTITRUST CIRCLES (if such things exist), the craze over the last five years has been how to overhaul the enforcement of digital markets to stop a select few (mostly American) companies from dominating everyone else. Regulators looked at their current rulebooks — which focus on intervening only after harm has materialized — and figured something was amiss. They point to Meta’s $19 billion deal for WhatsApp in 2014 as a clear sign that something was wrong. While that deal involved little, if any, revenue being generated by the internet messenger, the acquisition allowed Facebook (as the company was then called) to corner the market in a fast-growing sector.


With such cases in mind, policymakers have been on a crusade to implement so-called ex ante regulation, or rules that would allow regulators to intervene in emerging industries before a legacy company buys its way to domination. That’s the central component of Europe’s digital competition overhaul, key to what the United Kingdom is trying to do with its proposals, and was also at the heart of stalled bills in the U.S. Congress. Others like Australia and South Korea are mulling similar changes to their competition regimes.


Yet two contrasting decisions, by the U.K.’s Competition and Markets Authority and the European Commission, show the trouble enforcement agencies run into when they try to predict the future. Both jurisdictions stepped in to review Microsoft’s $69 billion takeover of Activision, the video-gaming giant, over concerns the deal might hamper competition and hobble consumer choice. In part, both relied on this new theory of harm — that regulators should step in before abusive behavior occurs — to determine whether to approve the deal. It was a clear test of whether antitrust agencies could accurately determine where future harm would come from.


And yet, the Europeans and British went in completely different directions. In Brussels, the European Commission’s antitrust experts approved the deal, arguing Microsoft’s purchase of Activision would increase competition in the console gaming market that is currently dominated by Sony and its PlayStation device. The U.S. tech giant agreed to freely license Activision’s blockbuster titles like Call of Duty to others for 10 years, and the European Union believed the deal would lead to better outcomes for (console) gamers.


The U.K. went the other way. In its decision, London blocked the acquisition on the grounds that it would give Microsoft an overly dominant position in the fast-growing cloud gaming sector, in which titles are streamed to players over the internet. That market is significantly smaller than console gaming, based on annual revenue. But it’s growing significantly faster, and the Brits viewed Microsoft’s takeover of Activision as a direct threat to future rivals — akin to a regulator stopping Meta’s deal for WhatsApp on the grounds of protecting consumers in the long term.


So who’s right? Good question. Both agencies used existing powers (technically, their new ex ante powers haven’t kicked in yet) to look into the future and judge which market mattered more: console gaming or cloud gaming. Regulators had slightly different evidence bases to rely on. But their goal was the same: protecting consumers and giving rivals a chance to compete. They just came to completely different conclusions, and — given Microsoft’s need to get the U.S., the U.K. and the EU to approve the deal — that will likely kill it. FWIW, the U.S. Federal Trade Commission sued Microsoft in December to block the acquisition, too.


Those in the industry view the Microsoft-Activision case as a clear sign that regulators will never be good at predicting the future. Those who favor ex ante regulation argue it’s a work in progress and only time will tell whether the U.K. or the EU was right in their contrasting assessments. But one thing is clear. Anyone who thinks revamped competition rules will solve all the problems within digital markets should have a hard look in the mirror. In truth, no one — neither enforcers nor company executives — can truly know what is coming around the corner. Having maximum flexibility in these new rules (including the willingness to admit enforcement decisions were wrong) would be a good place to start.




TWITTER JUST CAN’T HELP ITSELF


FIRST, TWITTER DROPPED SOME OF ITS COMMITMENTS to tackle disinformation. Now, the Blue Bird is playing hardball with outside academics who track foreign interference, hate speech and other content issues on the social network. The latest example: The company’s executives have told several independent researchers they must delete all Twitter data collected via its so-called decahose, a discounted data-access stream (roughly a 10 percent random sample of all tweets) that academics use to track what happens on the platform. If they want to hold on to that information, which often dates back years and represents arguably the only outside oversight of what has happened on Twitter, these groups must sign up for new data-access packages that range from $42,000 to $210,000 a month, according to multiple emails between researchers and Twitter that were shared with Digital Bridge.


This is part of Twitter’s effort to do what any business is supposed to do: make money. Maintaining the digital infrastructure to permit such outside data access is costly, and represents a downside risk for Twitter, as most of this research is dedicated to finding problems on the platform. But such work also represents a fundamental part of allowing everyone to understand what’s happening on what’s still a social network central to politics — especially ahead of next year’s election cycle. Academics, who were granted anonymity to discuss internal meetings with Twitter, said they can’t afford to pay up to $210,000 a month for worse access than they currently receive (for a much lower price) via Twitter’s decahose. The deletion orders are expected to kick in over the next six weeks.
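For context on what that access actually looks like on the research side, here’s a minimal sketch, in Python, of consuming a decahose-style stream and archiving it locally. The endpoint URL, token and file name are placeholders for illustration, not Twitter’s actual enterprise API.

```python
import json

import requests  # third-party HTTP library: pip install requests

# Hypothetical decahose-style endpoint and token -- placeholders for
# illustration, not Twitter's real enterprise URLs or auth flow.
STREAM_URL = "https://stream.example.com/decahose"
BEARER_TOKEN = "YOUR_TOKEN_HERE"


def archive_stream(url: str, token: str, out_path: str = "decahose.jsonl") -> None:
    """Read a newline-delimited JSON stream of posts and append each record to disk."""
    headers = {"Authorization": f"Bearer {token}"}
    with requests.get(url, headers=headers, stream=True, timeout=90) as resp:
        resp.raise_for_status()
        with open(out_path, "a", encoding="utf-8") as archive:
            for line in resp.iter_lines():
                if not line:  # skip keep-alive newlines
                    continue
                post = json.loads(line)
                # Researchers typically keep the raw record; years of files
                # like this are what Twitter is now asking them to delete.
                archive.write(json.dumps(post) + "\n")


if __name__ == "__main__":
    archive_stream(STREAM_URL, BEARER_TOKEN)
```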


WONK OF THE WEEK


WE’RE BACK IN WASHINGTON THIS WEEK, where Anne Neuberger is the White House’s deputy national security advisor for cyber and emerging technology. There’s been major churn at the National Security Council recently, so she’s now one of the longest-serving officials.


This isn’t new territory for the Columbia University graduate, who spent more than a decade working on cybersecurity issues within the U.S. National Security Agency, eventually being appointed as the agency’s director of its cybersecurity directorate before joining President Joe Biden’s administration in early 2021.


“In the intelligence community, we put a tremendous focus on countries, what their plans are and how they use cyber to achieve their strategic agendas, and each one does things a bit different because their strategic objectives are a bit different,” Neuberger told a cybersecurity conference while working at the NSA in 2019.


THEY SAID WHAT, NOW?


“The United States and the European Union are committed to deepening our cooperation on technology issues, including on artificial intelligence (AI), 6G, online platforms and quantum,” according to the communiqué from the latest EU-U.S. Trade and Technology Council summit. “We are committed to make the most of the potential of emerging technologies, while at the same time limiting the challenges they pose to universal human rights and shared democratic values.”


WHAT I’M READING


— Brandon Silverman, the co-founder of CrowdTangle, does a deep dive into TikTok’s transparency efforts to give outsiders a better understanding of what is going on within the Chinese-owned social media giant.


— The geopolitical issues with who controls large language models for AI systems are too important to leave to computer scientists alone, and require politicians and social scientists to weigh in, too, argues Hannes Bajohr, a researcher at the University of Basel.


— Alex Joel, a former senior U.S. official and current American University professor, outlines why he believes the White House’s executive order on transatlantic data flows meets the requirements under EU law to provide the U.S. with a so-called data adequacy agreement.


— Daphne Keller, director of Stanford University’s program on platform regulation, goes through the practicalities of accessing social media data, and what pitfalls need to be avoided as regulators worldwide and those within the outside research community clamor for more access.


— Data, and how it is collected, is the undervalued and deglamorized aspect of AI that needs to be better understood to avoid inserting existing societal biases into how this technology is developed, according to research from Google.


— Kai Zenner, an assistant to a leading European politician working on the bloc’s AI act, has everything you need to know about where that legislative process stands, including more documents than you can shake a stick at.