The European Union and United States put out a joint statement Friday affirming a desire to increase cooperation on artificial intelligence — including on AI safety and governance — and, more broadly, signaling an intent to collaborate across a number of other tech issues, such as developing standards for digital identities and applying pressure on platforms to defend human rights.
As we reported Wednesday, this is the fruit of the sixth (and possibly last) meeting of the EU-U.S. Trade and Technology Council (TTC), a body that’s been meeting since 2021 in a bid to rebuild transatlantic relations battered by the Trump presidency.
Given the possibility of Donald Trump being returned to the White House in US presidential elections later this year, it's not clear how much EU-US cooperation on AI, or any other strategic tech area, will actually materialize in the coming years.
But, under the current political make-up on both sides of the Atlantic, the will to push for closer alignment across a range of tech issues has gained strength. There is also a mutual desire to get this message heard — hence today's joint statement — which is itself, perhaps, also a wider appeal to each side's voters to opt for a collaborative program, rather than a destructive one, come election time.
An AI Dialogue
In a section of the joint statement focused on AI, filed under a heading of “Advancing Transatlantic Leadership on Critical and Emerging Technologies”, the pair write that they “reaffirm our commitment to a risk-based approach to artificial intelligence… and to advancing safe, secure, and trustworthy AI technologies”.
“We encourage advanced AI developers in the United States and Europe to further the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems which complements our respective governance and regulatory systems,” the statement also reads, referencing a set of risk-based recommendations that came out of G7 discussions on AI last year.
The main development out of the sixth TTC meeting appears to be a commitment from the EU and US AI oversight bodies, the European AI Office and the US AI Safety Institute, to set up what's couched as "a Dialogue". The aim is to foster deeper collaboration between the two institutions, with a particular focus on encouraging the sharing of scientific information among their respective AI research ecosystems.
Topics highlighted here include benchmarks, potential risks, and future technological trends.
“This cooperation will contribute to making progress with the implementation of the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, which is essential to minimise divergence as appropriate in our respective emerging AI governance and regulatory systems, and to cooperate on interoperable and international standards,” the two sides go on to suggest.
The statement also flags an updated version of a list of key AI terms, with “mutually accepted joint definitions”, as another outcome from ongoing stakeholder talks flowing from the TTC.
Agreement on definitions will be a key piece of the puzzle to support work towards AI standardization.
A third element of what's been agreed by the EU and US on AI aims to drive collaborative research into applying machine learning to beneficial use cases, such as improving healthcare outcomes, boosting agriculture and tackling climate change, with a particular focus on sustainable development. In a briefing with journalists earlier this week, a senior Commission official suggested this strand of joint work will focus on bringing AI advancements to developing countries and the Global South.
“We are advancing on the promise of AI for sustainable development in our bilateral relationship through joint research cooperation as part of the Administrative Arrangement on Artificial Intelligence and computing to address global challenges for the public good,” the joint statement reads. “Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction. We are also making constructive progress in health and agriculture.”
In addition, an overview document on the collaboration around AI for the public good was published Friday. Per the document, multidisciplinary teams from the EU and US have spent over 100 hours in scientific meetings over the past half-year “discussing how to advance applications of AI in on-going projects and workstreams”.
“The collaboration is making positive strides in a number of areas in relation to challenges like energy optimisation, emergency response, urban reconstruction, and extreme weather and climate forecasting,” it continues, adding: “In the coming months, scientific experts and ecosystems in the EU and the United States intend to continue to advance their collaboration and present innovative research worldwide. This will unlock the power of AI to address global challenges.”
According to the joint statement, there is a desire to expand collaboration efforts in this area by adding more global partners.
“We will continue to explore opportunities with our partners in the United Kingdom, Canada, and Germany in the AI for Development Donor Partnership to accelerate and align our foreign assistance in Africa to support educators, entrepreneurs, and ordinary citizens to harness the promise of AI,” the EU and US note.
On platforms, an area where the EU is enforcing recently passed, wide-ranging legislation — including laws like the Digital Services Act (DSA) and Digital Markets Act — the two sides are united in calling for Big Tech to take protecting “information integrity” seriously.
The joint statement refers to 2024 as "a Pivotal Year for Democratic Resilience", on account of the number of elections being held around the world. It also includes an explicit warning about threats posed by AI-generated disinformation, saying the two sides "share the concern that malign use of AI applications, such as the creation of harmful 'deepfakes,' poses new risks, including to further the spread and targeting of foreign information manipulation and interference".
It goes on to discuss a number of areas of ongoing EU-US cooperation on platform governance and includes a joint call for platforms to do more to support researchers’ access to data — especially for the study of societal risks (something the EU’s DSA makes a legal requirement for larger platforms).
On e-identity, the statement refers to ongoing collaboration on standards work, adding: “The next phase of this project will focus on identifying potential use cases for transatlantic interoperability and cooperation with a view toward enabling the cross-border use of digital identities and wallets.”
Other areas of cooperation the statement covers include clean energy, quantum and 6G.