28/07/2023

AI: The Balancing Act Between Technological Advancement and Global Governance

Milo Rignell
Fellow - AI & Emerging Technologies

Artificial Intelligence is in full bloom, and its potential is as promising as it is threatening. The initial race toward innovation that followed the launch of ChatGPT was quickly followed by warnings about the "risk of extinction" that AI could present. A number of countries view these concerns as an opportunity to leave their mark on the global governance of AI. In this article, Milo Rignell, project manager and resident fellow for emerging technologies at Institut Montaigne, describes the new balance that countries must strike between the pursuit of technological dominance, the race for leadership in AI governance, and the precautionary principle. The analysis covers the American, Chinese, British and French responses to this challenge.

The race for technological dominance

The first issue is the global race for AI. Towards the end of 2022, ChatGPT revealed the capabilities of the latest AI models to millions of users, showcasing compelling results across virtually every field, from medicine to finance – even though everyone could sense they were likely only seeing the tip of the iceberg. Goldman Sachs estimated that AI could drive a 7% (or $7 trillion) increase in global GDP over ten years. The question on everyone’s mind, from businesses and entrepreneurs to investors and heads of state, was how to catch up to OpenAI, the creator of ChatGPT, and take the lead in the intensifying AI race. In 2022, global private investment in AI reached $92 billion, a twenty-fold increase in the span of barely a decade. For governments, these advancements confirmed that AI was a strategic issue of utmost importance, not just from an economic standpoint but also from a military perspective.

The race for AI governance

Following the rapid spread of ChatGPT among the general public, coupled with the astounding leaps in performance showcased by OpenAI’s latest AI model (GPT-4), the conversation quickly turned to a second issue – the need for governance of a technology that is incredibly powerful yet still poorly understood.

Many scientists leading the charge in modern AI advancements, along with several leaders of companies at the forefront of AI innovation, have publicly voiced their concerns about the "risk of extinction from AI", adding that this "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". This is not just because such AI models could fall into the wrong hands and be exploited for malicious purposes, for example to design biological weapons and artificially induced pandemics.

It is also because the prospect of artificial general intelligence, capable of outperforming humans in all cognitive tasks – from developing new computer programs to long-term planning – confronts us with a disconcerting realization: we currently lack the means to guarantee or even evaluate the behavior of AI systems that are growing more capable and autonomous by the day. The attention garnered by the risks associated with AI has spurred a second race – a race for AI governance, to define the rules that will regulate AI systems around the world.

A trilemma between the technology race, the governance race, and the precautionary principle

The rapid pace of advancements in AI has created a trilemma for countries vying for global leadership in AI – mainly the United States and China, but also the United Kingdom and some member states of the European Union, France in particular. These countries must figure out how to simultaneously remain at the cutting edge of a highly strategic technology, lead the development of global governance standards, and mitigate the risks associated with the development of unpredictable AI systems that could jeopardize national or even global security.

Recent developments in AI have compelled these countries to reassess their strategies in light of this trilemma. They must now find a way to balance the pursuit of technological dominance, the race for leadership in AI governance, and the precautionary principle.

Historically, national strategies have prioritized technological leadership in AI, while regulation and investment in AI safety were often regarded as hindrances to these ambitions. For China, AI development was one of the cornerstones of the "Made in China 2025" industrial policy unveiled in 2015, and Beijing’s goal of becoming the world leader in AI by 2030 was a key element of its broader strategy to gain economic and military ascendancy over the United States. Washington’s initial AI strategy also placed strong emphasis on R&D and technological leadership. While China was quick to take an active interest in AI standardization, as it has for other technologies, the European Union was alone in its early push for effective regulation of the technology, as outlined in its February 2020 white paper on AI.

However, both China and the United States have recently started recalibrating their strategies to adapt to the rapidly evolving AI landscape.

For China, AI oversight is primarily motivated by domestic control considerations. A series of regulations have been introduced to control AI’s potential for disseminating information that the regime might consider threatening, including a regulation governing online content recommendation algorithms in March 2022, another on deep synthesis technology (used to generate "deepfakes") in January 2023, and most recently, in July 2023, rules to govern generative AI.

However, at higher strategic levels, decision-makers within the Chinese Communist Party are taking a hard look at the "existential risks" that artificial general intelligence could bring about.

The United States, meanwhile, continues to move forward without any real plans to regulate AI, consistent with its tradition of limited federal regulation, particularly on technological matters. The Biden administration has nonetheless laid the groundwork for regulation by publishing a "Blueprint for an AI Bill of Rights". This document, released in October 2022, sets out initial guidelines for both public and private stakeholders to promote the development of safe and responsible AI.

A new arena for innovation: AI safety

In lieu of regulation, the United States amended its industrial strategy for AI, homing in on safety and trust, fostered through an incentives-based approach built on voluntary standards, transparency, evaluation, and innovation. The latest version of Washington’s strategic plan for AI, published in May 2023, dedicates 5 of its 9 priorities to the safety and security of AI systems, stating that "long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means". Building a more robust toolkit to not only develop AI models, but also evaluate and audit their safety, is a pivotal part of this strategy. The US National Institute of Standards and Technology (NIST) is at the forefront of this mission.

In May 2023, the White House hosted a meeting on advancing responsible AI innovation. Participants included Vice President Kamala Harris and the CEOs of four American companies at the forefront of AI innovation (OpenAI, Google, Anthropic, and Microsoft) that are developing the most advanced generative AI models – sometimes referred to as "foundation models". Here again, the evaluation of these large-scale AI models was identified as a key theme, alongside their transparency and cybersecurity. The White House announced that these leading AI companies had committed to participating in a public evaluation of their AI systems at the DEF CON 31 cybersecurity conference in August 2023.

The safety and evaluation of large-scale foundation models have also become a major focus of Britain’s AI strategy in recent weeks. Alongside a flexible and "pro-innovation" approach to AI regulation aimed at making the UK a leader rather than a watchdog in AI, Prime Minister Rishi Sunak announced in late April that he was dedicating £100 million in funding for a task force to develop safe foundation models. He also announced in early June that the UK would host a global summit on AI regulation in late 2023, a move supported by the Biden administration.

These developments are in line with the recommendations put forward by Institut Montaigne, which has advocated for France to establish itself as the global leader in AI safety as quickly as possible. This could be achieved by positioning the country as a haven for the many international scientists wishing to work full time on AI safety, by leveraging the existing expertise in AI safety for critical systems found in its industrial ecosystem, and by using the impending European regulations to establish carefully chosen safety requirements. An ancillary benefit of this approach would be to disadvantage less reliable foreign AI systems, thereby promoting the development and use of more trustworthy domestic alternatives.

France and AI – an unresolved balancing act

The Macron administration’s recent announcements on AI suggest a laissez-faire strategy. The primary focus appears to be on bridging the technological gap rather than adopting an alternative approach centered on AI safety. The French government has earmarked €40 million to support the development of "digital commons" for generative AI – tools such as large pre-trained models, databases, and evaluation tools. The administration is also planning an international competition on "general purpose AI" (or "artificial general intelligence", depending on the translation) aimed at empowering more players to develop solutions in a sector that generally requires substantial upfront investment. This approach is undoubtedly a necessary measure to prevent excessive technological reliance on the foundation models controlled by a handful of American and Chinese companies. At this stage, however, it is largely defensive in nature. The chances of truly catching up to leading American competitors remain slim, and the risk of falling further behind will only grow if the pace of technological advancement continues to accelerate.

But France is far from being out of the game. The country can play a crucial role globally in establishing the safety and trustworthiness requirements that will apply to AI systems and the companies developing them. In concrete terms, this involves establishing voluntary standards (such as ISO standards) and creating evaluation and auditing tools to ensure these standards are upheld.

These tools could provide clarity and direction for researchers, entrepreneurs, and established industrial actors, enabling them to do what they do best and find innovative solutions in AI safety. Provided these measures are skillfully linked with the forthcoming European AI Act, they can be instrumental in supporting the successful implementation of this regulation by 2025 and clarifying the requirements that companies will need to meet.

France has already proven itself to be a trailblazer in the field of AI safety and evaluation. The country has global expertise in AI safety, with a well-established industrial sector involved in managing critical systems such as aeronautics and nuclear power. The government has invested over €100 million in a major advanced research project focused on trustworthy AI, making it one of the most significant investments in AI safety. Additionally, entities such as the French National Metrology and Testing Laboratory (LNE) are leading the way in AI evaluation.

France must now bridge its industrial and generative AI ecosystems to cultivate global leaders in the safety of the largest foundation models. In the short term, the government’s €40 million pledge to support the development of common tools for generative AI could serve as an opportunity to develop benchmarks for measuring the safety and trustworthiness of AI models. The international competition on general purpose AI, which aims to compare performance based on standardized metrics, could be redefined as an "international competition on safe general purpose AI" by using these new benchmarks to prioritize innovation in safety and trustworthiness.

While not all countries are keen to adopt binding regulation, there is unanimous agreement on the need for common safety and trustworthiness standards for AI. Corporate adoption and user trust depend on it.

Europe, China, and the United States are all converging towards similar criteria for safety and trustworthiness, which include robustness, reliability, transparency, explainability, data quality, cybersecurity, accuracy, non-discrimination, absence of bias, and correctly specified model objectives. European AI safety, therefore, is not so different from its American or Chinese counterparts (although China places greater emphasis on content control). As part of the transatlantic EU-US Trade and Technology Council (TTC), Europe and the United States have even agreed on a joint roadmap to develop AI standards and evaluation tools. However, despite Europe’s ambition to take the lead in AI governance, it is the US, by way of NIST, that is currently guiding the process. This presents a potential pitfall for Europe: a European regulatory framework paired with inadequate evaluation and implementation tools would greatly stifle innovation.

France still has time to strike the right balance between playing technological catch-up, asserting itself as a leader in the global governance race, and adopting a precautionary stance against potentially consequential risks. But what looks like an unresolvable trilemma is actually an unprecedented opportunity. By becoming a leading player in the safety of large-scale foundation models (both their development and their evaluation), France could assert Europe’s pioneering position in the AI governance race. At the same time, it would foster an ecosystem of stakeholders capable of developing trustworthy AI models that economic actors can deploy with complete confidence.

Copyright image: Lionel BONAVENTURE / AFP

This photograph taken in Toulouse, southwestern France, on July 18, 2023 shows a screen displaying the logo of Bard AI, a conversational artificial intelligence software application developed by Google, and ChatGPT.
