22/09/2023

AI & Foresight: Envisioning the Future When the Unthinkable Becomes Possible

 Tom David
Author
Policy Officer - Technological Foresight

Is France heading in the right direction to understand and address the new challenges brought by artificial intelligence (AI), or does it risk missing the mark? A review of the government's Military Programming Law for 2024-2030 leaves the answer unclear, given the lack of concrete details regarding the choices that will be made in practice. While this is a significant governance matter that extends beyond the purview of the Ministry of the Armed Forces, the Military Programming Law still provides some interesting insight into the direction France is taking.

The law makes mention of AI several times, emphasizing the need to anticipate technological advancements and take bold risks rather than trying to catch up with existing developments. It also views AI as a tool that can be used to enhance the services of the Ministry. While it is hard to disagree with these statements, they are quite broad and encompass a myriad of possibilities and choices that can lead France down many different paths.

The Military Programming Law comes during a time of complex global crises, whether climate-related, technological, or geopolitical. This is not a time for routine budget allocations or a mere revisiting of the military's prerogatives, but for accurately gauging the changing nature of the world and the scale of the threats France faces.

The Military Programming Law comes during a time of complex global crises, whether climate-related, technological, or geopolitical.

In an editorial that outlines the main areas of the law, the Minister of the Armed Forces identifies the unique circumstances of the times. According to him, the objective for France is clear: to address new threats and maintain its position among the world's leading powers.

Ultimately, the effectiveness of the choices made will be determined by the quality of foresight and the ability to comprehend the systemic nature of the risks involved. Stanford University’s 2023 AI Index Report reveals that 36% of researchers surveyed are concerned that decisions made by general-purpose AI models could lead to a nuclear disaster. About 69% of U.S. adults are concerned about the potential societal risks posed by AI.

Unfortunately, the law provides scant details when it comes to general-purpose AI. €10 billion for innovation? All right. But what kind of innovation?

According to three surveys of leading AI researchers (who have published at NeurIPS, one of the most prestigious machine learning conferences), over half of the respondents estimate that there is a non-negligible chance that AI could cause irreversible damage on a global scale (2018, 2022, 2022). And there is growing consensus among scientists that "mitigating the risk of extinction from AI should be a global priority." In this context, the decisions we make about AI could have a range of outcomes; they might be highly beneficial, insufficient, or even counterproductive.

Actions must be commensurate with ambitions and objectives. The government-sponsored bill will allocate €413 billion over the next seven years to meet the needs of the armed forces by enhancing basic capabilities, modernizing existing resources, focusing on critical technologies, and upholding moral and ethical considerations. Unfortunately, the law provides scant details when it comes to general-purpose AI. €10 billion for innovation? All right. But what kind of innovation?

The race to innovate can come at a high cost if safety is compromised
Current dynamics in AI: speed and rivalry

The first type of possible innovation is the one that aims to go further and faster than competitors. Institut Montaigne’s recent report on defense innovation underscores the importance of innovation and explains how falling behind can lead to greater technological threats in the future. The pace of AI innovation is accelerating, increasing the risk of making hurried decisions and potentially trapping organizations in a sunk-cost fallacy, akin to the historical case of the Concorde aircraft.

The leading players in AI, namely OpenAI (in partnership with Microsoft), Google DeepMind, Meta, and Anthropic, are primarily American companies, and in this regard, they compete with foreign countries, particularly China. It is out of the question for these businesses to leave this powerful tool in the hands of rivals without building a significant lead, not only for economic gain but also to retain offensive and defensive capabilities if needed.

The leading players in AI are primarily American companies, and they compete with foreign countries, particularly China.

Export restrictions on semiconductors and chip components, essential for training AI models, go in this direction. Stepping down to the company level, rivalries push firms to outdo one another by building superior models or adopting conflicting strategies to gain a larger market share. Google's conservative tech dissemination approach contrasts with OpenAI's public release of ChatGPT and Meta's open-source promotion. All three try to retain or capture a larger market share by undermining the others.

Competition even exists at the level of individual business leaders, who sometimes have a history of well-publicized personal rivalries. Elon Musk, for instance, initially invested $100 million in the development of OpenAI, only to withdraw his support in 2018 after presenting an ultimatum to Sam Altman, the current CEO: either Altman yielded control to Musk, or Musk would withdraw entirely from the initiative, along with the $1 billion in funding he had initially promised. Refusing to bow to Musk's demands, Altman managed to woo rival Microsoft to replace the funding lost to Musk's change of heart. The escalating feud between the two tech leaders was recently thrust into the limelight when Musk launched xAI, a company aiming to compete directly with OpenAI. These battles also take the form of recruitment wars, with tech teams often poached directly from the competition.

The AI industry can therefore be characterized by fierce competition at several different levels: between countries, between companies, and between individual leaders. The intensity of these rivalries is fueling even greater investment and development in the sector, which would not be so problematic if the future of society were not at stake. Everyone wants to lead the race, and the complex dynamics involving competition, cooperation, and confrontation highlight palpable tensions.

The race-like nature of innovation is neither productive nor beneficial, as it does not address safety considerations.

The "limited rationality" with which these players operate can lead to "lose-lose" outcomes, likely culminating in crises and conflicts. The current race to develop the most powerful deep learning model with the broadest capabilities seems to overlook the aspect of safety, as robustness is not currently a prerequisite for enhancing capabilities. Existing practices are even more problematic when considering that these models are "black boxes" with structural vulnerabilities that have not been addressed.

The race-like nature of innovation is neither productive nor beneficial, as it ignores the dangerous path the industry is on by overlooking important safety considerations. The rush to be first does not allow for a strategic stance to reduce tensions and might even lead to catastrophic outcomes for everyone. Specifically, in the race to be the best, entities might push too hard in a bid to outdo each other, creating technologies that are not sufficiently safe or reliable. These inadequately developed technologies could then backfire and ultimately lead to significant setbacks and losses, perhaps even triggering broader societal and economic problems. The current approach could actually make the race even more frantic and widen the resource gap between France and major players like the U.S. and China. Instead of racing forward for the sake of it, France would be wise to adopt strategies that steer developments toward sectors where it holds an advantage (e.g. safety). It would also behoove France to carefully assess alternative strategies before betting the farm and incurring heavy losses in the mad dash for innovation.

Adopting a risk-based mindset: A necessary pit-stop
Defining the transformative aspect of AI and understanding the spectrum of misuse

The pace of AI development is so rapid that it is practically impossible to fully anticipate its future implications (changes that might previously have taken decades are now occurring within months), particularly given AI's profound potential to bring about change. The unpredictable aspects of AI development must be regarded as an additional risk and approached cautiously and rationally.

The unknown can work for us just as well as against us. This makes it imperative to think in terms of a risk model where potential scenarios (especially the worst-case scenarios) are identified and mitigated. This process should consider the dual nature of technology and the associated accidental risks.

The unpredictable aspects of AI development must be regarded as a risk and approached cautiously and rationally.

AI models were initially highly specialized, designed for specific tasks such as medical imaging or online board games. Although research continues to invest in developing such models, it seems that the only significant danger posed by these so-called "narrow" AIs depends on their specific use case and application. These risks are especially pertinent to critical systems, infrastructures, or sectors (e.g. transportation, healthcare, energy) where malfunctions or failures could have dire repercussions, including severe injury or death. These issues seem to have been identified in France, and various stakeholders, including industrial entities, are actively addressing them, notably through Confiance.ai, a collaborative effort between French academia and industry.

However, over the last 5 to 10 years, models have gradually become more general and versatile, largely thanks to the emergence of transformer architectures for language processing. These architectures facilitate a more nuanced understanding of language by considering the relationships between words in a broader context rather than focusing on individual words. They use "attention mechanisms" to focus on different parts of a sentence when processing each word (or different parts of the context for non-textual data), which enables the model to account for the relationships between words. The ability to process a larger context and refine the understanding of relationships between words depends on having more computational power and access to massive amounts of data. Modern learning models can thus handle an increasingly broad and varied range of tasks. An example is the Gato model developed by DeepMind, which can perform over 600 tasks ranging from robotics and video games to mastering conversation.
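To make the mechanism concrete, below is a minimal, illustrative sketch of scaled dot-product attention in Python using NumPy. It is a toy written for this article, not production code: real transformer models add learned projection matrices, multiple attention heads, masking, and many stacked layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each token's query is compared with every token's key; the resulting
    weights decide how much of each other token's value to mix in."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)   # normalized attention weights
    return weights @ V                   # context-aware token representations

# Toy example: 4 tokens, each represented by an 8-dimensional embedding.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
contextualized = attention(tokens, tokens, tokens)  # self-attention
print(contextualized.shape)  # (4, 8): each token now reflects its context
```

Because the scores are computed for every pair of tokens, handling longer contexts quickly becomes more expensive, which is one reason the growth in context and capability described above depends on greater computational power and data.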

These technologies can be quickly repurposed for malicious activities with the potential for large-scale harmful consequences.

However, the dual-use nature of these technologies also means they can be quickly repurposed for malicious activities with the potential for large-scale harmful consequences. In one instance, researchers showed that an AI model initially designed for drug design and synthesis could be easily retrained to create over 40,000 new biochemical weapons simply by making minor tweaks to the model’s parameters. This example is alarming for several reasons. First, the underlying technology is open-source and easily accessible. Second, the training data did not initially contain any information about neurotoxic agents. And last, this is not a unique case but rather an example illustrating a broader problem with the potential misuse of AI models due to intrinsic vulnerabilities in the structure of these models.

Other publications have shown that Large Language Models (LLMs) can be used to create AI models with agent-like characteristics, where an objective is assigned to the model, and it plans and executes actions to achieve it. They act as "autonomous planners" and can carry out a range of complex tasks, such as conducting chemical experiments with lab equipment, creating polymorphic malware to design and launch large-scale cyberattacks, and detecting cybersecurity vulnerabilities. A biosecurity expert at the Massachusetts Institute of Technology recently explored the potential of LLMs like GPT-4 and Bard to assist individuals with no scientific background in deliberately inducing the next pandemic. The conclusion is that they can, which is not that surprising considering the extent to which companies have already automated many lab processes in "cloud labs" where human supervision is optional. Although these models still have limited capabilities, they pose significant security threats. Their algorithmic safeguards can be bypassed as long as several vital technical issues remain unresolved, and it is unknown whether some of these issues can be resolved at all.
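As a conceptual illustration of the "autonomous planner" pattern described above, here is a hedged sketch in Python. The function names and tool registry are placeholders invented for this example, not any specific vendor API; real agent frameworks add memory, error handling, and safety filters on top of a loop like this.

```python
from typing import Callable, Dict

def call_llm(prompt: str) -> str:
    # Stand-in for a call to a language model; returns a canned decision so
    # the loop can run end to end without any external API (hypothetical).
    return "FINISH: objective handled (stubbed response)"

# Hypothetical tool registry the planner can invoke; real agents wire these
# to web search, code execution, lab automation, and so on.
TOOLS: Dict[str, Callable[[str], str]] = {
    "web_search": lambda query: "...search results...",
    "run_code": lambda source: "...execution output...",
}

def agent_loop(objective: str, max_steps: int = 5) -> str:
    """Assign an objective, let the model plan an action, execute it, and
    feed the observation back until the model declares it is finished."""
    history = f"Objective: {objective}\n"
    for _ in range(max_steps):
        decision = call_llm(history + "Next action as 'tool: argument', or FINISH:")
        if decision.startswith("FINISH"):
            return decision
        tool_name, _, argument = decision.partition(":")
        result = TOOLS.get(tool_name.strip(), lambda _arg: "unknown tool")(argument)
        history += f"{decision}\nResult: {result}\n"  # feed the observation back
    return history

print(agent_loop("Summarize today's AI policy news"))
```

The point of the sketch is how little scaffolding is needed to turn a language model into an actor that plans and executes steps toward an objective, which is precisely what makes both the capabilities and the risks scale so quickly.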


Emerald Cloud Lab’s automated laboratory in South San Francisco, California. Credit: Emerald Cloud Lab

When "democratization" does not necessarily rhyme with "general interest"

These issues are presented for illustration but represent just a fraction of a broader set of well-known problems, such as bias in AI models, the spread of misinformation, and potential dangers tied to algorithmic recommendation systems. Even more concerning is the potential for large-scale harm by making powerful AI technologies accessible to more and more people. According to Red Team Defense, an initiative of France's Ministry of the Armed Forces and Defense Innovation Agency that explores potential future conflicts arising between 2030 and 2060, part of the threat posed by widespread access to technology is that individuals are now capable of conducting advanced biology experiments from the comfort of their own home.

Unfortunately, these risks are not given the serious consideration they deserve because they have not been viewed through the lens of a broader and more systemic strategic framework. The more capable and versatile AI becomes, the more it will permeate all facets of society and benefit mankind. The widespread adoption of AI technologies, driven by their speed, efficiency, and productivity benefits, could sideline entities that fail to keep up with the advancements, leading to societal dependencies on automated systems.

Once an organization or society becomes heavily reliant on efficient automated systems, it can be quite challenging to go back to older methods and technologies. However, without well-established safeguards that ensure organizational and structural resilience, a society that succumbs to the pressure of adopting AI technologies to remain competitive can become significantly vulnerable to external disruptions. General-purpose AI amplifies existing risks, especially as it becomes more accessible and the potential for misuse and accidents increases.

AI amplifies existing risks, especially as it becomes more accessible.

The discrepancies in regulations concerning dangerous materials are startling. Consider laboratories that study and handle the most lethal pathogens and biological agents in the world. Access to these facilities is tightly regulated and controlled to prevent misuse and accidents. Likewise, stringent regulations exist for accessing enriched uranium and the tools needed to build nuclear weapons. All of this, of course, is to ensure public safety. 

It is neither complicated nor costly to acquire an AI model and use it for harmful purposes.

In stark contrast, companies that develop cutting-edge AI models face hardly any regulation. It is neither complicated nor costly to acquire an AI model and use it for harmful purposes: all you need are the lines of code, the model's parameters, and adequate computing power to run the program (stated differently, a computer, an internet connection, and a handful of reasonably competent people).
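To illustrate how low the bar is, here is a minimal, benign sketch using the open-source Hugging Face transformers library. GPT-2 is chosen only because it is small and its weights are public; the same few lines work for far more capable open-weight models. This is a generic text-generation example, not a recipe for any particular use.

```python
# Minimal sketch: downloading published model weights and generating text.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # fetches the public weights
output = generator("Artificial intelligence will", max_new_tokens=20)
print(output[0]["generated_text"])
```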

The openness of technology can be a good thing, provided, however, that these models are intrinsically safe and robust. As things stand, it is not inconceivable that, as AI models become even more advanced, they could easily fall into the hands of hostile nations, terrorist groups, malicious organizations, or lone wolves with bad intentions.

Factoring in accidental risk

There is no doubt that the misuse of AI is a real and pressing concern. But the accidental risks associated with AI technology must also be considered. Doing so requires abandoning the mindset according to which the absence of accidents indicates our ability to manage, control, and regulate technology proficiently. This, in turn, requires extra effort given that individuals naturally tend to overlook the impact of external factors beyond their control or comprehension. AI models are essentially "black boxes" given that we don’t understand how they function. Unlike nuclear bombs and pathogens with physical constraints or predictable patterns that limit their potential impact, the upper bound of potential damage from AI misuse or malfunction is unknown.

Currently, no one can guarantee that AI models will always operate and interact with users in ways that align with what is considered desirable, despite the best intentions of developers. In principle, there are several ways for a model to fulfill a request, no matter how simple it may be. AI models, equipped with ever-growing capabilities, can employ various means to achieve a given objective, raising concerns about unforeseen accidents if a model pursues its goals through means that were not anticipated or that would be considered inappropriate.

The upper bound of potential damage from AI misuse or malfunction is unknown.

Defining a more holistic approach to technological innovation

The risk of AI misuse is real. Scenarios involving accidents or unintentional harm are not hard to imagine. We have no idea what the upper bound for maximum damage might be. Joining the race certainly appears to be a losing strategy considering the magnitude of the threats. That said, given the unique concerns emerging in AI, what other forms of innovation can be explored?

Technological innovation in defense should consider both the potential for malicious use and the accidental risks associated with the ease of access to these technologies. However, these considerations should be part of a more comprehensive approach. Advancements in AI bring up significant security and defense challenges (that the Military Programming Law is starting to acknowledge), and a more expansive and collaborative strategy that spans different sectors and government ministries is needed to address them. 

Technological innovation in defense should consider both the potential for malicious use and the accidental risks. 

Having robust and safe AI models will end up being a decisive factor. Achieving this may require moving away from current paradigms and going down a different path. Similar reflections are underway in the United Kingdom by way of ARIA, a newly formed research agency inspired by the U.S. Defense Advanced Research Projects Agency (DARPA). Rishi Sunak considers this an important part of making the UK a central player in global regulation. ARIA is expected to launch operations shortly, aiming to create a genuine paradigm shift.

While this approach seems to be heading in the right direction, it is still too narrow in scope. The threats emerging from the interplay of technology and society call for a global and systemic approach that combines multiple areas of expertise. Has the time come to innovate in how we think about foresight, so essential for governance, especially when certainty is diminishing and complexity is increasing? Should we innovate more broadly and deeply, even to the point of rethinking how technology is governed? Most likely. After an ontological reflection to determine what we aspire to be, it will be society’s responsibility to actively shape its future and make the necessary decisions, including potential sacrifices and concessions that might be necessary to achieve this vision.

Copyright Image: Fabrice COFFRINI / AFP
