

AI, and with it our society, is at a historic turning point. We are now developing "general purpose" AI systems like ChatGPT, capable of performing a wide range of tasks. These systems could quickly become a decisive competitive advantage for companies and countries alike.

However, these systems represent a major and growing security challenge: not only can they be used by malicious actors, but the statistical nature of today's AI systems also poses unprecedented safety risks, which now rank among the most important technological barriers in the field.

This challenge also represents a unique opportunity for France to position itself as a leader in safe and trustworthy AI, by attracting some of the best AI talent, for whom safety is becoming a major concern that is not yet properly addressed by their current employers. France has world-class researchers in mathematics and AI, as well as cutting-edge expertise in systems and software engineering for safety. Thanks to the powerful computers of the French National Centre for Scientific Research (CNRS), it is also one of the only European countries able to develop large, general purpose AI models.

To seize this opportunity, France must give itself the means to do so, with a disruptive innovation project and a fundamental research cluster dedicated to developing safe and trustworthy general purpose AI systems. It must also ensure that the cutting-edge but potentially dangerous AI systems currently developed by American and Chinese actors are subject to future European regulations, which are likely to define the international requirements for AI safety and trustworthiness.

Policy Paper (80 pages)
Summary (3 pages)