
AI and risk management: keeping a human element

Oct 30, 2023 | Cyril Amblard-Ladurantie | Leadership

The rise of artificial intelligence and the arrival of ChatGPT raise many ethical and practical questions, from data confidentiality and intellectual property to cognitive bias. In the complex world of risk management, one constant remains: no matter how sophisticated AI becomes, human oversight is still necessary.

Risks and mistrust of AI

Since their headline-grabbing debut, ChatGPT and other generative AI tools such as Bard and Bing AI have sparked sharply divergent opinions. According to a survey conducted by BlackBerry, 75% of companies plan to restrict their use in a professional context. Meanwhile, Gartner reports that generative AI solutions rank among the top concerns for IT security leaders.

Yet beyond their success with the general public, many companies quickly adopted these generative AIs for a variety of tasks: drafting job postings, creating content, predicting trends, and so on. Concerns soon emerged, however, mainly around risks to confidentiality (disclosure of information, reuse of data by the AI), intellectual property (who owns generated content), cognitive biases, and the reliability of responses, which in some cases have proven completely inaccurate.

Despite this, AI is becoming increasingly prominent within businesses, embedding itself in critical processes such as customer relationship management, finance, marketing, and recruitment. In this context, even though its potential for value creation is immense, its risks must be carefully managed so as not to compromise the organization's integrity.

AI: traditional governance for innovative solutions 

AI risk management follows a logic comparable to that applied to other algorithms already used in business. The first step is to inventory all AI integrated into processes, including tools used informally by employees, known as "shadow AI" (generative AI that is freely accessible and usable by anyone). A risk-based approach should then be applied throughout the AI lifecycle: development, validation, stress testing, release into production, monitoring, and finally decommissioning.
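To make that inventory concrete, here is a minimal sketch of what a single entry in such an AI register might look like, expressed as a Python data structure. The field names, lifecycle stages, and the shadow_ai flag are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative lifecycle stages, mirroring the article's list:
# development -> validation -> stress testing -> production -> monitoring -> decommissioned
class LifecycleStage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    STRESS_TESTING = "stress_testing"
    PRODUCTION = "production"
    MONITORING = "monitoring"
    DECOMMISSIONED = "decommissioned"

@dataclass
class AIRegisterEntry:
    """One row of a hypothetical AI inventory (schema is an assumption)."""
    name: str                      # e.g. "ChatGPT", "in-house churn model"
    owner: str                     # accountable team or person
    process: str                   # business process the AI supports
    criticality: str               # e.g. "essential", "core business", "critical"
    stage: LifecycleStage          # where the system sits in its lifecycle
    shadow_ai: bool = False        # True if adopted informally by employees
    identified_risks: list[str] = field(default_factory=list)

# Example: a generative AI tool discovered in informal use ("shadow AI")
entry = AIRegisterEntry(
    name="ChatGPT (free web version)",
    owner="unassigned",
    process="marketing content drafting",
    criticality="core business",
    stage=LifecycleStage.PRODUCTION,
    shadow_ai=True,
    identified_risks=["data confidentiality", "intellectual property"],
)
```

Recording shadow AI explicitly, as above, is what allows the risk-based approach to cover informal usage rather than only officially sanctioned systems.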

Whether the AI is acquired externally or developed in-house, and depending on how deeply it is involved in the company's processes (essential, core business, critical, etc.), an in-depth risk analysis must be conducted upstream. This analysis must be more rigorous than the one applied to traditional technology solutions: it must cover the identification and, where possible, correction of cognitive biases, data protection issues, and, in particular, intellectual property.

The company can then tailor the security level of each AI solution, prioritizing monitoring mechanisms and specific mitigation procedures for incidents, especially where artificial intelligence is deployed in processes critical to the organization.
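As a sketch of that prioritization logic, the snippet below maps a process's criticality tier to a minimum set of controls. The tier names and control lists are assumptions chosen for illustration, not a regulatory standard.

```python
# Hypothetical mapping from process criticality to minimum controls.
# Tier names and control lists are illustrative assumptions.
CONTROLS_BY_CRITICALITY = {
    "critical": ["real-time monitoring", "human sign-off on outputs",
                 "documented incident-mitigation procedure", "quarterly audit"],
    "core business": ["periodic output sampling", "incident-mitigation procedure"],
    "essential": ["usage logging", "annual review"],
}

def required_controls(criticality: str) -> list[str]:
    """Return the minimum controls for a given criticality tier."""
    # Unknown tiers default to the strictest set (fail safe).
    return CONTROLS_BY_CRITICALITY.get(criticality,
                                       CONTROLS_BY_CRITICALITY["critical"])

print(required_controls("core business"))
# ['periodic output sampling', 'incident-mitigation procedure']
```

Note the fail-safe default: anything not yet classified gets the strictest controls until a proper risk analysis has placed it in a tier.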

AI and risk management: the need for human supervision

Although managing AI may seem comparable to managing other technology solutions, its presence in businesses is not neutral, primarily because of its creative potential and easy accessibility (generative AI in particular). This calls for proper control measures both upstream and downstream of its use.

A good example is user training, which raises awareness of good practices and of how to phrase prompts to avoid biased answers. Throughout an AI system's use, it is also crucial to run regular tests to catch any drift in its learning, understanding, or responses. And once an AI system is decommissioned, it is essential to ensure its data is cleaned up or archived (for regulatory audits, for example).
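One way to implement such regular tests is to compare the distribution of a model's recent outputs against a reference window, for example with a population stability index (PSI). The sketch below is a minimal illustration assuming numeric output scores are logged; the helper function is hypothetical, and the 0.2 alert threshold is a common rule of thumb rather than a universal standard.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               recent: np.ndarray,
                               bins: int = 10) -> float:
    """Compare two score distributions; a higher PSI means more drift."""
    # Bin edges come from the reference window so both samples share bins.
    edges = np.histogram_bin_edges(reference, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range recent values
    ref_counts, _ = np.histogram(reference, bins=edges)
    rec_counts, _ = np.histogram(recent, bins=edges)
    # Convert counts to proportions; epsilon avoids division by zero / log(0).
    eps = 1e-6
    ref_pct = ref_counts / ref_counts.sum() + eps
    rec_pct = rec_counts / rec_counts.sum() + eps
    return float(np.sum((rec_pct - ref_pct) * np.log(rec_pct / ref_pct)))

# Example: reference scores vs. a recent window whose mean has shifted.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)   # scores logged at validation time
recent = rng.normal(0.6, 1.0, 5_000)      # scores observed in production

psi = population_stability_index(reference, recent)
if psi > 0.2:  # 0.2 is a common rule-of-thumb alert level (assumption)
    print(f"PSI={psi:.2f}: drift suspected, escalate to human review")
```

For generative AI, where outputs are text rather than scores, the analogous test is to rerun a fixed suite of prompts at regular intervals and have a human review how the answers have changed, which keeps people in the loop exactly as this section argues.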

Within these control procedures, the human factor remains at the heart of any AI project. This also raises the question of skill availability (in-house or external) and the risk posed by the absence of those skills within the organization.

It is also essential to recognize that AI will not solve all of an organization's problems. On the contrary, the decision to integrate AI into specific processes can expose inadequacies in the information system: data silos between entities, the lack of a sufficient data history, or missing skills (data scientists, for example).

In other words, deploying AI cannot be improvised and, above all, is never risk-free without constant and appropriate human supervision, which, for us humans, is rather reassuring.