“We have to take the unintended consequences of any new technology along with all the benefits and think about them simultaneously as opposed to waiting for the unintended consequences to show up and then address them.” Those are the words of Microsoft CEO Satya Nadella when he was asked about the implications of AI during the 2024 World Economic Forum in Davos.
Fast forward another year, and the connected, non-linear approach he describes is more relevant than ever – including in the manufacturing industry. Indeed, as AI’s capabilities expand, many firms are focusing not only on its groundbreaking potential in the design studio or on the shopfloor, but also on the fresh wave of risks it is generating across their value chain as a whole.
To be clear, we’re not talking about the big, societal, computers-taking-over-the-world type threats here. Instead, industry leaders are beginning to focus on more immediate, business-focused issues, such as: How does generative AI alter our operations, obligations, and exposure? And how resilient are our systems and processes in the face of disruption or stress?
Evolution of the revolution
These are exactly the right questions to ask – and not for the first time in history either. From the steam engine to the Ford Model T to the modern computer, technology has long been prompting organizations across sectors to reshape the way they operate.
The internet is an especially pertinent case in point. By completely transforming how information was created, shared and utilized, it brought about both a tech revolution and a business revolution at the same time – all while creating a brand-new set of risks. Compromising a company’s data and intellectual property suddenly no longer required any kind of physical presence on site. AI hasn’t changed that; it’s just made it even easier.
As Jaymin Kim, Managing Director, Emerging Technologies at Marsh, points out, “From a business perspective, generative AI may be just as revolutionary as the internet in changing business models. But from a risk perspective, if the internet was a revolution, generative AI is an evolution. One that’s intensified and accelerated the risks that already existed – at least so far. But what is different is the way it has entered the market.”
Here, Kim is referring to the fact that unlike most technologies, which tend to start as military- or commercial-grade tools before becoming consumer-facing, generative AI – including large language models like ChatGPT – has done it the other way round. This has allowed it to bypass traditional enterprise-level safeguards and enter the workplace before risk controls could catch up.
Rise of the risk managers
Of course, the sheer speed and scale of AI’s arrival had a role to play as well, leading many manufacturers to simply ban its use outright. Yet prohibition never was, and never will be, enforceable in the long term. More importantly, it runs counter to the need to manage AI’s business value and risk simultaneously from the outset. After all, if you’re not using the technology today, how can you prepare to police it properly tomorrow?
Still, the opportunity to rebalance is not lost completely – far from it. It simply requires manufacturers to be proactive about governance and, more specifically, to broaden the lens on who owns and manages AI risk. In a connected enterprise, it’s no longer enough for CIOs or CISOs to act alone. Instead, effective AI governance requires multifunctional, centralized collaboration.
Boards and C-suites must therefore take an active role, alongside compliance leaders, HR, business unit heads and, crucially, risk managers. In fact, of all the roles within the business, it is risk managers who are best placed to bring these diverse perspectives together and drive a coordinated, enterprise-wide approach.
Scenario-driven risk identification
Manufacturers also need firm foundations, including clear implementation policies that evolve alongside the technology itself and training that equips employees to use AI safely, responsibly and effectively. But perhaps the most critical step is adopting a new method of anticipating and mitigating risk that Kim calls scenario-driven risk identification.
“Scenario-driven risk identification with generative AI involves running ‘what could go wrong’ simulations based on how an organization is using various generative AI tools. These scenarios should then inform proactive resilience planning that includes business continuity and incident response planning. Building resilience against generative AI risks is not just about ‘putting a human in the loop,’” she explains. “That’s a vast oversimplification. It’s about understanding what the various generative AI loops are, where the risks reside, and who or what has the right capabilities to manage them over what time frame.”
Consider, for example, a fast-food company piloting generative AI to take drive-through orders. Once the system reaches 80% usage, it becomes a point of operational dependency. By simulating failure modes such as order misinterpretation or system downtime before that point is reached, the company can map out continuity plans and understand the revenue impact without risking real disruption once the technology is in play.
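To make that concrete, here is a minimal sketch in Python of what such a ‘what could go wrong’ simulation might look like. The order volumes, failure rates and costs below are hypothetical placeholders rather than figures from the article; a real exercise would draw them from the company’s own operational data.

```python
import random
import statistics

# Hypothetical operating assumptions (placeholders, not figures from the article)
DAILY_ORDERS = 2_000          # drive-through orders per day
AI_SHARE = 0.80               # share of orders handled by the generative AI system
AVG_ORDER_VALUE = 9.50        # average ticket, in dollars

# Hypothetical failure modes and assumed rates
P_MISINTERPRET = 0.02         # chance an AI-handled order is taken incorrectly
REMAKE_COST = 4.00            # cost of remaking or refunding a botched order
P_OUTAGE_PER_DAY = 0.01       # chance the AI system goes down on a given day
OUTAGE_LOST_SHARE = 0.30      # share of AI-handled orders lost during an outage


def simulate_day() -> float:
    """Return the simulated revenue impact, in dollars, of one operating day."""
    ai_orders = int(DAILY_ORDERS * AI_SHARE)
    loss = 0.0

    # Failure mode 1: order misinterpretation on individual orders
    botched = sum(1 for _ in range(ai_orders) if random.random() < P_MISINTERPRET)
    loss += botched * REMAKE_COST

    # Failure mode 2: system downtime with no fallback process in place
    if random.random() < P_OUTAGE_PER_DAY:
        loss += ai_orders * OUTAGE_LOST_SHARE * AVG_ORDER_VALUE

    return loss


def run_simulation(days: int = 10_000) -> None:
    """Run many simulated days and summarise typical and tail-risk impact."""
    losses = sorted(simulate_day() for _ in range(days))
    print(f"Mean daily impact:   ${statistics.mean(losses):,.0f}")
    print(f"95th-percentile day: ${losses[int(days * 0.95)]:,.0f}")


if __name__ == "__main__":
    run_simulation()
```

Even a toy model like this makes the dependency visible: once typical and tail-risk days are quantified, continuity planning – staffed fallback lanes, downtime thresholds, escalation triggers – has concrete numbers to work against.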
The same thinking applies to manufacturing. When AI is integrated into production planning, supply chain, or quality control, it becomes essential to business outcomes. Leaders should therefore simulate what might happen if that system fails or produces inconsistent outputs – not after roll-out as a tactical fix but beforehand as a strategic priority.
Muscle memory
The good news is that manufacturers already have the muscle memory to make this happen.
Failure Modes and Effects Analysis (FMEA) has long provided a systematic approach to identifying how systems can break down and what the consequences of that might be.
In other words, the framework for scenario-driven risk identification exists; all manufacturers have to do is move it further upstream in the innovation and development process while bringing together the right cross-functional teams to implement it.
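As an illustrative sketch of how that existing discipline might be pointed at a generative AI deployment, the snippet below scores a handful of hypothetical failure modes the classic FMEA way, ranking each by its Risk Priority Number (severity × occurrence × detection). The failure modes and ratings are assumptions made up for illustration, not examples drawn from the article.

```python
from dataclasses import dataclass


@dataclass
class FailureMode:
    """One row of a classic FMEA worksheet, applied to a generative AI system."""
    name: str
    severity: int    # 1 (negligible impact) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (almost certain)
    detection: int   # 1 (easily caught) .. 10 (likely to slip through)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the standard FMEA ranking metric
        return self.severity * self.occurrence * self.detection


# Hypothetical failure modes for AI-assisted production planning (illustrative only)
modes = [
    FailureMode("Hallucinated demand forecast feeds the production schedule", 8, 4, 7),
    FailureMode("Model outage stalls the planning run", 6, 3, 2),
    FailureMode("Stale training data skews quality-control thresholds", 7, 5, 6),
    FailureMode("Prompt injection via uploaded supplier documents", 9, 2, 8),
]

# Rank the highest-risk modes first so mitigation effort goes where it matters most
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:>3}  {mode.name}")
```

The point is not the arithmetic but the habit: the same worksheet that has long ranked mechanical failure modes can rank generative AI ones, and it can be filled in before roll-out rather than after.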
How long this takes will naturally depend on each manufacturer’s individual operating environment. But what’s true for everyone is that managing the business challenges of AI is not about choosing between caution and innovation or trying to course-correct after the fact.
Rather, it’s about taking a dual approach from the outset – one that views AI opportunity and risk as two sides of the same coin. The technology that’s driving this transformation may be cutting-edge, but for manufacturers, the new-old mindset required to deliver it doesn’t have to be built from scratch. It just needs to be redirected.