TL;DR
Manufacturers have been facing continual pressure to improve their technology base, reduce costs, and improve quality since the Industrial Revolution. Manufacturers are used to change, but not every manufacturer can or will embrace it at the same rate. And no manufacturer jumps straight to being an expert at the new thing they need to adopt. The same goes for Artificial Intelligence (AI) as an emerging change in manufacturing.
In my keynote, The Historian and AI, aimed at manufacturers in North Wales, I used the competency model that all manufacturers and engineers will understand -- the individual journey from apprentice to journeyman to master -- to describe how manufacturers will grow their competency in AI.
AI in manufacturing can be split into two core areas: robotics and processes.
Robotics
Robotics is a fascinating area to me but not my specialism. The vital work going on in robotics is making machines adapt more readily to tasks humans do with ease, like picking up different objects with the right amount of force.
For all but the biggest manufacturers, the development of bespoke robotics solutions might be infeasible. New machinery like robots is a big investment and can't be bought frequently.
Processes
The other area of AI, processes, can often be addressed using the existing data coming off the shop floor, or with a more limited investment to retrofit some sensors into the environment.
When talking about processes, I'm referring to things like optimising flow through the manufacturing process, identifying quality issues, and minimising downtime for machinery. These are areas where existing data consolidation options can be used to support off-the-shelf AI solutions or the specialist development of bespoke solutions. A single data scientist or AI engineer is going to be less costly than most investments a manufacturer may face.
A historian is an appliance designed to aggregate data coming from a manufacturer's machinery to support the analysis and optimisation of processes.
Manufacturers can use sensor data consolidated by devices like historians to start improving processes with a growing AI competency.
The AI competency
The AI competency for a manufacturer starts at the apprentice level, where the basics are learned and people learn exactly how much they don't yet know! It's a formative stage where foundational knowledge is built and trust is established.
The next level up is journeyman status. At this point, you're expected to be fairly competent and you're trusted to make suggestions.
The final level, as any master will tell you, isn't a pinnacle but a continuing growth of knowledge. Still, it involves an intimate knowledge of what's happening and why. Master-level engineers are trusted to make decisions with limited oversight, and errors are few and far between.
These levels of competency add more responsibility, more trust, and more scope as time goes on. Just as you shouldn't trust a person brand new to the field to be a master, when you start introducing AI you have to build up skills and establish trust.
The apprentice
As with all apprenticeships, knowledge acquired at this stage often feels like a struggle and can be painful at first. Apprentices are given the simpler tasks and aren't expected to be proactive. They're closely monitored and aren't tasked with anything too critical.
In the manufacturing world, the apprentice might be expected to monitor a few critical systems and flag any unusual changes to their supervisor. This type of task is also a perfect start for your first AI solutions.
You can use anomaly detection to monitor some key systems and get alerts if something appears off.
At its simplest, this could be generating alerts whenever a value exceeds a threshold. In the long run though, this can end up as a relatively sophisticated process that takes into account patterns of work over recent weeks, so that any issues are contextualised by what is expected at a given point in time.
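As a minimal sketch of what that can look like in practice, the snippet below flags readings that sit far from the recent rolling norm. It assumes your sensor values arrive as a pandas Series indexed by timestamp; the readings, window, and threshold are purely illustrative.

```python
import pandas as pd

def flag_anomalies(readings: pd.Series, window: str = "7D", threshold: float = 3.0) -> pd.Series:
    """Flag readings that sit unusually far from the recent norm."""
    # Describe "normal" using only past readings, so a spike can't hide itself.
    history = readings.shift(1).rolling(window)
    z_scores = (readings - history.mean()) / history.std()
    return z_scores.abs() > threshold

# Illustrative hourly temperature readings, e.g. exported from a historian.
readings = pd.Series(
    [71.2, 70.8, 71.5, 95.3],
    index=pd.to_datetime([
        "2024-05-01 08:00", "2024-05-01 09:00",
        "2024-05-01 10:00", "2024-05-01 11:00",
    ]),
)
alerts = flag_anomalies(readings, window="3h", threshold=2.0)
print(readings[alerts])  # the 95.3 spike is the one worth raising to a supervisor
```

Tuning that threshold is exactly the "too nervous versus too relaxed" balancing act described next.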
These types of systems, like apprentices, risk being too nervous (generating more alerts than they need to) or too relaxed (generating too few), but it is a learning process where the right level of attention and signal is the goal.
As an anomaly detection process usually works on a single sensor, you end up striking a balance: you can task lots of "apprentices" (your anomaly detection routines) with watching your shop floor, each potentially generating lots of alerts, which means you can't spend much time improving each one; or you can employ just a few and spend more time improving each individually.
Working on anomaly detection processes allows you to develop your data processing capability so that you can raise alerts in near real-time. It also starts establishing comfort with computers prompting action, a circumstance that might be new and uncomfortable for your employees.
The journeyman
After the apprentice hasn't blown anything up (or nothing too serious, anyway) for a while, they've usually established some trust and can be tasked with more difficult things. At the journeyman level, they should be expected to understand at a high level what factors drive issues, so that they can start anticipating when things might go wrong. This is a transition from being reactive to being proactive, but they're not necessarily given the responsibility and trust to make big decisions yet.
In the AI world, this is equivalent to building and using machine learning methods that involve known variables and are used to explain the drivers of behaviour. Techniques like regression and decision trees reign in this area: you take the machines and sensors that your experts tell you are most relevant for determining something like imminent breakdowns, quality reduction, or processing speed. You then match these sensor values against the outcome you're trying to understand and construct a model that describes how the sensor readings impact that outcome.
This model can be used to understand the situation, enabling manual controls or rule-based processes to be put in place. It could also be deployed to actively monitor for a high chance of a breakdown, for instance, and generate an alert.
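As a rough sketch of that workflow (the file name, column names, and 24-hour breakdown label are illustrative stand-ins for whatever your historian actually exports), a shallow decision tree built with scikit-learn is often enough to surface the drivers:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative: one row per machine per shift, with sensor summaries and whether
# a breakdown followed within 24 hours. All names here are placeholders.
data = pd.read_csv("historian_export.csv")
features = ["vibration_rms", "spindle_temp_c", "cycle_time_s"]
X, y = data[features], data["breakdown_within_24h"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A shallow tree keeps the model explainable: the goal at this stage is
# understanding drivers, not accuracy at all costs.
model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)

# Which sensors does the model lean on most?
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Deployed, the same model can raise an alert when the estimated
# probability of a breakdown gets uncomfortably high.
latest = X_test.iloc[[-1]]
if model.predict_proba(latest)[0, 1] > 0.7:
    print("High breakdown risk: flag for maintenance review")
```

You'd want to validate a model like this against held-out history and walk the resulting tree through with your process experts before trusting its alerts.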
These types of models are typically harder to build and deploy than anomaly detection processes, because they require historic data from multiple sensors alongside whatever it is you're trying to predict or understand. It is also harder to run many of them at once, because their output is intended to be given to a human to think about and reflect upon.
Developing these types of models validates your ability to work with historic data and uncover insights. It starts establishing the next level of trust among your employees that "the computer" is able to arrive at sensible conclusions. A significant return on investment can also start to be seen from preventing issues or optimising processes using the insight generated.
The master
The final level in the competency path is master status. A master is trusted to understand the breadth of the field and to have deep knowledge that lets them intuit issues before they arise. They are trusted to make decisions mostly autonomously, with limited oversight. They still learn on the job, but it's about refining and honing their knowledge, not increasing its breadth.
In the context of AI, this is the use of deep learning and reinforcement learning to construct models that integrate into your manufacturing environment. These models might actually be used to automatically generate tasks and work orders, or even control the machines directly.
With deep learning, we're able to take a huge amount of data from a multitude of sensors and build a model that doesn't just use the few rules someone said were important, but actually integrates information from the whole suite of sensors to make predictions.
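Purely as an illustration (the sensor count, network shape, and random placeholder data are assumptions for the sake of the example, not a recommendation), a small neural network over a wide sensor feature vector might look like this:

```python
import numpy as np
from tensorflow import keras

# Illustrative: 200 sensors summarised into one feature vector per time step.
n_sensors = 200
model = keras.Sequential([
    keras.layers.Input(shape=(n_sensors,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # e.g. probability of a quality failure
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for consolidated historian exports.
X = np.random.rand(1000, n_sensors)
y = np.random.randint(0, 2, size=1000)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```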
With reinforcement learning, you build a model with a journeyman-level technique or with deep learning. You then optimise its predictions on the fly using real-time signals about how good it was at predicting things.
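In its simplest form this on-the-fly updating is online learning: the model nudges its parameters every time a real outcome arrives. Below is a minimal sketch using scikit-learn's partial_fit; the cycle-time target and function name are illustrative, and a true reinforcement learning setup would add reward shaping and exploration on top.

```python
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import SGDRegressor

# A simple model whose parameters are nudged after every observation.
model = SGDRegressor(learning_rate="constant", eta0=0.01)

def observe(sensor_snapshot: np.ndarray, actual_cycle_time: float) -> float:
    """Predict the outcome, then learn from how the line actually behaved."""
    X = sensor_snapshot.reshape(1, -1)
    try:
        prediction = float(model.predict(X)[0])
    except NotFittedError:
        prediction = float("nan")  # nothing learned yet on the very first call
    # The real-time signal: the outcome we actually observed on the shop floor.
    model.partial_fit(X, [actual_cycle_time])
    return prediction
```

In practice, you'd wrap this in the same alerting and human-review loop as the earlier stages, so a model that drifts badly gets caught quickly.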
These techniques are much more complicated to build, but they can provide stronger coverage of the situation and be more accurate. Ideally, there should always be a human in the loop to monitor what decisions are being made, because models, like humans, are fallible and can make mistakes.
Attaining master status involves the curation of trusted data, trusted models, and strong real-time processing capabilities. Employees have worked alongside the increasingly sophisticated models and have seen the benefits to themselves and to the company from their use. They trust the solutions not just to give them insight, but also to start assisting them in the execution of their daily tasks.
Conclusion
Becoming a Master of AI in the manufacturing space is an investment in people and technology. It changes how people work, and trust must be established to support that. It will likely take 3-5 years to become a Master of AI; it isn't a flash-in-the-pan initiative, and it will require strong buy-in from the management team. As with all things, the best time to start was yesterday, but the next best time is today!
If you'd like to chat more about how AI can fit into your manufacturing business, you can book a chat with me using my booking link.