Digital twins are becoming more common, but in some cases things are going badly wrong. Here's how to fix them now
A digital twin is a virtual copy of a physical object, such as a person, a device, a piece of production equipment, or even an airplane or automobile. The idea is to run real-time simulations of a material or human asset to determine when problems might occur and fix them proactively, before they actually arise.
Although the roles of digital twins vary greatly, the connection is established using real-time data from sensors that keep the digital twin living in the virtual world synchronized with the physical twin we can touch. This synced simulation leverages IoT (Internet of Things), AI (artificial intelligence), machine learning, and analytics to model the twin's activity as the physical counterpart changes.
There are all kinds of use cases for digital twins; the most common is a digital twin representing a machine, such as factory equipment. The simulations can drive proactive maintenance and, if done properly, improve both the productivity and the useful life of the machine.
The problem is that most digital twins live in public clouds, for the obvious reasons that they are much cheaper to run there and have access to all the storage and processing of the cloud, as well as specialized services such as AI and analytics. Moreover, the cloud providers offer purpose-built services for creating and running twins.
The ease of building, delivering, and deploying twins has led to a number of problems in which digital twins become evil twins and do more harm than good. Some examples I’ve seen include:
In manufacturing, the twin overestimates the maintenance needed to fix problems before they become real problems. Companies end up fixing things identified in twin simulations that don't actually need fixing; for example, replacing the hydraulic fluid in a factory robot three times more often than necessary. Or worse, the twin proposes configuration changes that lead to overheating and fire. That last one really happened.
In the transportation industry, a digital twin can shut down a jet engine because of what the twin simulates as a fire, which turns out to be a faulty sensor.
In the healthcare world, a patient is flagged as having risk factors that could lead to a stroke, but the alert turns out to be a problem with the predictive analytics model.
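The jet-engine and stroke examples share a failure mode: a critical action triggered by one bad signal. A minimal sketch of a safeguard, under the assumption of redundant sensors, is to require that a majority of them agree before the twin even recommends a critical action. The threshold and function name here are hypothetical.

```python
CRITICAL_TEMP_C = 300.0  # hypothetical fire-detection threshold

def fire_suspected(sensor_temps_c: list[float]) -> bool:
    """Recommend the fire response only if a majority of redundant
    sensors exceed the threshold, so one faulty sensor is outvoted."""
    votes = sum(1 for t in sensor_temps_c if t > CRITICAL_TEMP_C)
    return votes > len(sensor_temps_c) // 2

# One faulty sensor spiking to 900 C is outvoted by two healthy ones:
print(fire_suspected([900.0, 120.0, 118.0]))  # -> False
# Two independent sensors agreeing is a credible alarm:
print(fire_suspected([450.0, 430.0, 118.0]))  # -> True
```

Even with a quorum check, the safer pattern for the cases above is for the twin to recommend an action to a human operator rather than actuate the physical asset directly.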
My view is that attaching simulations to actual devices, machines, and even humans leaves far more room for mistakes. Most of these can be traced back to first-time twin builders who did not discover their faults until immediately after deployment. The problem is that by then you can crash a plane, scare a patient, or set a factory robot on fire.
With the cloud making digital twins cheaper and faster to build, I expect problems like this to increase. Maybe the problems won't be that bad, but they can definitely be avoided.