Having a digital model synchronised with the real world has the potential to transform the way we live and work. We need to understand the advantages and limitations of the technology to bring the concept further into our everyday reality, where cyber-physical systems will become ever more commonplace.
NAFEMS is exploring the potentially transformative effect that digital twin technology will have on analysis and simulation. We are providing the steady guiding hand that is necessary to ensure that standards and best practice are maintained.
The digital twin is a term that is increasingly bounced around; while the concept may not be new to the analysis community, the term is certainly not yet widely used in the public domain. At its heart is the idea of a digital representation of a physical product or system, linked to the physical unit via the Internet of Things and using real, rather than nominal, values for dimensions and conditions. This digital representation can contain any number of models of the item and its behaviour, and can be used to understand what has happened, what is happening and even what will happen. The idea isn’t as new as you might think. In the film 2001: A Space Odyssey, HAL (the HAL 9000 computer) reports that an antenna control device is going to fail. There are no claims that HAL can see into the future; rather, sensor data is being sent from the device and HAL is using it as input to a model of the system, which has predicted the failure. The original digital twin?
To those of us engaged in the simulation community, the idea of using simulation to predict behaviour is nothing new. Many of the daily uses of simulation do just that in some form or other: will this design change improve the performance and efficiency of the engine, will the wind turbine survive undamaged in a once-in-a-hundred-years gale, will the bridge collapse if the amount of traffic is doubled, will the new paddle design enable the pharmaceutical to be sufficiently mixed in less time, and so on. The digital twin goes a step further, though: by using continuously updated, measured data from the field, its predictions are specific to the individual unit and more directly relevant.
One idea is that real-time simulation is needed to run a digital twin. Although simulation speeds have increased, often dramatically, with new hardware developments and new algorithms, few of us could claim our simulations happen in real time. It isn’t uncommon, though, to use reduced-order or surrogate models, based on results from more thorough simulations, to quickly search a performance envelope or identify where to focus further simulation effort.
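As a flavour of how this works in practice, here is a minimal sketch of the surrogate idea: a Gaussian process emulator (via scikit-learn) fitted to a handful of “expensive” high-fidelity results and then used to sweep an operating envelope. The response function and all parameter values are illustrative assumptions, not taken from any particular solver or vendor tool.

```python
# A minimal surrogate-model sketch: fit a cheap emulator to a handful of
# "expensive" high-fidelity results, then sweep the performance envelope.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def high_fidelity_sim(load):
    """Stand-in for an expensive solver run (hypothetical response curve)."""
    return np.sin(3.0 * load) * np.exp(-0.5 * load) + 0.1 * load

# Sparse design of experiments: only a few full simulations are affordable.
train_loads = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
train_resp = high_fidelity_sim(train_loads).ravel()

# Gaussian process surrogate: fast to evaluate and reports its own uncertainty.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(train_loads, train_resp)

# Sweep the whole envelope in milliseconds rather than CPU-hours per point.
sweep = np.linspace(0.0, 3.0, 200).reshape(-1, 1)
mean, std = gp.predict(sweep, return_std=True)

# High predictive uncertainty flags where to focus further high-fidelity runs.
print(f"Largest surrogate uncertainty at load = {sweep[np.argmax(std)][0]:.2f}")
```

A useful by-product of this choice of surrogate is that its own predictive uncertainty indicates where additional full simulations would add the most value.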
The key difference between a digital replica and a digital twin is the flow of data, which should be continuously shared between the digital twin and the physical entity it represents. The digital thread is the communication framework that enables this information to flow through the product lifecycle, crossing the boundaries between departments and disciplines. As highlighted in the cross-working-group discussions, managing this data and ensuring it is usefully accessible is a key enabler. Here an effective Simulation Process and Data Management (SPDM) system can be a real game changer: the controlled documentation of the process the analyst followed, the data used, and the decisions and assumptions made can then be used to readily generate simulations for a digital twin.
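To make the data flow concrete, the sketch below shows the bare bones of a twin ingesting field measurements and running a cheap embedded model against them. Every name here is a hypothetical illustration; in practice the readings would arrive over an IoT platform and the models would be drawn from an SPDM-managed library.

```python
# A bare-bones sketch of the twin/asset data loop. All names (the asset,
# the readings, the margin check) are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Holds measured state and a cheap embedded model for one physical asset."""
    asset_id: str
    temperature: float = 20.0            # latest measured value, deg C
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin from a field measurement on the digital thread."""
        self.temperature = reading["temperature"]
        self.history.append(reading)

    def margin(self, limit: float = 85.0) -> float:
        """Embedded model: remaining margin to an operating limit, in deg C."""
        return limit - self.temperature

twin = DigitalTwin(asset_id="pump-017")
for reading in [{"temperature": 64.0}, {"temperature": 71.5}, {"temperature": 79.0}]:
    twin.ingest(reading)                 # real, not nominal, conditions
    if twin.margin() < 10.0:
        print(f"{twin.asset_id}: margin {twin.margin():.1f} degC - schedule inspection")
```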
For many people, a futuristic world wouldn’t be complete without robots and Artificial Intelligence (AI). As engineers, we’re all familiar with the extensive use of robots in industry and in our homes. Maybe they don’t have the quirky personalities portrayed in films, but would you really want a vacuum cleaner with a GPP (Genuine People Personality), and could you imagine if Alexa were modelled on Marvin, the Paranoid Android? But where do these ideas impinge on our simulation universe? The members of the Manufacturing Process Simulation working group would of course point to Industry 4.0 and digital manufacturing, where simulation technologies are certainly used. Beyond manufacturing, many of our simulation software tools now have semi-intelligent interfaces that guide users through the process and, in some cases, check inputs and outputs. This built-in intelligence becomes more evident with the greater democratisation of simulation to users whose expertise lies in other fields.
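As a toy illustration of that kind of built-in checking, the sketch below applies a couple of sanity rules to CFD inputs before a run is submitted. The rules and thresholds are invented for the example and are far simpler than anything a real tool would apply.

```python
# A toy illustration of built-in input checking; the rules and thresholds
# are invented for this sketch.
def check_cfd_inputs(density, viscosity, velocity, length, y_plus_target):
    """Return a list of warnings about a proposed CFD run."""
    warnings = []
    if density <= 0.0 or viscosity <= 0.0:
        warnings.append("Material properties must be positive.")
        return warnings
    reynolds = density * velocity * length / viscosity
    if reynolds > 4000.0 and y_plus_target > 1.0:
        warnings.append(
            f"Re = {reynolds:.0f} suggests turbulent flow; a y+ target of "
            f"{y_plus_target:g} is too coarse if a low-Reynolds-number "
            "wall treatment is intended.")
    return warnings

# Water at 2 m/s over a 50 mm feature, with a wall-function-style y+ target.
for warning in check_cfd_inputs(density=998.0, viscosity=1.0e-3,
                                velocity=2.0, length=0.05, y_plus_target=30.0):
    print("CHECK:", warning)
```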
Computers are particularly good with large amounts of data, handling it much faster than human beings can. This enables the intelligent combination of physical measurements with results from simulations, taking into account the different contributors to uncertainty in each and providing greater insight into product behaviour. Alternatively, when insufficient data is available, surrogate models can be used to fill the gaps in knowledge. Building such models for non-trivial design spaces or operating envelopes is another area where AI and machine learning are being used. Could they also be an approach to tackling the perennial challenge of turbulence?
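As a simple illustration of combining the two sources, the sketch below fuses a simulation prediction with a sensor reading by inverse-variance weighting, a scalar Bayesian (Kalman-style) update. The values are invented for the example.

```python
# A minimal sketch of measurement/simulation fusion: a scalar Bayesian
# (Kalman-style) update weighting each source by its uncertainty.
def fuse(sim_value, sim_std, meas_value, meas_std):
    """Inverse-variance weighted estimate and its (reduced) uncertainty."""
    w_sim = 1.0 / sim_std**2
    w_meas = 1.0 / meas_std**2
    fused = (w_sim * sim_value + w_meas * meas_value) / (w_sim + w_meas)
    fused_std = (w_sim + w_meas) ** -0.5
    return fused, fused_std

# Simulation predicts 312 K +/- 8 K; a sensor on the unit reads 305 K +/- 3 K.
value, std = fuse(312.0, 8.0, 305.0, 3.0)
print(f"Fused estimate: {value:.1f} K +/- {std:.1f} K")  # ~305.9 K +/- 2.8 K
```

Note how the fused estimate sits closer to the more trustworthy source and carries a smaller uncertainty than either input alone.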
Althea de Souza, NAFEMS Computational Fluid Dynamics Chair