In August 2021, Wired magazine published an article about a study which found that a neural network trained to read x-ray scans was also predicting patients’ racial identity, a non-biological category. The most interesting, and concerning, thing about this discovery was that no one could explain why the algorithm was able to do this, not even the people who trained it. The article was brought up by Ankit Patel during a panel discussion at the NAFEMS Americas Virtual Event, ‘The Challenge of Incorporating AI/Machine Learning Explainability into Engineering Simulation’, featuring Mahmood Tabaddor, Vladimir Balabanov, and Peter Chow.
The study in the Wired article is not an isolated case; Cornell University also published a study that made similar findings - AI predicting race where no cues were thought to be present. How can this be explained? Is it a question of legacy data, a case of ‘garbage in = garbage out’? Or is there something else going on?
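One way to probe questions like this, at least in principle, is to test whether a simple model can recover the protected attribute from the same data; if it can, the cues are present even though nobody put them there deliberately. The sketch below illustrates the idea on purely synthetic data, with scikit-learn standing in for whatever tooling a real study would use, and a deliberately injected signal playing the role of the hidden cue.

```python
# Minimal sketch (synthetic data): can a simple classifier recover a
# protected attribute from the same features a model is trained on?
# A score well above chance means the information is in the data,
# whether or not anyone intended it to be.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_samples, n_features = 1000, 20
X = rng.normal(size=(n_samples, n_features))     # stand-in for image-derived features
protected = rng.integers(0, 2, size=n_samples)   # hypothetical group label

# Deliberately leak a weak signal into one feature to mimic hidden cues.
X[:, 0] += 0.8 * protected

probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, X, protected, cv=5, scoring="roc_auc")
print(f"Probe ROC-AUC: {scores.mean():.2f}")     # well above 0.5 indicates leakage
```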
Considering the AI revolution currently underway, the implications of such biases cannot be overstated. During the webinar, Peter touches briefly on a British Medical Journal article on the use of AI to accelerate Covid-19 drug discovery and vaccine development that ‘highlighted design biasedness in ethics elements towards certain parts of society.’ He suggests that there is a ‘need to identify and explain where that came from. Is it the data? Legacy data or historic data contains lots of biasedness, so we need to be careful.’
The issue is not confined to the medical industry. As Vladimir points out, ‘In engineering, it might seem less important than it is in medicine, but the fact remains that, whether it’s building machines, airplanes, or cars, eventually, the product that is being built is dealing with people’s lives, and therefore any mistakes in there are costly.’
AI is the future; there is no question about that. But what kind of future is it going to be? A bewildering one that feeds into the kind of imagined nightmare scenarios that have long been the stuff of science fiction, where humans are at the mercy of an AI that they must, at best, learn to understand and, at worst, outwit?
In 1967, Arthur Koestler published an interesting book titled The Ghost in the Machine. It is largely about human evolution and individual and societal behaviour; it is also very much of its time, and some of the author’s suggestions and opinions would certainly raise an eyebrow or two now!
However, a particular phrase sticks in the mind: reculer pour mieux sauter, which translates to something like ‘drawing back in order to make a better leap’ or ‘pulling back to jump better.’ Is this how we avoid ending up in a Koestler ‘blind alley’ of unexpected and unwanted outcomes brought on by our rapid development of AI/ML?
Is now the time to pause and look at how we do things and work on developing Explainable AI?
"It’s a matter of trust – if you can understand the process and explain it, you can trust the outcome."
Or would that just be getting in the way of progress? Vladimir gives an example from the world of materials engineering, pointing out that when it comes to detecting flaws during the manufacture of composite materials, AI can identify flaws that are hard for human engineers to pick up. Another example of the impressive results that AI/ML is producing is given in this Guardian newspaper article about an AI tool that is able to predict the likelihood of tumour regrowth in cancer patients.
These are just a couple of examples of the exciting developments happening in AI/ML at the moment, so why worry if the results are so good? Mahmood draws a parallel between physics-based modelling and AI/ML that gets to the heart of why this matters: ‘Physics-based modelling and simulations follow a history that is similar to what machine learning is going through; for instance, on trustworthiness, we have verification & validation, which is about the process, not just the outcome.’
It’s a matter of trust – if you can understand the process and explain it, you can trust the outcome.
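To make ‘explaining the process’ a little more concrete, one widely used, model-agnostic technique is permutation feature importance: shuffle one input at a time and see how much the model’s accuracy degrades. The sketch below uses scikit-learn on synthetic data; the feature names and the surrogate model are illustrative assumptions, not anything prescribed by the panel.

```python
# Minimal sketch of permutation feature importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical simulation inputs and a response they partly determine.
load = rng.uniform(1, 10, n)
thickness = rng.uniform(0.5, 2, n)
noise = rng.normal(size=n)
y = 3.0 * load / thickness + 0.1 * noise

X = np.column_stack([load, thickness, noise])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["load", "thickness", "noise"], result.importances_mean):
    print(f"{name:>10s} importance: {imp:.3f}")
```

On this toy problem one would expect ‘load’ and ‘thickness’ to dominate while ‘noise’ contributes very little, which is exactly the kind of sanity check that builds confidence in the process rather than just the outcome.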
As in other industries, there is an understandable focus on the democratization of AI/ML, and this is already very much underway, as Ankit shares: ‘I have some high school students in 10th grade who I mentor who are training neural networks, and it's very exciting.’
If AI is the future, it arguably makes sense to put it into the hands of future users. Ankit does go on to say, however, that as wonderful as it is that the barrier to entry into the world of AI/ML has been lowered, it is important that there be tools that allow the user to understand the hidden biases, tools which are yet to be developed.
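That said, some simple checks are already possible with today’s libraries, for example comparing a model’s error across subgroups of the population it serves. The following is a minimal sketch on synthetic data with a hypothetical group label, and is not a substitute for the kind of integrated tooling the panel has in mind.

```python
# Minimal sketch (synthetic data): compare a model's error across subgroups.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)              # hypothetical subgroup label
# Make the target noisier for one group, mimicking under-represented data.
y = X @ np.array([1.0, 0.5, 0.0, -0.3, 0.2]) + rng.normal(scale=1 + group, size=n)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# A markedly worse error for one group is a prompt to ask why: is the data
# thinner or noisier there, or is the model itself systematically off?
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: MAE = {mean_absolute_error(y_te[mask], pred[mask]):.2f}")
```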
Perhaps the AI/ML train has already left the station and it’s time for everyone to just jump on board as best they can. It need not be a journey into the unknown, however; perhaps there are lessons from other industries that the AI/ML community can draw on as it grapples with the challenges ahead.
Click here and sign in/sign up to listen to the whole conversation.