Helping AI Learn from its Mistakes
A while back, I wrote about how intelligence and errors go hand in hand, and therefore, AI will make mistakes. This is a scary prospect for many people: we are going to entrust and empower AI in more and more matters, yet it will inevitably make some mistakes?
Rahul Matthan describes an interesting idea for making AI “safer”: replicating the practices of the airline industry.
“It is safer to sit in a plane 10,000 metres above sea level than in a speeding car anywhere in the world. Unlike every other high-risk sector, the airline industry truly knows how to learn from failure.”
Individuals and companies learn from their mistakes. But the aviation industry is unique in this regard:
“It [the airline industry] has put in place mechanisms that not only ensure that the company involved learns and improves, but that those findings are transmitted across the industry so that everyone benefits.”
Therefore, argues Matthan:
“If AI is as dangerous as so many people claim it is, surely we should be looking to put in place a similar culture.”
In fact, one such effort, the AI Incident Database, has already been established:
“This is an initiative designed to document and share information on the failures and unintended consequences of AI systems. Its primary purpose is to collate the history of harms and near-harms that have resulted from the deployment of AI systems, so that researchers, developers, and policymakers can use them to better understand risks and develop superior safeguards.”
Matthan hopes that countries will globalize this idea, much as findings from aviation accidents are shared across the world.
He admits this would require a mindset change: transparency, especially about failures, mistakes, and near-misses, along with a systematic approach to recording and analyzing mishaps.
One certainly hopes such an idea takes root. After all, AI is here to stay and it is not going to be perfect, so it is crucial that we have information about the errors other AI systems have made, so we can keep improving things.