Do We Need New Laws and Regulations for AI?
Does the advent of AI mean we need new laws and regulations? Reading two of Rahul Matthan's posts made me realize these are entirely different questions. The same answer need not apply to both!
Take the laws aspect first. Consider fake news, he says. AI can generate fake content, both audio and video. Does this mean we need new laws? Surprisingly, no. We already have laws that cover this (impersonating others is a crime; so is harming someone's reputation). Or consider how AI can be used to trigger communal tensions. That too is already a crime, covered under the law against using electronic communications to promote disharmony and enmity. Matthan says there is a reason why we can find existing laws for most scenarios:
“Pretty much any harm that you think is exacerbated by AI—be it misleading advertising, election interference, forgery or bias—is covered by existing provisions of law. In almost every instance, since these laws have been drafted so broadly as to cover a wide range of circumstances, it will make little difference if the harm is committed using a technology that did not exist at the time when the law relevant to the particular case was enacted.”
Simply put, laws are framed to declare which actions are crimes. Laws are agnostic to the mechanism or technology used to commit those crimes, so new technology that creates new ways to commit crimes is, more often than not, already covered. Sure, we will need some new laws, but not as many as some people think.
Next, take the regulations question. Do we need new regulations because of AI? On this, there are two considerations. One, AI is probabilistic and non-deterministic. Its decisions and answers, by definition, can never be 100% right. Or consistent. But our regulations (the world over, not just in India) were framed for a binary world: this is allowed, that isn't. Define such rules, and everyone has to follow them. That's deterministic. Compliance is expected 100% of the time. Therefore, argues Matthan, the philosophy of regulation needs to be revisited now, not just individual regulations. Regulatory frameworks need to factor in the non-deterministic nature of AI. Compliance cannot be 100%. What then? Well, let the odd mistake happen. But if the same error repeats again and again, then demand that the developers fix it. Of course, depending on the matter at hand, the number of mistakes that can be tolerated will need to vary.
Two, we need to keep in mind that AI can bring a lot of benefits. Make the regulations too tight and restrictive, and we risk losing out on those benefits. We need regulations that allow some room for error, weigh the costs against the benefits, and take different approaches for different matters.
Personally, I feel Asia will handle AI better than the West. Not because we are smarter, but because Asians are philosophically and culturally open to shades of grey; the yin and the yang can coexist. And because our countries are less developed, the benefits of AI will outweigh the risks.