AI and Regulations

Recently, there was news that Italy had banned an AI chatbot named Replika. Rahul Matthan’s blog identifies the reason for that – the EU’s GDPR regulation (simplistically put, GDPR is an EU regulation on data privacy). So what are the base principles of GDPR? And what are the problems with those principles when it comes to AI?

Number 1 on that list is “consent” – seeking explicit permission before using data. An exception is allowed if said data collection is “necessary” for some “legitimate purpose”. The way AIs work, they just scour the Net for info and stitch it together in unimaginable ways to derive conclusions from it. By definition, the company that created the AI cannot know what purpose the AI might put that info to. GDPR was framed in simpler times, when companies could be expected to know what they would do with the data. Not anymore.

Number 2 is that data collection be restricted to what is relevant to the task at hand, and that the data be retained only for as long as necessary. The way AIs work, they need huge data sets stored (practically) forever to be able to derive patterns and long-term trends. The contradiction with GDPR is evident.

The third problem is… let me just explain it, since I can’t find one word to summarize it. If a form asks you for some data, you can instinctively sense whether the ask is irrelevant to the task at hand. But with chatbots, such questions blend into the “conversation”. You answer without pausing, the way you would in a, er, conversation with a fellow human. And even the company that owns the chatbot doesn’t know how the AI might use this seemingly irrelevant data to draw a meaningful connection.

Matthan therefore asks the right question:

“What if the problem does not lie with the technology. What if what needs to change are the laws that are being used to regulate it? Just because generative AI does not meet the requirements set out under the GDPR does that mean we should prohibit that technology from being used. Or should we, instead, try and redesign our regulatory frameworks so that they can enable these new technologies to function better?”

A very valid point indeed. Whether the EU will update its regulations, only time will tell. But I can see that the major entities today have seriously different views on AI. The EU, having no AI companies or even Internet giants, tends to think only of the privacy of its citizens. The US, on the other hand, tries to juggle the contradictory pulls of privacy for citizens v/s corporate needs. India and China see AI, like everything else based on the Internet and smartphones, as fulfilling needs that are not currently being met in their countries; they feel it does more good than harm, and are thus inclined to let AI move forward with few restrictions.

Therefore, I don’t see any global consensus on the topic emerging.
