Machine Learning Anecdotes


Bill Gates once said:
“When you use a computer, you can’t make fuzzy statements. You make only precise statements.”
But in the Age of Machine Learning, wherein systems learn on their own, the outcomes can be highly unexpected. Sure, the underlying instructions are still precise, not fuzzy, but what systems learn (or mis-learn) makes for interesting reading.

It can be dangerous. Once we let machines learn on their own, it becomes necessary to explicitly tell them what’s off limits, writes Tom Simonite:
“Even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-limits. Teach a learning algorithm to fish, and it might just drain the lake.”
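This kind of shortcut-finding, often called specification gaming, is easy to reproduce in miniature. Here is a toy sketch of my own (not from Simonite’s article): we ask an optimizer for a program that “sorts” a list, but the fitness test only checks that the output is in non-decreasing order. A degenerate shortcut, returning an empty list, scores just as well as a real sort.

```python
import random

def fitness(candidate, trials=20):
    """Award 1 point per trial whose output is in non-decreasing order."""
    score = 0
    for _ in range(trials):
        data = [random.randint(0, 99) for _ in range(10)]
        out = candidate(data)
        if all(out[i] <= out[i + 1] for i in range(len(out) - 1)):
            score += 1
    return score

# Candidate "solutions" the optimizer can pick between.
candidates = {
    "actually sort": lambda xs: sorted(xs),
    "return input unchanged": lambda xs: xs,
    "return empty list": lambda xs: [],   # the shortcut: trivially "ordered"
}

for name, fn in candidates.items():
    print(f"{name}: fitness = {fitness(fn)}/20")
# "actually sort" and "return empty list" both score a perfect 20,
# because the fitness test never checks that the output contains the input.
```

The real-world anecdotes below are the same pattern at larger scale: the objective measures a proxy for what we want, and the learner optimizes the proxy.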

Machines that learn on their own can be devious, or even cheat. When researchers wanted a bot “to score big in the Atari game Qbert”, here’s what it did:
“Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points.”

Or they may come up with solutions that work only in one very specific setup:
“Goldilocks Electronics: Software evolved circuits to interpret electrical signals, but the design only worked at the temperature of the lab where the study took place.”

And they can trick us into thinking the problem was solved!
“Optical Illusion: Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball.”

Mind-blowing, amusing, deceitful, dangerous… take your pick. All kinds of solutions can be devised. Just what you’d expect from humans…
