When Fiction is Relevant to Tech
Given the future we are headed towards, with its AI, self-driving cars, and drones (and who knows what other forms of “smart” software?), governments the world over worry about the security implications. And yes, governments can look outside the bureaucratic box, as seen in Bruce Schneier’s description of one such US initiative a decade back:
“The Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?”
Schneier, though, sees a problem with consulting sci-fi writers:
“More imagination leads to more movie-plot threats -- which contributes to overall fear and overestimation of the risks. And that doesn't help keep us safe at all.”
“Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.”
Schneier has a point: the most easily imagined threats do tend to take over the public imagination disproportionately. On the other hand, the “modern hex” blog I wrote describes a scenario that sounded all too possible in the future. And that was from a sci-fi book.
And sometimes, unlike the “modern hex”, the attack doesn’t involve hacking an existing system. It just involves feeding wrong information to the sensors of the system, as this scary attack on driverless cars already shows:
“Attackers successfully attack a driverless car system -- Renault Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do not directly operate cars) -- by following them with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot's sensors.”
Garbage in, garbage out. Or in this case, garbage in, coffins out.
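To see why such a short burst is enough, here is a toy sketch of the timing involved. All the numbers and the pipeline are illustrative assumptions for the sake of the argument, not details of the reported attack: a camera sampling at an assumed 30 frames per second will capture a 100 ms flash in several consecutive frames, and a naive per-frame pipeline reacts to the very first one.

```python
# Toy model of the spoofing window: illustrative assumptions only,
# not figures or code from the actual attack.
CAMERA_FPS = 30   # assumed frame rate of the car's camera
FLASH_MS = 100    # duration of the projected fake sign, per the quote

# How many consecutive frames contain the fake sign?
frames_exposed = FLASH_MS / 1000 * CAMERA_FPS
print(f"Fake sign visible in ~{frames_exposed:.0f} consecutive frames")

def naive_pipeline(frames):
    """React to the first frame in which a 'sign' is detected."""
    for i, frame in enumerate(frames):
        if frame == "fake_sign":
            return f"act on sign at frame {i}"
    return "no action"

# Hypothetical frame stream: normal road, a 3-frame (~100 ms) burst, normal road.
frames = ["road"] * 10 + ["fake_sign"] * 3 + ["road"] * 10
print(naive_pipeline(frames))
```

The point of the sketch is only that a per-frame system has no notion of "too brief to be real": a handful of frames is all it takes, whereas a human driver integrating over a longer window may never consciously register the flash.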
Or perhaps we’ve already doomed ourselves to the next level of cat-and-mouse warfare where it’s not humans v machines, but machines v machines, as Max Tegmark writes in Life 3.0:
“Better AI systems can also be used to find new vulnerabilities and perform more sophisticated hacks.”
Maybe fiction is relevant to tech. After all, doesn’t all this sound like the scientific version of the Red Queen Effect, from Through the Looking Glass?
“Now, here, you see,” said the Red Queen, “it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”