AI and Fear of Job Losses
As AI gets better at more and more things, it evokes fear. Of job losses. Ben Evans wrote this excellent article on the topic. On the one hand, he says:
“Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created… over time the total number of jobs doesn’t go down, and we have all become more prosperous.”
But that is just historical data. In practice, we worry:
“When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet.”
Then there’s what economists call the “Lump of Labour fallacy”:
“The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people.”
Say machines drastically reduce the cost of producing shoes. Yes, lots of shoemakers will lose their jobs. But since the new shoes cost less, people will have more money left over. That extra cash creates demand for some new product or service, which in turn creates new jobs.
Did calculators wipe out accountants? No. Once something becomes cheaper and more efficient, we often find new uses for that capability. New uses translate into new jobs.
“It also tends to mean that you change what you do. To begin with, we make the new tool fit the old way of working, but over time, we change how we work to fit the tool.”
But is this time different? There are two reasons many feel that way. Evans looks at each reason in detail.
Reason #1: It is happening so much faster than ever before.
True, but… rolling any new tech into the workplace takes time. Years, not weeks. At the workplace, a lot of the relevant knowledge is not just technical but institutional. Next, he points out that companies don’t buy technologies; they buy products.
“I don’t think a text prompt, a ‘go’ button and a black-box, general purpose text generation engine (like ChatGPT) make up a product, and product takes time.”
Or, as he snarkily puts it:
“The future takes a while, and the world outside Silicon Valley is complicated.”
Third, he rightly says that current AI is error-prone:
“People call this hallucinations, making things up, lying or bullshitting - it’s the ‘overconfident undergraduate’ problem.”
Reason #2: It looks like a general purpose technology, i.e., it impacts multiple industries (and thus jobs).
Think of Excel, he says. It is a general purpose tool – how many industries did it disrupt? The Internet and the smartphone were general purpose, and they have disrupted many industries. But are we seeing unemployment rates much higher today than, say, when the Internet got started? Secondly, yes, you could build things on top of each other, and cumulatively their effect could be huge, but that takes time. Lastly, and most critically, he says we don’t have general purpose AI… at least, not yet. Each AI today does one thing: art, answering questions, music, and so on. The real world is messy:
“You might also suggest that the idea this one magic piece of software will change everything, and override all the complexity of real people, real companies and the real economy, and can now be deployed in weeks instead of years, sounds like classic tech solutionism.”
All that is why Evans ends by saying:
“As an analyst, though, I tend to prefer Hume’s empiricism over Descartes - I can only analyse what we can know. We don’t have AGI (Artificial General Intelligence), and without that, we have only another wave of automation, and we don’t seem to have any a priori reason why this must be more or less painful than all the others.”