Posts

Showing posts from September, 2021

Data and Feelings

In 1954, Darrell Huff published How to Lie with Statistics. In his book on how to use, question, and understand statistical data, How to Make the World Add Up, Tim Harford agrees that Huff’s “book deserves the popularity, and the praise”, but fears Huff’s influence on people’s view of statistics may have gone too far, that Huff “made statistics seem like a stage magician’s trick”.   Today, of course, we have another problem when we encounter any stats: “It’s not that we feel every statistic is a lie, but that we feel helpless to pick out the truths. So we believe whatever we want to believe…” Harford’s book is about how we should learn to read statistical data. No, not in a dry academic way, but in ways that are very relevant to real life.   The first rule he suggests is amusing until you think a bit: when you see a stat, ask yourself how it makes you feel. Happy? Angry? Vindicated? Any strong emotion should be taken as a warning sign that you should look more closely at the

Oliver Cromwell Saga as an Infinite Game

In his book Finite and Infinite Games, James Carse says finite games have an end – and hence a winner. Infinite games, on the other hand, don’t have a clear victor: “Who, for example, won the French Revolution?”   I was reminded of that line as I was reading the Hourly History take on Oliver Cromwell. It was the 1630s, the decade of British King Charles I’s Personal Rule, when the monarch ignored Parliament and ruled as he pleased. Until he had to make concessions to defuse the growing resentment. Had Charles won?   But had he made enough concessions? The country was split down the middle on that question, a situation ripe for Civil War. And so in 1642, Charles declared war on Parliament. Cromwell supported Parliament. Though a man with no military experience, Cromwell learnt on the job. As a quick, decisive win looked unlikely, Cromwell switched modes, also becoming politically influential. The armies under the control of different factions of Parliament were unified to form the New Model Army.

Dumb Kids

It was that time of the year – the mid-year exams. As my 10 yo daughter drove us up a wall, I shuddered. Because it’s not just her: all the kids seem to be the same – they don’t know anything, but they’re all capable of arguing endlessly.   I remember this scene in the Amazon Prime spy series called The Family Man , where the teenaged daughter announces one day that she won’t go to school. I loved her mom’s response. Sure, don’t go. In fact, stop going to school altogether. It’s clear you are only going to end up as a maid anyway, so might as well start practicing at home from now itself. Oh wait, you don’t know any of that work either, do you? Let’s put you on an internship under Kanta bai…   At work, we find that the kids graduating out of college don’t know anything. Perhaps I shouldn’t curse those guys at office so much. The current generation doesn’t even look like it will get through college, assuming they can even get into college in the first place...   Which is w

Let There Be...

I am still reading Gary Bass’s book on the Bangladesh war of 1971 called The Blood Telegram (nah, here “Blood” doesn’t refer to the gory details; rather, it’s the name of an American consulate official in Dhaka at the time). I found one anecdote very amusing, so I thought I’d write about it without finishing the book.   As the (West) Pakistani army went about brutally putting down the Bengalis of East Pakistan (present-day Bangladesh), it was tempting for India to step in and break Pakistan. Partly because, hey, Pakistan was the enemy; partly because the refugees pouring in were stretching India’s meagre resources at the time; and of course, the optics were excellent: India stepping in to prevent a genocide, while the US and the USSR looked on and did nothing. Indira Gandhi called in General Sam Manekshaw to discuss the option of stepping in.   Indira expressed her problem with the refugee crisis, and asked Manekshaw to do something. Do what, he asked. “Go into East Pakistan”,

"Coupled"

Suicide. Most of us assume that all that is relevant is the state of mind of the person who takes that extreme step. But, asks Malcolm Gladwell in Talking to Strangers, is the availability of the means to suicide a key factor? Are the two (state of mind + availability of means) “coupled”, he asks.   Psychiatrists and social workers did not believe in the coupling theory. Ronald Clarke puts it perfectly: “(Most people feel) it was sort of insulting to think you could deal with it (suicide) by simply making it harder to commit suicide.”   In England, writes Gladwell, in the decades when town gas was delivered to homes for cooking etc., one of its constituents was carbon monoxide. Guess what was a popular way to commit suicide? Stick your head in the oven, seal it as best you could with clothes etc., and inhale away. Later, when town gas was replaced with a different gas with no carbon monoxide, did suicides drop? If there’s no coupling, it shouldn’t matter: people would just

Why Do Software Jobs Have So Few Females?

Today, when most software developers (aka coders) are men, it’s hard to imagine an era when most coders were women. But that was how it started, writes Clive Thompson in his terrific book, Coders: “(In the 60’s), the sexy, high glory part of the job was regarded as building the hardware.” And so things were until the personal computer (PC) was invented in the ’80s. With the PC, boys got exposure to computers early whereas girls didn’t (yes, we are talking of the West). Guess what was the first thing said (boy) teenagers wrote software for? Video games. And who plays video games? Boys, rarely girls: “It began to make coding culture… even more male.”   Next, as offices started to see the possible uses of PCs, they found themselves in a position where hardly anyone knew anything, and everyone had to be trained on the job. Guess who was at an advantage at this point? Those who had some exposure to PCs already. In other words, those boys who’d been creating games and in general

Livewired Brain #6: Future of Technology?

After this series of blogs on the livewired brain, based on David Eagleman’s Livewired, you may be wondering whether/when our technologies will become that way. Will they become capable of changing themselves based on their “experience” of the world?   Actually, our software algorithms have already become like the brain, at least the ones we call Machine Learning. Voice recognition (Alexa), facial recognition (how your phone unlocks itself), and anything that feels like AI falls in that bucket. But the hardware doesn’t change itself. Human engineers and designers still have to make deliberate changes to the hardware.   Let’s say we do get to a point where even the hardware can re-do itself. Sure, it’d be great in many ways, for obvious reasons. Then again: “Note that a future of self-configuring devices will change what it means to fix them.” Huh? Eagleman points out that we already face that today! While “construction workers or car mechanics are rarely surpr
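To make the “software that changes itself based on experience” point concrete, here is a minimal sketch of my own (illustrative, not from Eagleman’s book): a perceptron, one of the oldest Machine Learning ideas, whose weights – its “wiring” – are rewritten by the examples it sees, here learning the logical AND function:

```python
# Minimal illustrative sketch: a perceptron that adjusts its own
# parameters ("wiring") from experience, learning logical AND.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights start knowing nothing
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # "Experience" (the error) changes the parameters themselves
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND truth table: output is 1 only when both inputs are 1
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

No programmer ever types in the rule for AND; the numbers in `w` and `b` settle into it through repeated exposure – which is the (loose) sense in which such software resembles a livewired brain, while the chip underneath stays fixed.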

News Feed at Launch, and Today

Clive Thompson’s book titled Coders is about the people who write software (aka coders): “Programmers are thus among the most quietly influential people on the planet… The decisions they make guide our behavior. When they make something easy, we do a lot more of it. If they make it hard or impossible to do something, we do less of it.” In other industries, there would have been some adult supervision. Not so in software, with its startups by kids still at college. Which is why his book delves at length into the amount of influence they increasingly wield on society. And the problems that arise because coders are overwhelmingly engineers. And guys. And white. And young without enough (any?) life experience…   We take Facebook’s News Feed for granted today (it’s the name for how info shows up on your Facebook page, one post below another etc.). But in 2006, News Feed did not exist. You had to click on each of your friends’ name/page to see if they had posted anything new. Yikes!

The Problem of Afghanistan

As the Taliban have become the de facto rulers of Afghanistan, most countries are not sure what the “right” course of action is. Recognize them officially, and hope to have some form of communication channels and influence? Refuse to recognize them? But that didn’t prevent the Kandahar hijacking and prisoners-for-hostages swap, or the planes from flying into the Twin Towers, did it?   C. Raja Mohan states a reality of life when he says: “That victories on the battlefield have political consequences is one of the fundamental features of international politics. Governments have no option but to come to terms, now or later, with the victor.”   He cites the speed at which UN resolutions change their wording as an example of this reality – a la when Napoleon escaped from Elba and marched towards Paris in 1815. Here is a rough translation of the progression of headlines of the age: “The Cannibal has left his den”. “The Mon

Livewired Brain #5: Output Control

In earlier blogs, we’ve seen sensory substitution and sensory addition. But, as David Eagleman writes in Livewired: “That’s only the input half of the story.” Let’s next look into the output-related reorganization of the brain.   When a limb is paralyzed, the motor system parts of the brain reorganize: “The motor areas optimize themselves to drive the available machinery.” In fact, brains are not “predefined for particular bodies”. Instead, brains “adapt themselves to move, interact, and succeed”: “(Watch a human baby and notice how she is) learning how her motor output corresponds to the sensory feedback she receives.” It doesn’t stop with babies, of course. We continue the “same learning method to attach extensions to our bodies”. That’s how we learn to ride a bicycle. Or a skateboard. Or to surf the waves: “The specifics of the devices’ weight, joints, movements and controllers – everything you can do with them – work their way into your brain circuitry.”

"Holy Fool"

Every time we hear of a spy who operated successfully for far too long, or a fraud who swindled people for ages, we wonder, “How could everyone have missed all the signs?” But, as Malcolm Gladwell writes in Talking to Strangers: “In real life… lies (told by people we know and interact with very often) are rare.” This makes sense, because if we thought folks around us were liars or crooks, we wouldn’t be with them, would we? Ironically then, this blinds us to the odd cheat/liar in our group. No wonder, says Gladwell, that for people we know, we “default to truth”, i.e., we accept (or come up with) explanations for stuff that is suspicious. It takes a lot of counter-evidence before we change our opinion.   Gladwell then describes something from Russia: “In Russian folklore, there is an archetype called yurodivy, or the ‘Holy Fool’. The Holy Fool is a social misfit – eccentric, off-putting, sometimes even crazy – who nonetheless has access to the truth. ‘Nonetheless’ is act

"Hello, World!"

What is code (a software program), asks Clive Thompson in Coders: “Code is speech, speech a human utters to silicon, which makes the machine come to life and do our will.” Take the first program taught in almost every computer language: it begins with that one incantation, “Hello, World!”, some variant of this line: print(“Hello, World!”)   Is code just words that a machine understands? No, wrote Fred Brooks: “Unlike the poet’s words, (code) is real in the sense that it moves and works, producing visible output… It prints results, draws pictures, produces sounds, moves arms… One types the correct incantation on the keyboard, and a display screen comes to life.” Which is why, Thompson writes, “the phrase “Hello, World!” is so laden with metaphoric freight. It summons to mind all the religious traditions where a god utters creation into existence: “In the beginning there was the Word”.” And “Hello, World!” also has its “creepy side”: “It reminds you of the unexpected
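For what it’s worth, here is that canonical first program in full, in Python (other languages use their own variant of the same one-line incantation):

```python
# The traditional first program: utter a greeting to the world.
greeting = "Hello, World!"
print(greeting)  # → Hello, World!
```

Two lines, yet – as Thompson says – the speech “makes the machine come to life”: the text appears on your screen the moment the incantation is typed and run.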