Posts

Showing posts from 2022

The Upper House

Britain, the US, and India – all of them have bicameral parliaments, i.e., two houses. It happened for pragmatic reasons. As democracy started to make inroads in Britain, existing lords would not accept losing all their power and privileges, while a democracy could not guarantee that they’d always have some power. Was bloodshed the only way? Britain solved the problem by creating two houses – the House of Commons, where anyone could be elected, and the House of Lords, which allowed the erstwhile lords to get direct entry and to pass on those rights to their heirs. That system of inheritance only stopped in 1999.

Since India copied the British model, we ended up with the Lok Sabha and Rajya Sabha. Except that membership of the Rajya Sabha didn’t come by inheritance – it came via the votes of the state legislatures.

In both Britain and India, the upper house has repeatedly been used as a way to “insert” people into parliament – sometimes, to enable competent people who c

"Tree of Life" is Tangled: the Web

In the last blog, we saw how “endosymbiosis” complicated the idea of the tree of life. David Quammen’s book, The Tangled Tree, continues.

Why have bacteria become resistant to antibiotics? The usual culprits are indiscriminate over-use, and the evolution of the bacteria themselves. But here’s a lesser-known point. The genes that evolved in one bacterial species to confer resistance to antibiotics seemed to become prevalent in almost all bacterial species. How could this be? Sure, it could happen by chance in one species, but how could the exact same solution pop up in other species?

Wait, it gets weirder. Did you know that some of these antibiotic-resistance genes existed even before the first antibiotics were discovered by humans? Huh? This sounds like a reversal of cause and effect! What was going on?

But before we answer those questions, let’s go to the 1920s, when Fred Griffith stumbled onto something while testing two variants of a bacterium. Type I was virulent

"Tree of Life" is Tangled: Fusion

When we think of evolution, most of us have the “tree of life” view. Life started. Mutations occurred. Most were harmful, but a few conferred some advantage, and became more widespread. Over time, the cumulative change became so large that the diverged populations could no longer mate – and that’s when we say a new species has arrived. Or, to use the tree of life analogy: the trunk of the “tree” has split into two branches. And the process repeats itself. The branches fork further, and so we eventually see so many species.

This is also the view Darwin himself had. But over time, science and technology are showing that the tree of life view isn’t entirely right. Since the “tree” view is so widespread and easy to understand, David Quammen calls his book on the topic The Tangled Tree.

Evolution moves slowly. Or so we are taught. Yet, the difference between bacteria and pretty much all complex life forms is colossal. How did that happen? Lynn Margulis noticed a key difference bet

ChatGPT, the AI that can Write Articles

ChatGPT is making a lot of news. It is AI software you can ask questions, and it will then give answers. Unlike, say, Google, it isn’t pointing you to links to other articles. Rather, as Stephen Shankland says: “It's an AI that's trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog.”

Here’s an example of a response it generated: “When I asked, ‘Is it easier to get a date by being sensitive or being tough?’ GPT responded, in part, ‘Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona.’” Not bad, right? But beware: “The answers you get may sound plausible and even authoritative, but they might well b

FIFA World Cup and my Daughter

I was surprised when my 11-year-old daughter showed interest in the ongoing FIFA World Cup. On the plus side, it meant one could watch the 8:30 p.m. match without getting into a fight with her over the TV, so I wasn’t complaining.

It turned out she wasn’t really interested in the game. She just wanted to make sure she didn’t look lost in class when her classmates talked about the matches, the teams and the players. Like so many people, she only knew of two players – yes, Messi and Ronaldo.

On Instagram, I showed her many of the endless stream of Messi’s brilliant goals and his dribbling skills to get through an entire defense. The word “G.O.A.T.” (Greatest of All Time) was a recurring theme in the comments section.

In the Morocco v Portugal match we watched together, I told her I supported Morocco, and she imperiously declared that she supported whoever would win at the end. In other words, Morocco.

Next day, she mentioned that her class teacher was very

Hot or Cold, Nexleaf is There

“Wood or cow dung cakes under mud stoves in their homes” – that is how the world’s poorest 3 billion still cook, says Vijay Mahajan in his book Digital Leapfrogs. Even though the benefits of switching to cleaner stoves are evident – both to the individuals and the environment – not much has changed in the last 30 years. Why not? The reasons include inertia, newer stoves not being designed for how poor women actually cook, limited financing options, and no repair or maintenance services.

In India, one recent attempt at addressing this problem is by Nexleaf. In addition to means like providing loans, it uses digital technology to attack the problem. The company calls it StoveTrace. A sensor attached to the stove registers (1) when the stove is in use, and (2) the temperature of the fire. This sensor connects to a device on the wall that records the data and sends it onwards to Nexleaf via good old phone lines.

Nexleaf uses this data to identify whether a stove needs r

The Road to the Theory of Evolution

As it became evident that mass extinction events had happened in the past, Christianity scrambled to explain them. Yes, they were cataclysmic events, but they were “directional and purposeful”, went the argument, writes David Quammen in his wonderful book The Tangled Tree.

In the 1800s, a geologist, Charles Lyell, published a book arguing that the processes and events that shaped the earth were erosion, deposition, and volcanic eruptions. And he added that those forces also led to extinctions. Edward Hitchcock was aghast at this view of a planet that could “exclude a Deity from its… government”. Lyell was a believer in God, not an infidel, but the risk Hitchcock saw was that the theory might drive others into godless ideas…

He was right. Well, at least in the case of one particular reader. His name was Charles Darwin, and he combined three points to form his famous theory:

1. Offspring resemble their parents – inheritance.
2. But offspring also differ slightly from

"High Conflict"

I read an interesting interview with Amanda Ripley on the topic of “high conflict”. Here’s how the interviewer, David Epstein, defines the term: “‘High conflict’ isn’t normal, healthy tension. It’s when disagreements devolve into ‘us versus them,’ zero-sum combat (i.e. politics right now).” How does one end high conflict? That is the theme of her book and this interview.

Ripley says that individuals need to break out of their identity group to break this vicious cycle. The fine print here is key. She does not mean someone on one side suddenly surrenders their core beliefs or defects to the other side. Rather, they continue to hold onto their fundamental beliefs; it’s just that they stop agreeing with the extremes to which their group has gone.

Note here: high conflict can only happen when both sides behave in extreme ways. For the situation to de-escalate, what Ripley describes needs to happen with more and more individuals on both sides.

This isn’t easy.

Absorbing Good Ideas ain't Easy

It’s easy to curse and lament the fact that new ideas don’t get accepted easily. Sure, the reason is vested interest and factionalism at times. But often, there’s a far less malicious reason for it: inertia. As Seth Godin wrote: “We stick with what we know, with what feels safe, with the status quo… (After all) the status quo is the status quo precisely because it’s good at sticking around.” Also, as venture capitalist Paul Graham wrote, there’s the inevitable asymmetry between new vs. established ideas: “When a new idea first emerges, it usually seems pretty feeble. It's a mere hatchling. Received wisdom is a full-grown eagle by comparison.”

So how do we learn to recognize new ideas worth pursuing?

For one thing, Graham says we should give weightage to who is proposing it: “Most implausible-sounding ideas are in fact bad and could be safely dismissed. But not when they're proposed by reasonable domain experts. If the person proposing the idea is rea

User Friendly #4: Present Day

For most millennials, the smartphone has made everything hassle-free. No queues, no waiting, no commuting, no cooking. So much so that, writes Cliff Kuang in User Friendly, “The real world was getting to be disappointing when compared with the frictionless ease of the virtual world.”

Apps are individualized, not shoehorned for everyone: “We all use the same containers – whether it’s apps or smartphones – but everything inside is different for each of us.” Digital ads are highly individualized too. Carpet bombing everyone with the same ads is history, thanks to how much Google and Facebook know about us. It can even feel creepy, how much your phone seems to know about you.

Which is why Kuang says: “User-friendliness wrought a world in which making things easier has morphed into making them usable without a second thought. That ease eventually morphed into making products more irresistible, even outright addictive.”

From a ‘pilot error’ mindset to ‘designer error’

Primer on Xinjiang

Xinjiang is big – it’s larger than France, Spain, and Germany combined. It lies nestled between the Karakoram range and the Tibetan plateau, writes Ananth Krishnan in The Comrades and the Mullahs. Throughout its history, the Chinese looked at it as a buffer zone, separating China proper from Central Asia. That probably explains why, though it has been part of China for centuries, Xinjiang has never truly been integrated into China. (The Muslim identity of its natives hasn’t helped either.)

Ironically, the Chinese reforms of the 80s, which allowed slightly greater ethnic autonomy (at least on paper), created conditions for the Uighur identity (the natives of Xinjiang) to solidify and set off the movement for self-determination – something they had been promised in the 1950s. China responded with a two-fold approach – a rapid attempt to integrate the Xinjiang economy with the hinterland, and tighter security control over the region. In 1997, riots broke out in Xinjiang, and the Chinese government

Rise of Techno-Nationalism

During Donald Trump’s tenure, there was a big fight between the US and China about Huawei, the Chinese telecom equipment (and smartphone) manufacturer. The West was worried that if Huawei equipment grabbed the largest share of the upcoming 5G market, then China could insert malware and spyware in telecom networks all over the world…

The West would know. As Anirudh Suri writes in The Great Tech Game, in the 19th and 20th centuries, Britain had a monopoly over the “telegraph communication network” across the world. Progressively, its monopoly over the physical infrastructure resulted in a monopoly over the raw materials needed for telegraph cables. As Britain became the #1 player in telegraph systems, it was cheaper and more economical for other countries to use British systems. With ever larger systems under its control, British expertise at laying cables and repairing these systems kept improving. It had become a circle that reinforced British dominance over telegraph systems across

How we Learn Best is so Unintuitive

Imagine a teacher who asks the class a maths question, then gently nudges the students towards the correct answer (affirmation when they are on the right track; correction when they seem to be going off the path). A variant of this approach is what David Epstein describes in Range: “‘Lemme show you, there’s a better, easier way.’ If the teacher didn’t already turn the work into using-procedures practice, well-meaning parents will.”

Next, imagine a second teacher who lets the students try and solve it on their own. No feedback in real time. Afterwards, she corrects the paper and includes notes on the right approach (if needed).

Which way do you think results in better learning? Not just for the duration of the class or course, but in the long run? It’s the let-them-struggle, let-them-fail approach that yields better learning – not just in that course/class, but in long-term retention as well!

There’s even a term for it, the “hypercorrection effect”: “Th

The AI Future: Views of Two Countries

Hollywood is full of apocalyptic movies where machines and AI take over the world – the Terminator series is one of the best-known examples. In recent times, as machine learning algorithms get better at tasks which seemed impossible just a few years back – recognizing faces and photos, transcribing and even translating spoken words in real time – many people have been voicing their concerns, from Elon Musk to Bill Gates. There’s a term for it – the “singularity”. It’s the point of no return, at which technology takes over… for good.

On the other hand, China, which is second only to the US in all matters AI, doesn’t seem worried. In fact, tech entrepreneurs in China are optimistic that AI advances will make life better. Why the difference, asks Kai-Fu Lee in his book. For one, it’s their experience with technology so far: “The Chinese government has long emphasized technological advances as key to China’s economic development… For the last forty years, Chinese pe

User Friendly #3: Examples

Earlier, I spoke of the importance of mental models and feedback in designing user-friendly products. Let’s look at a few examples from Cliff Kuang’s User Friendly.

He cites driverless cars as an example of the challenges when something is new – and how Audi went about handling them. First, it should be obvious when the car is in auto mode (mode confusion has led to many airline crashes). The Audi requires two buttons to be pressed together to transition to auto mode – this prevents accidental activation. When the car takes over, the color of various panels changes to convey the status. Second, the occupant should know what the car is going to do before it does it. Surprises don’t sit well with user experience. Hence, before changing lanes, the Audi shows a countdown timer informing you of what it is going to do next. Third, one should be able to “see” what the machine is “seeing”. Otherwise, one is nervous about what the car may be missing. On the display, the Audi shows all the cars around it. Fourth,

Shadowplay

I bought Tim Marshall’s account of the Yugoslavia war in the 90s, Shadowplay, because I don’t understand anything about the place. Or as Marshall put it: “I thought I knew my history, but actually coming to a region where everyone seemed to have a grievance and an ‘itch’ at the end of their name was confusing. Milošević, Panić, Ilić?” In case you’re wondering, this is not a popular history book. Instead, it’s a British journalist’s account of his stint in Yugoslavia during that period.

With typical British wry humour, he pointed out Europe’s surprise at the carnage that broke out after the death of Marshal Tito, who had convinced folks that “they really were Yugoslav first, Croat/Bosnian/Muslim/Serb second”. But after his death and the fall of communism, old divisions resurfaced: “To my generation it just didn’t seem possible. War was what happened far away, in places with different cultures. War did not happen in our continent because we’d left all that behind

User Friendly #2: Mental Models and Feedback

Designing for the user is easier said than done. That’s obvious. In User Friendly, Cliff Kuang points out a key element of such design: “(It is key to understand) the ways in which humans assume their environment should work, how they learn about it, how they make sense of it.” If the user can’t make a mental map or model of the product, he’ll struggle to use it. Put differently, the device should work the way the user expects it to work. If you don’t see the importance of the mental model, consider the Internet.

Those of us who started using the Internet when it first took off think of it as a set of sites with links between them. The browser was a way to navigate across sites. That’s our mental model of the Internet – as the World Wide Web (sites linked to one another).

The majority in poorer countries, though, did not get to use the Internet until the smartphone became ubiquitous. Their model of the Internet is nothing like a “web”. To most Indians, WhatsA

Digital Payment Systems - Differing Views

Why is it that India is the only country with a government-created smartphone payment system (aka UPI, upon which all mobile payment apps, from PayTM to PhonePe to BHIM to Google Pay, are built)? How come China’s private sector payment apps (WeChat and AliPay) are now so ubiquitous that cash is not even accepted in more and more places across the country? The other side of the coin – and the very curious one – is that no Western country has built any such smartphone-based digital payment system. What’s going on?

I thought the answer lay in the fact that MasterCard, VISA, and the banks of the West lobby against any such system since it would eat into their commissions. While that’s definitely part of the reason, Anirudh Suri’s book, The Great Tech Game, reminded me that there are other reasons as well.

An additional reason why countries like India and China made the move, writes Suri, is that the current banking system to move money across countries, the SWIFT system, is

The Stuxnet Story

The secret project was called “Olympic Games”. Its aim was to cripple the Iranian nuclear program “without setting off a regional war”, writes David Sanger in his super-interesting book on cyber warfare titled The Perfect Weapon. The US and Israel settled on creating malware (a computer virus) that would speed up/slow down the Iranian nuclear centrifuges, leading them to “ultimately destroy themselves”.

Being a covert operation, the US (and Israel) couldn’t claim they’d done it. So how then did this story, the malware now known as Stuxnet, break out? Well, it was rooted in the fact that Stuxnet couldn’t simply be added onto an Iranian centrifuge (obviously). It had to be spread all over the world, in the hope that it would end up entering the centrifuges via malice (an Iranian traitor) or the usual stupidity (someone carrying an infected USB into work). As with any scatter-and-pray operation, the malware thus reached all over the world. (We’ll come to why it didn’t do any damage anywhere el

User Friendly #1: A Very Brief History

Cliff Kuang’s book, User Friendly, is an interesting romp through how the design of products has evolved to make them easy to understand and use. It wasn’t always that way. Once upon a time: “(The view was) that correctly operating a machine was about finding the right person to operate it.” ‘Pilot error’ is the term that sums up that attitude – if something went wrong, it was the user’s fault.

The World Wars began to change that. Why? “The performance of men under stress bore no resemblance to that of those operating a demonstration model.” And from that emerged the idea that machines could/should be designed to “better conform to the limits of (humans’) senses and minds”.

That mindset eventually spilled over into consumer products. The driver was consumerism – businesses could see that user-friendliness could be the “elixir of sales growth”.

But it needed the “profusion of computers and electrical gadgets” for the trend of designing for the user to really take off. And now, th

A Brief History of R2P

In 1994, around 800,000 people were killed in the Rwandan genocide. In response, the UN decided that if a country couldn’t provide physical and economic security to its citizens, other countries (with some checks and balances) could intervene to restore order, writes Richard Haass in his book, The World.

This came to be called the Responsibility to Protect (R2P) doctrine. Since it was such a grey area (how much disorder justifies external intervention?), it was never put into practice – until the US and its NATO allies invoked R2P in Libya in 2011. The rest of the world soon came to see that as just a thinly veiled attempt to overthrow the government of Gaddafi. Even worse, it led to anarchy, the very thing they claimed they were trying to prevent/fix.

After the Libya episode, R2P was dead. Russia and China were now set against R2P, viewing it as a “cover for imposing political outcomes”. The West too realized that no outcome could be guaranteed; it could make a bad situation w

China and Big Data

In an earlier blog, we saw how the Chinese government was instrumental in kick-starting China’s foray into AI. But that started in 2014, so how has China made so much progress in such a short time? Given that most of the R&D around AI was done in the West – the US, UK, and Canada – how did China reach this #2 position so quickly?

It helped greatly that most AI algorithms are public knowledge. They are not trade secrets, nor protected by patents. Combine that with the fact that these AIs get better the more data they have access to. Not surprisingly, China, with its huge population, generates enormous data.

Further, unlike the West, China doesn’t care about privacy. No, this isn’t just because the government says so. Rather, most Chinese don’t mind sharing their data with Alibaba, TikTok, Baidu or WeChat either. They find the conveniences and features they get in return to be worth it.

Plus, China has learnt AI by doing, i.e., by its entrepreneurs trying