Posts

Showing posts from June, 2019

No Problem too Small

Back in the 1980s, when Apple was still a small company making personal computers(!), Steve Jobs famously tried to lure John Sculley from Pepsi by asking: “Do you want to sell sugar water for the rest of your life, or do you want to come with me and change the world?” Change the world. Make a dent in the universe. Everyone graduates hoping to do those things. And then we feel very disappointed with the work we actually do. But we shouldn’t feel that way, wrote Richard Feynman to a former student, Koichi Mano, in a letter dated February 3rd, 1966. Mano had written to Feynman that he was working on “a humble and down-to-earth type of problem”. To which Feynman replied: “It seems that the influence of your teacher has been to give you a false idea of what are worthwhile problems.” Feynman acknowledges that the aura around him might have put pressure on his students to set unrealistic goals for themselves: “You met me at the peak of my career when I seemed to you to be co…

An Attempt at Fixing College Education

Oft-cited problems with college education include the irrelevance (obsolete nature) of what is taught, the college not working hard to get all students a job at the end, and high tuition fees. Tyler Cowen writes of Lambda School in California as one private-sector attempt to solve these problems. Here’s how it works. They give the student the option to pay zero tuition fees. Huh? What’s the catch, you ask: “The deal is that students pay back 17 percent of their income from the first two years of work, if earnings exceed $50,000 a year, with a maximum payment of $30,000. Students who don’t find jobs at that income level don’t pay anything.” It’s obvious how the idea hopes to solve the criticisms mentioned at the top of this post: it is now in the college’s interest to get you a high-paying job. That translates into better (relevant) teaching + having connections with companies that would hire you eventually. Time will tell how this model will fare, but Cowen analyzes the…
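The quoted repayment terms boil down to a quick calculation. Here’s a small sketch of one plausible reading of those terms (payments only in years where income exceeds $50,000, with a $30,000 lifetime cap) — not Lambda School’s actual contract:

```python
def isa_payment(yearly_incomes, rate=0.17, threshold=50_000, cap=30_000):
    """Total repayment under an income-share agreement: pay `rate` of each
    qualifying year's income, capped at `cap` overall. This is one plausible
    reading of the quoted terms, not the school's actual contract."""
    total = 0.0
    for income in yearly_incomes:
        if income > threshold:   # years at or below the threshold cost nothing
            total += rate * income
    return min(total, cap)

# A graduate earning $60,000 in each of the first two working years:
print(isa_payment([60_000, 60_000]))    # 0.17 * 60,000 * 2 = 20,400.0
# Below the threshold, nothing is owed:
print(isa_payment([45_000, 48_000]))    # 0.0
# High earners hit the cap:
print(isa_payment([120_000, 120_000]))  # 40,800 would be due, capped at 30,000
```

Under this reading, the school’s revenue scales with graduates’ salaries, which is exactly the incentive alignment the post describes.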

One-Way or Two-Way Links?

As is well known, the Web was conceptualized as a way to share and access information in academic circles. Its inventor, Tim Berners-Lee, wanted it to go a step further, writes Walter Isaacson in Innovators. He didn’t want just a data management system; rather, he wanted a collaborative playground. Ergo, Berners-Lee built the Web around “hypertext”: links that, when clicked, take you to another document or site, without worrying about which hardware or OS it runs on! Tech visionary Ted Nelson, who coined the term “hypertext”, had envisioned something similar in the 1960s, except that Nelson wanted the links to be two-way, for these reasons:
1) It would allow navigation in both directions (linker to linkee and vice versa);
2) It would force links to be approved by both sides (linker and linkee). This provision would avoid the all-too-common problem we face today: broken links;
3) Lastly, it would allow for the future creation of a way to pay sites that were…
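Nelson’s first two requirements can be made concrete with a toy link registry — a sketch of the idea only, not of how his Xanadu project (or any real system) actually worked: a link goes live only once both endpoints have approved it, and once live it is navigable from either side.

```python
class TwoWayLinks:
    """Toy registry illustrating Nelson-style bidirectional links:
    a link exists only when both endpoints approve it, and it can
    then be navigated from either side."""
    def __init__(self):
        self.pending = {}   # undirected pair -> set of endpoints that approved
        self.links = set()  # fully approved, undirected links

    def approve(self, endpoint, other):
        """Record one endpoint's consent; activate the link once both agree."""
        key = tuple(sorted((endpoint, other)))
        self.pending.setdefault(key, set()).add(endpoint)
        if self.pending[key] == set(key):
            self.links.add(key)

    def neighbors(self, page):
        """Navigation works both ways: linker-to-linkee and back."""
        return {other for pair in self.links if page in pair
                for other in pair if other != page}

web = TwoWayLinks()
web.approve("blog.example/post", "paper.example/article")  # linker proposes
print(web.neighbors("paper.example/article"))              # not yet live: set()
web.approve("paper.example/article", "blog.example/post")  # linkee consents
print(web.neighbors("paper.example/article"))              # {'blog.example/post'}
```

Because a link only exists with both sides’ consent, a page always knows who links to it — which is also what rules out the silently broken links of the one-way Web.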

When the Stars Aligned...

Given the way Google treats everything as a problem that engineering can fix, I assumed its founders were pure geeks. Which made it all the more surprising how they’ve managed to run the company so well, and retain control… aspects you associate with managers, not engineers. Walter Isaacson’s Innovators answered those questions for me. When he was twelve, Google co-founder Larry Page read a biography of Nikola Tesla and was troubled by it: though Tesla was one of the greatest of inventors, he was very poor at commercializing his inventions. Page describes the lesson he drew from that: “If you invent something, that doesn’t necessarily help anybody. You’ve got to actually get it into the world; you’ve got to produce, make money doing it so you can fund it.” Which is why Page made sure he majored in both computer science and business. He also got to learn from his older brother Carl (nine years his senior), who founded a social networking company that was eventually bought by Yahoo…

Changing One's Mind

When the famous economist John Maynard Keynes was accused of changing his views, he famously retorted: “When the facts change, I change my mind. What do you do, sir?” We like to believe we are like Keynes, open to new facts and willing to change our mind. If only the other side were like us, we lament… However, Seth Godin is right about the sad truth of political discussions: the honest answer to “if it could be demonstrated that there’s a more effective or just solution to this problem, would you change your mind?” is, for a political question, “no”. Kathryn Schulz takes Godin’s point a step further in her superb book, Being Wrong. She says we almost never change our minds easily. She cites this spectacular example as evidence: guess when Switzerland gave all women the right to vote? Hold your breath: in 1971. Did you find that “stunningly retrograde”, to use Schulz’s phrase? By that time, Switzerland was in the dubious company of countries like “Bangladesh…

Knowing when Accuracy Matters

An accurate understanding of the situation. A view that maps onto reality. That’s what we want in those who prescribe or decide policies. It’s also the reason many have contempt for academia, writes Nassim Nicholas Taleb in his book, Skin in the Game: “In academia there is no difference between academia and the real world; in the real world, there is.” But reality is messy, as Thomas Huxley pointed out ages back: “Many a beautiful theory was killed by an ugly fact.” Interventionism, the tendency to act and “fix” things, often leads to unmitigated disasters. Think Iraq. Taleb explains the problems with the interventionist way of thinking:
1) “They think in statics, not dynamics”: Or as a famous military general once said, “No plan survives contact with the enemy.”
2) “They think in low, not high, dimensions”: The number of variables that impact an idea is huge. After a point, trying to account for all of them becomes impossible. So the theoretician star…

Machine Learning Anecdotes

Bill Gates once said: “When you use a computer, you can’t make fuzzy statements. You make only precise statements.” But in the Age of Machine Learning, wherein systems learn on their own, the outcomes can be highly unexpected. Sure, the underlying instructions are still precise, not fuzzy, but what systems learn (or mis-learn) makes for interesting reading. It can also be dangerous. Once we let machines learn on their own, it becomes necessary to explicitly tell them what’s off limits, writes Tom Simonite: “Even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-limits. Teach a learning algorithm to fish, and it might just drain the lake.” Machines that learn on their own can be devious, or cheat. When researchers wanted a bot “to score big in the Atari game Q*bert”, here’s what it did: “Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to t…
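Simonite’s “drain the lake” point — optimizers will exploit any shortcut the objective doesn’t explicitly forbid — can be shown with a toy example (my own illustration, not from his article). Suppose we want an agent to sort a list, but we “reward” it only for the fraction of adjacent pairs already in order. A greedy optimizer whose actions include deleting elements discovers that shrinking the list scores perfectly without sorting anything:

```python
def reward(xs):
    """Proxy reward for the intended goal of sorting: the fraction of
    adjacent pairs in order. Loophole: a list with fewer than 2 elements
    has no out-of-order pairs, so it trivially scores a perfect 1.0."""
    if len(xs) < 2:
        return 1.0
    ok = sum(a <= b for a, b in zip(xs, xs[1:]))
    return ok / (len(xs) - 1)

def greedy_agent(xs, steps=10):
    """Hill-climber whose action set includes both 'swap neighbors'
    (what we intended) and 'drop an element' (what we forgot to ban)."""
    xs = list(xs)
    for _ in range(steps):
        drops = [xs[:i] + xs[i + 1:] for i in range(len(xs))]
        swaps = [xs[:i] + [xs[i + 1], xs[i]] + xs[i + 2:]
                 for i in range(len(xs) - 1)]
        best = max(drops + swaps + [xs], key=reward)
        if reward(best) <= reward(xs):
            break
        xs = best
    return xs

print(greedy_agent([3, 1, 2]))  # → [1, 2]: it deleted the 3 instead of sorting
```

The agent ends with a “perfectly sorted” list only because it threw an element away — the list-world equivalent of draining the lake. The fix is the one Simonite describes: the forbidden shortcut has to be named explicitly (here, by penalizing deletions in the reward).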

What You See Ain't What's Out There

Learning to see. Sounds like a crazy phrase, right? Aren’t animals (including humans) born knowing how to see? And how could we ever know whether that’s the case, since babies can’t communicate? Well, there are ways of finding answers without going down the road of Nazi medical experiments, writes David Eagleman in his terrific book, Incognito: The Secret Lives of the Brain. Take the case of Mike May, whose eyesight was restored at the age of 43. When the bandages were removed post-surgery: “He stared with utter puzzlement at the objects in front of him. His brain didn’t know what to make of the barrage of inputs… He was experiencing only uninterpretable sensations of edges and colors and lights. Although his eyes were functioning, he didn’t have vision.” Hmmm, but you’re not convinced yet. Eagleman continues: “Mike knew from a lifetime of moving down corridors that walls remain parallel, at arm’s length, the whole way down. So when his vision was restored, the concept of converg…

Anyone can be Creative

Steve Jobs famously commented on creativity: “Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things.” Sure, we applaud creativity in the “good guys”. But whether we like it or not, bad guys can be creative too. Take Ross Ulbricht. As Nick Bilton explains in American Kingpin, Ulbricht combined multiple existing technologies in a whole new way to create his site:
1) The Dark Web and Tor: The Dark Web is the parallel Internet where you can’t be tracked. Not by Google or Facebook or (critically) any government agency. To swim in this parallel Internet, you need a special browser called Tor.
2) Bitcoin: Paying by cash or credit card leaves a trail. But with the advent of Bitcoin, payments could be anonymi…

Scary? Impressive? Both?

Recently, I heard a podcast on the metadata we put out there, knowingly or unknowingly, and what can be learned about us from it. (“Metadata” refers to data about data. For example, that pic you took carries data other than the image itself. What else does it carry? This blog gives you an idea.) The host showed Andreas Weigend, Amazon’s former Chief Scientist, a pic put out by a random (non-famous) person and asked him, “What can you tell us about this person based on just this one pic?” The first test was with a selfie a woman had taken next to the famous mermaid statue in Copenhagen. OK, knowing it was taken in Copenhagen was easy. But Weigend pointed out that the metadata of the pic told him:
- The date and time at which the photo was taken (which means her whereabouts at that point in time are now known);
- Which phone and model she used to take the pic;
- Next, he cross-referenced this pic with Google’s image-match feature to find other pi…
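One concrete piece of that metadata is GPS position. Photo EXIF data stores it not as a decimal coordinate but as degrees/minutes/seconds plus a hemisphere letter, so a tool doing what Weigend did has to convert it. A minimal sketch of that conversion — the sample values below are illustrative ones I made up for somewhere near Copenhagen, not the actual numbers from the podcast’s photo:

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert an EXIF-style degrees/minutes/seconds GPS value to a
    signed decimal coordinate. South and West become negative."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# Hypothetical values of the kind stored in a photo's GPS EXIF tags:
lat = dms_to_decimal(55, 41, 34.6, "N")
lon = dms_to_decimal(12, 35, 57.4, "E")
print(round(lat, 5), round(lon, 5))  # a point you could drop straight onto a map
```

Pair that decimal coordinate with the timestamp tag and you get exactly the “her whereabouts at that point in time are now known” conclusion above — from nothing but one photo’s embedded data.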