Don’t Trust the Odds (Ratio)

The title of this post is inspired by Scott Alexander’s Never Tell Me The Odds (Ratio). The goal of this post is to explain the meanings of (commonly-heard) metrics that indicate the “odds” of something (either directly or indirectly).

Just because these terms are commonly-heard does not mean they are commonly-understood. The odds are that most people don’t understand the numbers related to the odds – and misinterpret how big the odds really are.

Let’s take an example, borrowed from Scott Alexander:

Suppose you run a drug trial. In your control group of 1000 patients, 300 get better on their own. In your experimental group of 1000 patients (where you give them the drug), 600 get better.

The relative risk of recovery from the drug = probability of recovering from the drug in the experimental group ÷ probability of recovering on one’s own in the control group = (600 / 1000) ÷ (300 / 1000) = 60% ÷ 30% = 2.0.

The odds of recovering from the drug in the experimental group = probability of recovering ÷ probability of not recovering = 600 ÷ (1000 – 600) = 3/2. Likewise, the odds of recovering on your own in the control group = 300 ÷ (1000 – 300) = 3/7.

The odds ratio = odds of recovering from the drug ÷ odds of recovering on one’s own = (3/2) ÷ (3/7) = 3.5.
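If you want to play with these definitions yourself, here is a minimal Python sketch (my own illustration; the function names are mine and not from any of the sources quoted) that reproduces the numbers above from the raw counts:

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """Probability of the event in group A divided by the probability in group B."""
    return (events_a / total_a) / (events_b / total_b)

def odds(events, total):
    """Odds = probability of the event divided by probability of no event."""
    p = events / total
    return p / (1 - p)

def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds of the event in group A divided by the odds in group B."""
    return odds(events_a, total_a) / odds(events_b, total_b)

# Drug trial example: 600/1000 recover with the drug, 300/1000 recover on their own
print(relative_risk(600, 1000, 300, 1000))  # ~2.0
print(odds(600, 1000), odds(300, 1000))     # ~1.5 (i.e., 3/2) and ~0.43 (i.e., 3/7)
print(odds_ratio(600, 1000, 300, 1000))     # ~3.5
```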

The Cohen’s d effect size takes the difference in the averages of the two groups (x1 – x2) and divides it by the pooled standard deviation (s):

d = (x1 – x2) / s
s = √((s1² + s2²) / 2)

(Formulas adapted from this post on effect size.) Cohen’s d for the example above = (0.6 – 0.3) / 0.474341 ≈ 0.6. I have used this standard deviation calculator and this Cohen’s d calculator. Note that Scott Alexander’s result is a little different, at 0.7.
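If you would rather compute Cohen’s d yourself than rely on an online calculator, here is a rough sketch (my own, treating each patient’s outcome as a 0/1 variable and pooling the two standard deviations, which is one common convention for equal group sizes):

```python
import math

def cohens_d_for_proportions(p1, p2):
    """Cohen's d for two proportions of equal-sized groups, treating each
    outcome as a 0/1 variable and pooling the two standard deviations."""
    var1 = p1 * (1 - p1)              # variance of a 0/1 outcome with mean p1
    var2 = p2 * (1 - p2)
    s_pooled = math.sqrt((var1 + var2) / 2)
    return (p1 - p2) / s_pooled

# 60% recover with the drug vs. 30% recover on their own
print(cohens_d_for_proportions(0.6, 0.3))  # ~0.63, i.e., roughly the 0.6 above
```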

To recap, for the example above, we got the following results:

  • Relative risk (drug vs. self-recovery) = 2.0
  • Odds ratio (drug vs. self-recovery) = 3.5
  • Cohen’s d effect size = 0.6

The numbers span a wide range, from 0.6 to 3.5 – and depending on which one is reported, and in what fashion, the reader’s perception of how effective the drug is (vs. self-recovery) could be biased up (or down). As Scott Alexander puts it:

The moral of the story is that (to me) odds ratios sound bigger than they really are, and effect sizes sound smaller, so you should be really careful comparing two studies that report their results differently.

May the odds be forever in your favor! 😉

[Book Review] Humble Pi: When Math Goes Wrong in the Real World

My ratings of the book
Likelihood to recommend: 5/5
Educational value: 5/5
Engaging plot: 5/5
Clear & concise writing: 5/5
Suitable for: everyone

Humble Pi is a witty & funny book that could let anyone (re)discover their love for mathematics! Overall, Matt Parker’s book is an appetizing combo of mathematics and comedy – if you want to learn mathematics while having tons of fun, this is one of the best books to start with, regardless of your background or fluency in maths.

Beyond making maths digestibly fun (and funnily digestible), another highlight of the book is how to think about thinking. In other words, the philosophy of thinking – such as how to be rational and how to prevent errors.

I particularly enjoyed the “Swiss cheese” model for thinking about errors: think of each error as a hole in a slice of cheese. Horrible sh*t (disaster) happens when somehow the holes line up and the error falls through the slices of cheese and lands in the pot of catastrophe. More often than not, a catastrophic consequence is the accumulation of a few errors – seemingly minor errors if we look at them alone – which, when added together, bring explosive effects. What this means is that instead of focusing too much on achieving zero errors (which is desirable yet almost always impossible), it is more practical to focus on improving error detection that spots an error early – patch the first hole in the first slice of cheese, so that it does not trickle down into the remaining slices.

P.S. I’ve mentioned the Swiss cheese model in a post about premature optimization and other topics in software engineering.

I would also highly recommend checking out Matt Parker’s YouTube videos: his talks at Google and the Numberphile channel, which features bite-sized videos by various mathematicians on everyday maths and has 3M+ subscribers to date (April 12th, 2020).

Matt Parker – Talks at Google: “Things to Make and Do in the Fourth Dimension”
Matt Parker – Talks at Google: “The Greatest Maths Mistakes”

Below I quote some parts of the book that I personally find insightful:

1/ We are used to going from theory to application, though sometimes the reverse happens: the application comes first, and we discover the underlying theory afterwards. We should not let the joy of discovering the application overshadow the need to fully understand the theory behind it – otherwise, using the tool without really understanding its risks could come back to bite us.

There is a common theme in human progress. We make things beyond what we understand, as we always have done. Steam engines worked before we had a theory of thermodynamics; vaccines were developed before we knew how the immune system works; aircraft continue to fly to this day, despite the many gaps in our understanding of aerodynamics. When theory lags behind application, there will always be mathematical surprises lying in wait. The important thing is that we learn from these inevitable mistakes and don’t repeat them.

2/ Don’t underestimate how little attention the public & institutions could pay to math – and what is most frustrating is not the mistakes themselves (which could be absurdly hilarious), but the lack of respect for mathematical facts or a pursuit of truth.

Matt Parker wrote to the UK government after he discovered that the geometric shape of the football was wrongly painted on signs in the UK (unlike the white hexagons, the black shapes on the ball’s surface should be pentagons instead of hexagons). However, the official response from the UK Department for Transport was: “Changing the design to show accurate geometry is not appropriate in this context.” Matt Parker clearly did not think too highly of the response he got:

They (the Department of Transport) rejected my request. With a rather dismissive response! They claimed that (1) the correct geometry would be so subtle that it would ‘not be taken in by most drivers’ and (2) it would be so distracting to drivers that it would ‘increase the risk of an incident.’ And I felt that they hadn’t even read the petition properly. Despite my asking for only new signs to be changed, they ended their reply with: ‘Additionally, the public funding required to change every football sign nationally would place an unreasonable financial burden on local authorities.’ So the signs remain incorrect. But at least now I have a framed letter from the UK government saying that they don’t think accurate math is important and they don’t believe street signs should have to follow the laws of geometry.

3/ While (most rational) people agree that 1 + 1 = 2, people don’t always agree on how the same number should be interpreted. A number ceases to be objective when subjective narratives are at play, hence we should not let our guard down and think an argument is “logical” just because numbers are used.

“It seems that, if the Trump administration couldn’t change the ACA (Affordable Care Act) itself, it was going to try to change how it was interpreted. It’s like trying to adhere to the conditions of a court order by changing your dog’s name to Probation officer.”

“[T]he Trump administration wanted to allow insurance companies to charge their older customers up to 3.49 times as much as younger people, using the argument that 3.49 rounds down to 3. […] They might as well have crossed out thirteen of the twenty-seven constitutional amendments and claimed nothing had changed, provided you rounded to the nearest whole constitution.”

“If there are enough numbers being rounded a tiny amount, even though each individual rounding may be too small to notice, there can be a sizeable cumulative result. The term ‘salami slicing’ is used to refer to a system by which something is gradually removed one tiny unnoticeable piece at a time. Each slice taken off a salami sausage can be so thin that the salami does not look any different, so, repeated enough times, a decent chunk of sausage can be subtly sequestered.”
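To make the salami-slicing arithmetic concrete, here is a toy sketch (entirely my own, not from the book): shave the sub-cent fraction off each of a million transactions and see what the slices add up to.

```python
# Toy "salami slicing": truncate each transaction to whole cents and pocket
# the shaved-off fraction. No single slice is noticeable on its own.
import random

random.seed(0)
pocketed = 0.0
for _ in range(1_000_000):
    amount = random.uniform(1, 100)          # a transaction amount in dollars
    truncated = int(amount * 100) / 100      # keep only whole cents
    pocketed += amount - truncated           # the invisible slice, under a cent

print(f"Skimmed from a million transactions: ${pocketed:,.2f}")
# Each slice averages about half a cent, so the total is roughly $5,000.
```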

4/ Precision and accuracy are two concepts with nuanced differences, and it is important not to mix the two. Precision is “the level of detail given,” while accuracy is “how true something is.”

5/ Beware of the word “average.” Whenever you hear someone talk about averages, remind yourself of this commentary on the census from the Australian Bureau of Statistics: “While the description of the average Australian may sound quite typical, the fact that no one meets all these criteria shows that the notion of the ‘average’ masks considerable (and growing) diversity in Australia.” I would also add that the notion of the “average” masks how the average person is likely to overrate the concept of averages.

After the 2011 census, the Australian Bureau of Statistics published who the average Australian was: a thirty-seven year old woman who, among other things, ‘lives with her husband and two children…in a house with three bedrooms and two cars in a suburb of one of Australia’s capital cities.’ And then they discovered that she does not exist. They scoured all the records and no one person matched all the criteria to be truly average.

6/ Correlation does not mean causation. Just because two things have a high chance of happening at the same time does not mean one caused the other. For example, I don’t think the number of math PhDs has any causal relationship with how much cheese people eat.

For the record, in the US the number of people awarded math PhDs also has an above 90 percent correlation over ten years or more with: uranium stored at nuclear-power plants, money spent on pets, total revenue generated by skiing facilities, and per capita consumption of cheese.

7/ Finally, this is one of my favorite quotes of the book on what mathematics is: “Mathematicians aren’t people who find math easy; they’re people who enjoy how hard it is.”

I hope this book will rekindle your love for mathematics – or help you find it if you have never fallen in love with it in the first place.

“Premature Optimization” and the Pandora Box of Debates that Followed

All Evil Started with a Quote on All Evil

Donald E. Knuth, professor of computer science at Stanford University, popularized this phrase in the programming community: “Premature optimization is the root of all evil.” Little did he know, however, that this statement about “all evil” would open a Pandora’s Box – fierce / passionate / headbanging / crazy debates all the way from optimization to the meaning of engineering.

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.”

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.”


Donald Knuth, “Structured Programming with go to Statements” (1974)
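Knuth’s caveat, to optimize “only after that code has been identified,” is in practice an argument for measuring before tuning. As a rough, hypothetical illustration (mine, not Knuth’s), here is how you might locate the critical few percent with Python’s built-in profiler before rewriting anything:

```python
# Profile first, optimize second: find the hot spots before touching the code.
import cProfile
import pstats

def hot_path():
    # Deliberately heavy: this is the "critical 3%" worth optimizing.
    return sum(i * i for i in range(1_000_000))

def cold_path():
    # Cheap: optimizing this would be premature.
    return [i for i in range(1_000)]

def main():
    for _ in range(10):
        hot_path()
        cold_path()

profiler = cProfile.Profile()
profiler.runcall(main)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
# The top of this report, not intuition, tells you where optimization pays off.
```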

Warning: you are about to peek inside the Pandora’s Box…which may lead to either an insightful soul-searching journey or a mental hurricane or somewhere in between.

Still with me? Then let’s dive in! 🙂

Premature Optimization vs. Technical Debt

A (somewhat) relevant concept to premature optimization is technical debt. Although most in the software engineering world would agree on the definitions of either term, folks are less aligned when it comes to how these two terms relate to each other – are they synonyms or opposites?

Technical debt refers to the “cost of additional rework caused by choosing an easy solution now instead of using a better approach that would take longer” (Wikipedia). In layman’s terms, technical debt means that if you are lazy now, you will have to make up for it later. Just as if you let dirty laundry pile up, you will have to clean it sooner or later. And sooner is better than later, which is better than never – that is what people really mean when they remind you to “avoid technical debt”.

“Technical debt” as a phrase is looked upon favorably by programmers who believe chivalry isn’t dead. For them, “please avoid technical debt” is a civil alternative to “stop being lazy and get the $%@!#$$# up and do something.” So you could say “technical debt” existed in peace and had its supporters until it was put next to “premature optimization,” and things got interesting.

This post asks the interesting question of whether premature optimization is “the opposite concept of technical debt.” What’s more interesting than the question itself are the comments that followed – I highly recommend a read.

Some believe that “premature optimization” is generally a worse offense than “technical debt”, because at least technical debt saves you time now (although you need to pay it back later); the argument is that technical debt wastes less time than premature optimization on a net basis:

“There is no optimization included in this concept (of premature optimization). Optimization is doing something to improve value delivery. Eliminating waste is one form of optimization. This premature “optimization” introduces waste now (time is spent while not adding value). And if that isn’t bad enough, it introduces future waste as well.

“To me it (premature optimization) seems even worse than technical debt. Both (premature optimization and technical debt) result in future waste, but with technical debt you at least don’t waste a whole lot of time now.”


Comment by Henri van der Horst

However, is it really true that premature optimization only wastes time and creates no benefit at all? Randall Hyde argues that premature optimization is not as bad as it sounds – on the contrary, programmers could gain experience and the code as a whole does not suffer a lot:

“One thing nice about optimization is that if you optimize a section of code that doesn’t need it, you’ve not done much damage to the application. Other than possible maintenance issues, all you’ve really lost is some time optimizing code that doesn’t need it. Though it might seem that you’ve lost some valuable time unnecessarily optimizing code, don’t forget that you have gained valuable experience so you are less likely to make that same mistake in a future project.”

“The Fallacy of Premature Optimization”, Randall Hyde

To put it simply, Hyde considers premature optimization to be “tuition” paid for learning how to code better. If we go with Hyde’s argument, the logical implication would be that technical debt is worse than premature optimization – the former teaches you nothing (other than that being lazy in the moment has its consequences down the road, which is something you have chosen to conveniently forget the moment you decide to go lazy and let the technical debt accumulate).

Some say premature optimization and technical debt, instead of being opposite concepts, overlap in meaning:

“You suggest premature optimization as an opposite, but I would say that premature optimization is technical debt. At least in a software context, optimization usually comes at the expense of readability and maintainability of the underlying code. If you didn’t need the optimization to support the use of the system under design, all you accomplished is making the code more difficult to maintain. This difficulty in maintenance is likely to cause new features to take longer to design, develop, test, and deploy, which is a key indicator of technical debt.”

Comment by Thomas Owens

To rephrase, Owens’ comment above argues that premature optimization creates problems that need to be remedied later, and I agree with him on that. What I disagree with, however, is that premature optimization creates “technical debt.” If we use the definition from Wikipedia above, technical debt refers specifically to problems caused by being lazy now (going for an easy solution or not doing anything), rather than by being inappropriately / unwisely diligent (i.e., premature optimization). Owens has broadened the definition of “technical debt” in his comment to refer to code with any kind of problem – regardless of whether the cause was laziness (technical debt) or wrongly-guided diligence (premature optimization). And the preceding sentence is a nice way to summarize where I stand on this:

I believe both “premature optimization” and “technical debt” create problematic code that needs to be fixed later – the key difference is in the root cause of the problem. Premature optimization is caused by misguided diligence, which creates very low ROI at best or 0% ROI (100% wasted effort) at worst; technical debt is caused by mere laziness. While technical debt reinforces the old lesson that one should not be lazy, premature optimization shows that too much diligence can be a bad thing.

Writing great code does not mean writing perfect code at every single step, and not every single line is worth investing the same amount of time & energy in. Premature optimization is the result of incorrectly optimizing your time – which is in limited supply – and so it fails to maximize the quality of your code output.

That was a mouthful, yet just the start of the interesting debates surrounding premature optimization. We then slide further down the slippery slope to talk about the slippery slope itself.

The Premature Slippery Slope and the “Swiss Cheese” Model

Slippery slope means “a relatively small first step leads to a chain of related events culminating in some significant effect. (Wikipedia)” It has been more than four decades since Donald Knuth first popularized “premature optimization” in his 1974 paper – and four decades is a time long enough for his statement to fall down a premature slippery slope. 🙂

Some programmers invoke avoiding “premature optimization” as an excuse for being lazy or thoughtless. In this post, Joe Duffy expresses frustration when programmers use Knuth’s statement “to defend all sorts of choices, ranging from poor architectures, to gratuitous memory allocations, to inappropriate choices of data structures and algorithms” – in other words, laziness. It sounds like “premature optimization is the root of all evil” has slipped down the slope to “optimization is the root of all evil” to “optimization is evil”.

Check out this humorous yet witty take by Randall Hyde on the various manifestations of the “slippery slope” gone too far: “The Fallacy of Premature Optimization”. My favorite part is the sarcastic observations he makes about programmers – some are a bit exaggerated and obviously don’t apply to every programmer, yet they are food for thought, and I find myself guilty of slipping into similar errors in non-programming fields:

“Observation #3: Software engineers use the Pareto Principle (also known as the “80/20 rule”) to delay concern about software performance, mistakenly believing that performance problems will be easy to solve at the end of the software development cycle. This belief ignores the fact that the 20 percent of the code that takes 80 percent of the execution time is probably spread throughout the source code and is not easy to surgically modify. Further, the Pareto Principle doesn’t apply that well if the code is not well-written to begin with (i.e., a few bad algorithms, or implementations of those algorithms, in a few locations can completely skew the performance of the system).”

“Observation #4: Many software engineers have come to believe that by the time their application ships CPU performance will have increased to cover any coding sloppiness on their part. While this was true during the 1990s, the phenomenal increases in CPU performance seen during that decade have not been matched during the current decade.”

“Observation #6: Many software engineers have been led to believe that their time is more valuable than CPU time; therefore, wasting CPU cycles in order to reduce development time is always a win. They’ve forgotten, however, that the application users’ time is more valuable than their time.”


“The Fallacy of Premature Optimization”, Randall Hyde

The central point that Hyde is trying to get across is that when some programmers claim to be “minimizing premature optimization”, what they are actually doing is “minimizing the time spent on thoughtful design” – which, as a consequence, is a betrayal of the engineering ethos to maximize performance. There is no excuse for not investing the time to think through the systematic performance of the system as a whole – this is what is expected of any good software developer, per Charles Cook (unfortunately, the link to Cook’s blog article is no longer valid):

“It’s usually not worth spending a lot of time micro-optimizing code before it’s obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems. An inexperienced developer will not bother, misguidedly believing that a bit of fine tuning at a later stage will fix any problems.”

Charles Cook

Rico Mariani makes a similar point when he says: “Never give up your performance accidentally.”

Now, it’s time for the simple yet clever rule: Never give up your performance accidentally. That sums it up for me, really. I have used other axioms in the past — rules such as making sure you measure, making sure you understand your application and how it interacts with your system, and making sure you’re giving your customers a “good deal.” Those are all still good notions, but it all comes down to this: Most factors will tend to inexorably erode your performance, and only the greatest vigilance will keep those forces under control.

If you fail to be diligent, you can expect all manner of accidents to reduce your system’s performance to mediocre at best, and more likely to something downright unusable. If you fail to use discipline, you can expect to spend hours or days tuning aspects of your system that don’t really need tuning, and you will finally conclude that all such efforts are ‘premature optimizations’ and are indeed ‘the root of all evil.’ You must avoid both of these extremes, and instead walk the straight and narrow between them.

Rico Mariani (Microsoft, Performance Architect, 2004)

Rico’s principle of “never give up your performance” – whether accidentally or consciously – is applicable to all walks of life, not just programming. It is particularly important when we are dealing with complex systems:

What are good values for performance work? Well, to start with you need to know a basic truth. Software is in many ways like other complex systems: There’s a tendency toward increasing entropy. It isn’t anyone’s fault; it’s just the statistical reality. There are just so many more messed-up states that the system could be in than there are good states that you’re bound to head for one of the messed-up ones. Making sure that doesn’t happen is what great engineering is all about.

Rico Mariani (Microsoft, Performance Architect, 2004)

There you go: great engineering is about great performance indeed, but great engineering is not about guaranteeing perfect performance – in fact, that is downright impossible. Great engineering is about preventing, or minimizing, the chance of ending up with performance so messed up that it brings about catastrophic consequences. Great engineering is not about delivering a perfect show 100% of the time – it is about making sure that a messed-up sh*t-show happens 0% (or close to 0%) of the time. Therefore, a truly great engineer will steer away from wasteful “premature optimization” while never forgetting or giving up on the goal of performance optimization. In fact, avoiding premature optimization is itself a tactic to optimize performance by investing time where it matters most for the output.

On the point about preventing a sh*t-show from happening, I came across the Swiss cheese model of accident management, as explained by Matt Parker in his book Humble Pi: When Math Goes Wrong in the Real World:

[The] Swiss cheese model of disasters, which looks at the whole system, instead of focusing on individual people. The Swiss cheese model looks at how ‘defenses, barriers, and safeguards may be penetrated by an accident trajectory.’ This accident trajectory imagines accidents as similar to a barrage of stones being thrown at a system: only the ones that make it all the way through result in a disaster. Within the system are multiple layers, each with its own defenses and safeguards to slow mistakes. But each layer has holes. They are like slices of Swiss cheese.”

“I love this view of accident management, because it acknowledges that people will inevitably make mistakes a certain percentage of the time. The pragmatic approach is to acknowledge this and build a system robust enough to filter mistakes out before they become disasters. When a disaster occurs, it is a system-wide failure, and it may not be fair to find a single human to take the blame.”


Humble Pi: When Math Goes Wrong in the Real World (Matt Parker)

The Swiss cheese model is very easy to visualize: imagine putting slices of Swiss cheese on top of each other, each slice with holes in it representing problems. Catastrophic events only happen if the holes in the slices happen to line up, so that an error can pass through them in a straight line. As Matt Parker points out, when a bunch of mistakes “conveniently” line up and result in a gigantic mistake, it is usually indicative of some systemic issues. This is not to say that individuals or specific actions are not at fault – but one should not focus on a tree and forget about the forest, i.e., the system as a whole. There is often a lot to be done at the system level, e.g., improved processes or better tools.
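For readers who like to see the arithmetic, here is a tiny sketch (my own toy model, not taken from the book) of why stacking several imperfect layers of defense makes a straight-through failure rare:

```python
# Toy Swiss cheese model: each layer of defense independently misses a given
# error with some probability (a "hole"). A disaster needs every layer to miss.
p_hole = 0.1  # assume each layer fails to catch a given error 10% of the time

for layers in range(1, 6):
    p_disaster = p_hole ** layers
    print(f"{layers} layer(s): chance an error slips all the way through = {p_disaster:.5f}")

# One layer lets 1 in 10 errors through; five imperfect layers let through
# only 1 in 100,000. That is the case for defense in depth.
```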

Two final remarks:

(1) I am not a programmer and I don’t code myself, so yes, I am commenting on an area of trade in which I have little experience. That being said, just as you don’t have to be a professional mathematician to apply mathematical thinking in your daily life, I believe you don’t have to be a full-time software engineer to appreciate computational thinking. At the end of the day, although concepts like “premature optimization” and “technical debt” originated in the context of software, they can be applied to, and remain relevant in, all walks of life;

(2) I highly recommend Matt Parker’s highly entertaining & educational book on mathematics: Humble Pi: When Math Goes Wrong in the Real World. If you love mathematics, there is no reason not to read it. If you hate mathematics, the biggest reason to read it is it will make you fall in love with math. Mathematics is a truly beautiful language and way of thinking.

See you later, world.

“Uncommon Sense” About COVID-19: Data & Opinions Worth Knowing (live updating)

Read-Me-First: Much is being posted about the coronavirus on a daily, or even hourly, basis – sometimes a bit too much, with fake news / data / pictures coupled with conspiracy theories, accusations of racism, and doomsday predictions. This blog post – live updated from time to time – aims to filter the signal from the noise: data & opinions on COVID-19 that (a) I think are worth knowing & reflecting on, and (b) are inevitably colored by my own biases & POV. Do your own research, form your own (informed) opinions, and stay safe!

Table of Contents (updated April 17, 2020)

  • [Set the Stage] Other than masks, stock up some humor too
  • [Science] Getting familiar with COVID-19 symptoms (vs. cold, flu, allergy)
  • [Science] Understanding how fast the virus spreads and incubates
  • [Protective Measures] Response of Individuals: Stock-Up vs. Laissez-Faire
  • [Protective Measures] Response of Governments: Lock-Down vs. Herd Immunity
  • [Thinking Smart] What a conspiracy theory teaches us about critical thinking
  • [Thinking Smart] Veterans merely make better guesses – nobody knows for sure
  • [Thinking Smart] “Aha” moments from working from home
  • [Thinking Smart] Defining information
  • [Thinking Smart] What went wrong with media coverage? A failure, but not of prediction

[Set the Stage] Other than masks, stock up some humor too

If you have not yet heard about the “coronavirus disease 2019” (COVID-19) – which is aptly named with a “19” suffix because we were obviously certain it would spread into 2020 and achieve monopoly over this year’s headlines (joking) – you must be living in a cave.

Rest assured, even if that were the case, I would not mock you. On the contrary, I would envy you, because living in a cave like Robinson Crusoe these days is probably one of the safest ways to protect yourself from the coronavirus. 🙂 Moreover, if you were able to get a Wi-Fi connection in your cave, you could post on social media with glorious hashtags like #not-lonely-when-am-alone, #perfect-social-distancing, #responsible-self-quarantine, etc.

Just joking (again). We all need some positive energy in times like this. Some wise folk once said: “If you can’t laugh about it, you lose.” I, for one, am a big fan of John Oliver’s funny, sarcastic & witty take on the recent coronavirus news on “Last Week Tonight” (HBO, March 1, 2020):

Let’s not forget to keep some happy smiley faces up, even as COVID-19 has been declared a pandemic by the WHO and the stock market + oil market + crypto market + [insert your past-favorite / now-most-hated market] are trapped by NOVGRA-20, a shorthand for “novel gravitational force 2020”. Can the Einstein-of-our-times come up with a new theory of relativity to explain what the h*** is going on?

Since searching for the next Einstein-of-our-times is too challenging, I opted for an easier option – searching on Google for what is interesting to know about COVID-19. Here is your curated feed of “uncommon sense” about the coronavirus: not-your-typical headlines, yet probably worthy of attention.

[Science] Getting familiar with COVID-19 symptoms (vs. cold, flu, allergy)

To start with, let us first familiarize ourselves with what the virus does. As Peter Attia, an MD with training in immunology, said in a podcast, the coronavirus mainly attacks the type II pneumocyte cells that make surfactant. Surfactant lets the air sacs of the lungs overcome surface tension and hence open successfully. In other words, without sufficient surfactant, individuals could suffer respiratory collapse. Dr. Attia recommends that all infected persons with difficulty breathing seek medical attention ASAP – regardless of their age.

COVID-19 could be tricky to diagnose because of overlapping symptoms with the cold, the flu, and allergies. This article from Business Insider (March 2020) gives a good comparison of the symptoms across the four conditions. The key point is that the three most common symptoms of COVID-19 are: fever + dry cough + shortness of breath.

[Table: symptoms of COVID-19 compared with those of the common cold, the flu, and allergies (Business Insider, March 2020)]

The good news is: if you are sneezing and have a runny nose, it is very unlikely that you have COVID-19 – the flu or allergies are probably to blame.

The important footnote is: while nausea and diarrhea are rare for COVID-19, these symptoms could still be “early cues of infection (of COVID-19)” and thus should not be taken lightly.

[Science] Understanding how fast the virus spreads and incubates

When it comes to studying the spread of the virus, a key concept to know is the viral coefficient, denoted by “R”. “R” stands for the number of people that each infected person goes on to infect. You may have also heard of R0, which stands for the viral coefficient in a community that has no natural immunity against the virus and takes no special protective measures. Getting a fair estimate of R (and R0) could help us assess how viral the virus is and how effective interventions are:

“In the long term, the only way that this pandemic can actually end is for the R value of the virus to plunge below 1, consistently, in every part of the world, for a prolonged period of time.”

“A framework for thinking through what’s next for COVID-19”
(March 11, 2020)

I recommend reading “An in-depth look at four academic models of the Wuhan coronavirus outbreak’s spread” (January, 2020) for a concise summary of what scientists say (or infer) about the spread of the virus. The key takeaway on the virality factor is this:

“[T]here is still not an academic consensus on the basic replication number of the Wuhan coronavirus. Models range from finding an R0 of 1.4 after assuming a latent period of 14 days, to finding one of 4.0 after assuming only 4 days.”

“An in-depth look at four academic models of the Wuhan coronavirus outbreak’s spread”
(January, 2020)
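To get a feel for why R matters so much, here is a small sketch (my own illustration, not from either article): grow an outbreak generation by generation for a few different values of R.

```python
# Generation-by-generation spread: each infected person infects R others
# in the next generation of infection.
def cumulative_cases(r, generations, seed_cases=1):
    total, current = seed_cases, seed_cases
    for _ in range(generations):
        current = current * r          # new infections in this generation
        total += current
    return total

for r in (0.9, 1.4, 2.5, 4.0):
    print(f"R = {r}: ~{cumulative_cases(r, 10):,.0f} cumulative cases after 10 generations")

# With R below 1 the outbreak fizzles out (a handful of cases); with R well
# above 1 it explodes, which is why pushing R below 1 is how a pandemic ends.
```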

Next, let us look at the incubation period of the coronavirus. In this March 2020 study led by Johns Hopkins University, the researchers find that the median incubation period is around 5 days:

“There were 181 confirmed cases with identifiable exposure and symptom onset windows to estimate the incubation period of COVID-19. The median incubation period was estimated to be 5.1 days (95% CI, 4.5 to 5.8 days), and 97.5% of those who develop symptoms will do so within 11.5 days (CI, 8.2 to 15.6 days) of infection. These estimates imply that, under conservative assumptions, 101 out of every 10,000 cases (99th percentile, 482) will develop symptoms after 14 days of active monitoring or quarantine.”

The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application
(March 10, 2020)
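As a back-of-the-envelope check (my own, under the common assumption that incubation periods are roughly log-normally distributed), you can recover a tail estimate in the same ballpark from just the median and the 97.5th percentile quoted above:

```python
# Assume a log-normal incubation period with median 5.1 days and 97.5th
# percentile 11.5 days, then estimate how many cases per 10,000 would only
# develop symptoms after a 14-day quarantine.
import math
from statistics import NormalDist

median_days, p975_days = 5.1, 11.5
mu = math.log(median_days)                                        # location parameter
sigma = (math.log(p975_days) - mu) / NormalDist().inv_cdf(0.975)  # scale parameter

p_beyond_14 = 1 - NormalDist(mu, sigma).cdf(math.log(14))
print(f"~{p_beyond_14 * 10_000:.0f} cases per 10,000 develop symptoms after day 14")
# Prints a figure on the order of the study's 101 per 10,000 (the study's
# estimate is higher because it deliberately uses conservative assumptions).
```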

The Johns Hopkins research suggests that 14 days is a reasonable length for quarantine – cases that have longer incubation periods are possible yet unlikely outliers:

“Based on our analysis of publicly available data, the current recommendation of 14 days for active monitoring or quarantine is reasonable, although with that period, some cases would be missed over the long-term.”

5.1 days incubation period for COVID-19
(March 9, 2020)

Despite progress in understanding the viral coefficient (R) and the length of the incubation period, we are still not sure about when someone is contagious – in particular, whether a person is contagious during the incubation period. The website of the US Centers for Disease Control and Prevention (accessed on March 16, 2020) reads: “[D]etection of viral RNA does not necessarily mean that infectious virus is present…it is not yet known what role asymptomatic infection plays in transmission. Similarly, the role of pre-symptomatic transmission (infection detection during the incubation period prior to illness onset) is unknown.”

That being said, Bill Hanage, an associate professor of epidemiology at Harvard, believes the answer “is an unambiguous yes” when it comes to whether “a person can transmit before they are aware they might be infectious.” Though please do note that Hanage’s statement is yet to be backed by peer-reviewed research.

[Protective Measures] Response of Individuals: Stock-Up vs. Laissez-Faire

On the question of how to respond to the COVID-19 outbreak, individuals’ responses fall on two ends of a spectrum: go full force or do (almost) nothing. We see a juxtaposition of two contrasting camps: (1) Camp-Stock-Up, rushing to supermarkets and stocking up on years of toilet paper, vs. (2) Camp-Laissez-Faire, wandering the streets without masks – assuming they have or are able to get masks – either a/ thinking optimistically that COVID-19 is not that dangerous and everyone is making a fuss, or b/ thinking pessimistically that all prevention measures are useless because they would get infected sooner or later.

Where should we pick our stance between the two extremes? Below is a stance that I find to be reasonable, which thinks about social distancing the way we think about car safety: “not as a single binary decision to go Full Turtle and shelter in place, but as a collection of little risk-reducing behaviors that add up to a big win“:

“To really get your mind around how this works, think about all the little things you do to manage risk when driving a car: wear a seat-belt, use a turn signal, drive the speed limit, don’t drink or text and drive, have your brakes checked regularly, etc. Each of these things helps a little, and when done together they all add up to a dramatically safer driving experience — both for you and those you share the road with — than if you didn’t do any of them at all.”

Even if you can’t go full lockdown right now, you can still #FlattenTheCurve
(March 13, 2020)

Another key point this author makes is that “every new day is riskier than the previous one” – at least in the short term – as the number of infections increases and we are not yet fully equipped to deal with the disease. What this entails is that it makes sense for each individual to progressively level up their self-protection every single day, at least until (a) we see reliable signs that the spread of the virus has been contained and / or (b) we have developed a solid cure and / or vaccine.

Most of us are probably working from home, but for those who are working in the office or in public places, consider this piece of advice:

Take on progressively more social and reputational risk in order to reduce your physical risk: e.g., If you’re working a retail counter tomorrow and an obviously ill customer approaches you, discretely excuse yourself for the restroom at the risk of having that person try to get you fired. You might want to start using sick days next week. Get bold and creative with how to distance yourself in-the-moment, and be more willing to offend people as this progresses.”

Even if you can’t go full lockdown right now, you can still #FlattenTheCurve
(March 13, 2020)

“Be more willing to offend people.” If you are working in the office and a colleague is coughing, ask him / her to work from home or see a doctor. Do not be afraid to offend your colleague, because it is a responsible thing to do for you, your colleague, and everyone else in the office. Plus, if you ask in a nice way and explain your rationale, most people in your colleague’s shoes should be able to understand.

[Protective Measures] Response of Governments: Lock-Down vs. Herd Immunity

The response of governments around the world could be broadly put into 2 types:

  1. Camp Eradicate: represented by China, this group takes a resolute stance including city-wide lock-downs and quarantine at the cost of disrupting economic activities;
  2. Camp Herd Immunity: represented by the UK (which has since modified its stance to be more hard-line), this group focuses on protecting the more vulnerable people. Instead of trying to eradicate the virus, this camp would merely slow down the spread of the disease so as to “flatten the curve,” i.e., a slower spread of the disease could prevent over-burdening the healthcare system.

Scott Adams asks an interesting question about whether these two camps could co-exist in harmony. As long as Camp #2 Herd Immunity exists, does this mean Camp #1 Eradicate cannot possibly exist or sustain its success?

The UK’s proposal of “herd immunity” has come under criticism. Some argue that “herd immunity” is a by-product of preventive measures, and should not be mistaken for an end in itself:

“[T]alk of ‘herd immunity as the aim’ is totally wide of the mark. Having large numbers infected isn’t the aim here, even if it may be the outcome. A lot of modellers around the world are working flat out to find best way to minimise impact on population and healthcare. A side effect may end up being herd immunity, but this is merely a consequence of a very tough option – albeit one that may help prevent another outbreak.”

Adam Kucharski, London School of Hygiene and Tropical Medicine

[Thinking Smart] What a conspiracy theory teaches us about critical thinking

A Reddit post from February 2020 went viral with the title “Quadratic Coronavirus Epidemic Growth Model seems like the best fit” – it posits that the total case numbers reported by China fit “uncannily” well with a quadratic curve (15 days of data, R-squared value of .9995). Given that none of the current epidemiological models supports a quadratic growth curve, the Reddit post makes a not-so-subtle hint that the Chinese numbers may have been fabricated to fit a quadratic curve.

The situation quickly got dramatic and, like all (good) dramas, messy, with people pointing fingers at the Chinese government and / or the WHO for allegedly making up and / or covering up the number of total cases in China.

Before anyone gets too excited, let us take a look at both sides of the debate. Ben Hunt from Epsilon Theory – one of my frequently-read and highly-recommended blogs on “the narratives that drive markets, investing, voting and elections” – sides with the Reddit skeptic:

All epidemics – before they are brought under control – take the form of a green line, an exponential function of some sort. It is impossible for them to take the form of a blue line, a quadratic or even cubic function of some sort. This is what the R-0 metric of basic reproduction rate means, and if – as the WHO has been telling us from the outset – the nCov2019 R-0 is >2, then the propagation rate must be described by a pretty steep exponential curve. As the kids would say, it’s just math.”

“[T]o be clear, at some point the original exponential spread of a disease becomes ‘sub-exponential’ as containment and treatment measures kick in. But I’ll say this … it’s pretty suspicious that a quadratic expression fits the reported data so very, very closely. In fact, I simply can’t imagine any real-world exponentially-propagating virus combined with real-world containment and treatment regimes that would fit a simple quadratic expression so beautifully.”


Ben Hunt, “Body Count”
(February 10, 2020)

On the same day that Ben Hunt published his article, an op-ed was published defending the validity of the case numbers, with a title that sums up the author’s stance: “No, 2019-nCoV case numbers were not fabricated to fit a curve”. It points out a few holes in the skeptics’ conspiracy theory:

  1. Add a few more days of data to the original data-set (of 15 days) and what we get “is far from being a perfect quadratic”;
  2. “If you look at the data from outside China, which is definitely not being faked by China, and fit a quadratic to cumulative case numbers, you’ll get a similarly eye-catching R-squared value of .992.”
  3. Fitting data into a quadratic function is easier than it may sound: “Any data whatsoever with n points can be fit perfectly, with absolutely no error, using a polynomial of degree n-1.”

The author goes on to say we should pay attention to the fact that “modern statistical software can fit many types of models to the same data,” and therefore we should be extra-cautious with what conclusions we draw – especially when the data has a small sample size:

“[A]s our Redditor friend acknowledges, he tried many models before choosing the one with the most eye-catching R-squared value.”

“And the curve of a growing epidemic has some properties that inherently can make it kind of similar to a quadratic. It will be monotonically upward, and growing at an increasing rate. This means the regression calculation’s job is made easier by this crude similarity, and allows those eye-catching R-squared numbers. The R-squared value is calculated using the square of the differences between the model and reality, so it punishes a few large deviations more harshly than many small ones. That is, the joint information of the two curves being high is really just the observation that in general the curves look pretty similar, not a clinching judgment that the curve was faked using a model.”


“No, 2019-nCoV case numbers were not fabricated to fit a curve”
(February 10, 2020)

The author concludes with this stance: “We’re not saying the data is reliable, just that it’s not faked,” citing “even if every single authority in the world were the most competent they could possibly be and were reporting everything they knew with complete candor, the data would still not be accurate, because many cases are latent with no symptoms, and even among symptomatic cases, most are not known to public health authorities.” In short, it is impossible to have “accurate” (and timely) data when it comes to the total number of cases – just as it is impossible to have “perfect” testing that covers every single case in real time.

The purpose of sharing the above is not to tell you which side to pick – to be honest, I think the real question here is not whom to side with, but how to analyze data (& inferences, opinions) critically. To help us remember how easy it is to misinterpret data – whether intentionally or by accident – I would like to show you this graph, where a quadratic curve and an exponential curve look very similar within a small range of data:

Here is an explanation of the graph above:

“We generated two curves, one exponential and one quadratic, that both start at 100 on day 1 and end at 1440 or so on day 29. We then fit a quadratic to the exponential, and vice versa. These data really are synthetic and perfect, and we’re fitting the wrong model to each one. But in both cases, the fit is close and the R-squared value is .97 when we fit the exponential to the quadratic, and .994 when we fit the quadratic to the exponential.

“You can see that both fitted models start to fail at the end, as the exponential data grows faster than the quadratic model will allow, and vice versa.”


“No, 2019-nCoV case numbers were not fabricated to fit a curve”
(February 10, 2020)
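If you want to convince yourself how forgiving R-squared is here, the synthetic experiment described above is easy to approximate. A rough sketch (my own reconstruction; the exact curve parameters in the article may differ):

```python
# Fit a (wrong) quadratic model to synthetic exponential data and check R-squared.
import numpy as np

days = np.arange(1, 30)                                 # day 1 through day 29
exponential = 100 * (1440 / 100) ** ((days - 1) / 28)   # grows from 100 to 1440

coeffs = np.polyfit(days, exponential, deg=2)           # least-squares quadratic fit
fitted = np.polyval(coeffs, days)

ss_res = np.sum((exponential - fitted) ** 2)
ss_tot = np.sum((exponential - exponential.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R-squared of the quadratic fit to exponential data: {r_squared:.3f}")
# Prints an R-squared close to 0.99 even though we know the data is not
# quadratic: an eye-catching fit is not evidence of fabrication by itself.
```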

It may be a good time to remind everyone of Cowen’s First Law, from Tyler Cowen, professor of economics: “There is something wrong with everything (by which I mean there are few decisive or knockdown articles or arguments, and furthermore until you have found the major flaws in an argument, you do not understand it).” I would say that is a good attitude to adopt when we read anything – what do you say? And that is a trick question: if you agree with me, it implies you think there is nothing wrong with my statement, which contradicts Cowen’s First Law; if you disagree with me, it implies you think there is something wrong, which is an example that fits Cowen’s First Law.

Okay – I am just having fun with logic games. 🙂 The point is: do your own research, do your own research on the pros and the cons, and do your own research from every possible angle. Everyone could be wrong. Everyone must be wrong in some way – the only difference is whether you spot where they are wrong or not.

[Thinking Smart] Veterans merely make better guesses – nobody knows for sure

Howard Marks is the co-founder of Oaktree Capital Management, one of the largest investors in distressed securities. He publishes memos on his views on the market, investing, current affairs and other topics. In his latest memo “Nobody Knows II”, which I think is worth a 10-minute read from start to end, Howard shared his take on the coronavirus and the recent market downturn.

Howard breaks down information about the virus into 3 types:

As Harvard epidemiologist Marc Lipsitch said on a podcast on the subject, there are (a) facts, (b) informed extrapolations [inferences] from analogies to other viruses and (c) opinion or speculation. The scientists are trying to make informed inferences. Thus far, I don’t think there’s enough data regarding the coronavirus to enable them to turn those inferences into facts. And anything a non-scientist says is highly likely to be a guess.

Memo from Howard Marks: Nobody Knows II
(March 3, 2020)

In Howard’s previous memo, “You Bet” (January, 2020), he shared some quotes from Annie Duke, a PhD dropout who later became what Howard calls “the best-known female professional poker player,” with over $4 million in tournament winnings:

“[W]orld-class poker players taught me to understand what a bet really is: a decision about an uncertain future…[T]here are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.”

“[W]inning and losing are only loose signals of decision quality. You can win lucky hands and lose unlucky ones…What makes a decision great is not that it has a great outcome. A great decision is the result of a good process, and that process must include an attempt to accurately represent our own state of knowledge. That state of knowledge, in turn, is some variation of ‘I’m not sure.’…What good poker players and good decision-makers have in common is their comfort with the world being an uncertain and unpredictable place…instead of focusing on being sure, they try to figure out how unsure they are, making their best guess at the chances that different outcomes will occur.”

“[W]e can make the best possible decisions and still not get the result we want. Improving decision quality is about increasing our chances of good outcomes, not guaranteeing them.

Annie Duke, “Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts”
(February, 2018)

Nowadays with (almost) everyone being called (or calling themselves) an “expert” and giving their (solicited and unsolicited) opinions on the Internet, let’s take a step back to ask ourselves what it means to be an expert:

“An expert in any field will have an advantage over a rookie. But neither the veteran nor the rookie can be sure what the next flip will look like. The veteran will just have a better guess.

Annie Duke, “Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts”
(February, 2018)

I applaud this tweet from Francois Balloux, a computational / systems biologist working on infectious diseases. In sharing his opinion of the virus, he candidly admits: “Predictions from any model are only as good as the data that parametrised it. There are two major unknowns at this stage. (1) We don’t know to what extent covid-19 transmission will be seasonal. (2) We don’t know if covid-19 infection induces long-lasting immunity.” I recommend reading his full Twitter thread here:

We need more consciously responsible experts like this – experts who are candid in sharing their opinions and in admitting that they could be wrong and could never be perfectly right. Nobody ever knows for sure. I’d like to share this quote on humility:

Humility not in the idea that you could be wrong, but given how little of the world you’ve experienced you are likely wrong, especially in knowing how other people think and make decisions.”

Morgan Housel, “Different Kinds of Smart”
(September 27, 2018)

[Thinking Smart] “Aha” moments from working from home

This Tweet on technical difficulties people run into when they are working from home is a vivid illustration of the point: it is time to rethink work and work-tech.

The thing with a business continuity plan is that it rarely gets the credit when business continues as usual. On the contrary, it is only missed (or blamed) when the business cannot continue as usual.

Re-imagining work extends to re-imagining the office building – this Tweet predicts voice or gesture controlled activation could become more prevalent. Imagine your office building lift becomes a mini Siri, Alexa or Google Assistant. Try saying: “Hey Lift, take me to the 19th floor.”

Other than conversations on work-tech, this “hot” Tweet takes it to the level of class consciousness:

[Thinking Smart] Defining information

With the surge of cases worldwide comes a surge in “information” about the coronavirus – though the information we see varies greatly in quality. I strongly recommend Defining Information from the Stratechery blog, which shares insights on how to think about information:

“Given that over 90% of the PCs in the world ran Windows, writing a virus for Windows offered a far higher return on investment for hackers that were primarily looking to make money. Notably, though, if your motivation was something other than money — status, say — you attacked the Mac.”

“I suspect we see the same sort of dynamic with information on social media in particular; there is very little motivation to create misinformation about topics that very few people are talking about, while there is a lot of motivation — money, mischief, partisan advantage, panic — to create misinformation about very popular topics. In other words, the utility of social media as a news source is inversely correlated to how many people are interested in a given topic.


Defining Information (Stratechery, April 2020)

In simple terms, as more people start talking about a topic, the average quality of the information you get drops. This is not surprising, for two reasons: (a) you are more likely to hear a higher number of repetitions of popular opinions and narratives; (b) there is a higher incentive for people to create or spread misinformation on a hot topic.

The Stratechery blog goes on to propose some helpful heuristics on how to deal with different types of information:

“For emergent information, like the coronavirus in February, you need a high degree of sensitivity and a high tolerance for uncertainty.”

“For facts, like the coronavirus right now, you need a much lower degree of sensitivity and a much lower tolerance of uncertainty: either something is verifiably known or it isn’t.”


Defining Information (Stratechery, April 2020)

[Thinking Smart] What went wrong with media coverage? A failure, but not of prediction

Slate Star Codex is one of my favorite blogs by far. Scott Alexander’s post A FAILURE, BUT NOT OF PREDICTION is an insightful take on what went wrong with the media coverage of the coronavirus. A key concept that Scott discusses is probabilistic reasoning:

“A surprising number of these people had signed up for cryonics – the thing where they freeze your brain after you die, in case the future invents a way to resurrect frozen brains. Lots of people mocked us for this – ‘if you’re so good at probabilistic reasoning, how can you believe something so implausible?’ I was curious about this myself, so I put some questions on one of the surveys.”

“The results were pretty strange. Frequent users of the forum (many of whom had pre-paid for brain freezing) said they estimated there was a 12% chance the process would work and they’d get resurrected. A control group with no interest in cryonics estimated a 15% chance. The people who were doing it were no more optimistic than the people who weren’t. What gives?”

“I think they were actually good at probabilistic reasoning. The control group said ‘15%? That’s less than 50%, which means cryonics probably won’t work, which means I shouldn’t sign up for it.’ The frequent user group said ‘A 12% chance of eternal life for the cost of a freezer? Sounds like a good deal!'”

A failure, but not of prediction (Slate Star Codex, April, 2020)

Scott summarized it well when he said: “Making decisions is about more than just having certain beliefs. It’s also about how you act on them.”

He shared a diagram showing two types of people: Goofus and Gallant. Goofus requires “incontrovertible evidence” before believing something is true, i.e., false until proven true. On the contrary, Gallant embraces uncertainty and does not look at things in an all-or-nothing fashion: he reasons in probability.

Scott argued that people behaved like Goofus when the coronavirus first started to spread:

“I think people acted like Goofus again.”
People were presented with a new idea: a global pandemic might arise and change everything. They waited for proof. The proof didn’t arise, at least at first. I remember hearing people say things like ‘there’s no reason for panic, there are currently only ten cases in the US’. This should sound like ‘there’s no reason to panic, the asteroid heading for Earth is still several weeks away’. The only way I can make sense of it is through a mindset where you are not allowed to entertain an idea until you have proof of it. Nobody had incontrovertible evidence that coronavirus was going to be a disaster, so until someone does, you default to the null hypothesis that it won’t be.

Gallant wouldn’t have waited for proof. He would have checked prediction markets and asked top experts for probabilistic judgments. If he heard numbers like 10 or 20 percent, he would have done a cost-benefit analysis and found that putting some tough measures into place, like quarantine and social distancing, would be worthwhile if they had a 10 or 20 percent chance of averting catastrophe.

A failure, but not of prediction (Slate Star Codex, April 2020)
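To make Gallant’s cost-benefit reasoning concrete, here is a small back-of-the-envelope sketch in Python. The probability (15%, within the 10 to 20 percent range Scott mentions) and all the costs are purely illustrative assumptions of mine, not numbers from the post; the point is only that a modest probability multiplied by a huge loss can justify cheap precautions.

    # Back-of-the-envelope expected-value comparison (all numbers are illustrative assumptions)
    p_catastrophe = 0.15          # assumed probability the pandemic becomes a disaster
    cost_of_catastrophe = 1_000   # assumed relative cost if it does (arbitrary units)
    cost_of_precautions = 30      # assumed cost of early quarantine / social distancing
    effectiveness = 0.5           # assumed fraction of the catastrophe cost averted by acting early

    expected_loss_if_wait = p_catastrophe * cost_of_catastrophe
    expected_loss_if_act = cost_of_precautions + p_catastrophe * cost_of_catastrophe * (1 - effectiveness)

    print(f"Expected loss if we wait for proof: {expected_loss_if_wait:.0f}")
    print(f"Expected loss if we act early:      {expected_loss_if_act:.0f}")
    # With these made-up numbers: waiting costs ~150 vs. acting ~105, so the 'unproven' precaution wins.

Change the assumptions and the conclusion can flip, which is exactly why the reasoning has to be probabilistic rather than all-or-nothing.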

Goofus-Gallant reasoning could also be applied to the debate about whether face masks are effective:

“Goofus started with the position that masks, being a new idea, needed incontrovertible proof. When the few studies that appeared weren’t incontrovertible enough, he concluded that people shouldn’t wear masks.”

“Gallant would have recognized the uncertainty – based on the studies we can’t be 100% sure masks definitely work for this particular condition – and done a cost-benefit analysis. Common sensically, it seems like masks probably should work. The existing evidence for masks is highly suggestive, even if it’s not utter proof. Maybe 80% chance they work, something like that? If you can buy an 80% chance of stopping a deadly pandemic for the cost of having to wear some silly cloth over your face, probably that’s a good deal. Even though regular medicine has good reasons for being as conservative as it is, during a crisis you have to be able to think on your feet.”

A failure, but not of prediction (Slate Star Codex, April 2020)

[To be updated from time to time]

Secret to Longevity: Make Frequent “Quantum Jumps” to New Reality-Matrices

Disclaimers: (1) This post may mess with your mind, and (2) this post is intended to mess up your mind. #smirk#

Cultural conditioning, in every tribe, is a process of gradually narrowing your tunnel-reality. The way to stay young (comparatively; until the longevity pill is discovered) is to make a quantum jump every so often and land yourself in a new reality-matrix.

Robert Anton Wilson, Cosmic Trigger I: Final Secret of the Illuminati

“Reality” is Messed Up!

I mean the word “reality” itself is a messed-up word that is misleading about the reality it intends to convey (pun intended). 😉

In the crazy – and/or – magical – and/or – daring – and/or – neurotic – and/or – creepy – and/or – [insert-your-adjective(s)-of-choice] book (more like mind-bender), Cosmic Trigger I: Final Secret of the Illuminati, Robert Anton Wilson thinks it is a pity (and misleading) that “reality” is (a) a noun, and (b) in singular form.

“Reality” (more like realities) is / are “always plural and mutable“. Forget about a single source of truth. Forget about realizing reality. We could each construct our own ‘reality,’ but there is no such thing as THE REALITY that we could all arrive at.

Consider the “conventional wisdom” that seeing is believing:

“We perceive an orange as really orange, whereas it is actually blue, the orange light being the light bouncing off the real fruit. And, everywhere we look, we imagine solid objects, but science only finds a web of dancing energy.”

“The orange has the orange color” is a statement that describes your mental projection (identified image, conscious recognition) of “a web of dancing energy”:

“All of our perceptions have gone through myriads of neural processes in the brain before they appear to our consciousness. At the point of conscious recognition, the identified image is organized into a three-dimensional hologram which we project outside ourselves and call ‘reality’.”

The next time you hear yourself say, “The reality is…”, catch yourself. It is more accurate to say, “My model of the reality is…” The map is not the territory. The menu is not the meal. The model of reality is not the reality itself – if “the reality” even exists in the first place. Wilson calls this line of thinking “model agnosticism,” which he links to the Copenhagen Interpretation of quantum mechanics.

As an extension of “model agnosticism,” there are two principles / rules of the game:

  1. The principle of neurological relativism by Timothy Leary: “No two people ever report exactly the same signals.”
  2. The way to “double your practical intelligence” according to Robert Wilson: “Try to receive as many signals as possible from other humans, however wrong-headed their reality-map may seem” and avoid the “habit of screening out all human signals not immediately compatible with our own favorite reality-map.”

Reality (and all behavior) is a Giant Game?

According to the Morgenstern-von Neumann game theoretic model, “most human transactions can be analyzed mathematically by treating them as if they were games”, and personality could be analyzed as “a group process defined by rules of interpersonal politics”.

If you are wondering WTH that really means, consider the application of the model by Timothy Leary, a psychologist best known for his exploration of psychedelics:

What are the players actually doing in space-time? […] What are the rules of the game? How many strikes before you’re out? Who makes the rules? Who can change the rules? These are the important questions.

Timothy Leary

Leary developed a seven-dimensional game model to analyze all behavior, with respect to:

  1. Roles being played;
  2. Rules tacitly accepted (by all players);
  3. Strategies for winning;
  4. Goals of the game;
  5. Language of the game (and the semantic world-view implied);
  6. Characteristic space-time locations, and
  7. Characteristic movements in space-time.

As Leary said: “If you can’t describe those seven dimensions of a group’s behavior, you don’t understand their game. Most so-called ‘neurosis’ is best analyzed as somebody programmed to play football wandering around in a baseball field. If he thinks football is the only game in the universe, the other players will seem perverse or crazy to him; if they think baseball is the only game, he’ll seem crazy to them.”

As such, in the eyes of Leary, most psychological terminology is “pre-scientific” and “vague.” He thinks it makes much more sense to analyze behavior as a game.

Is Discordianism (the Cosmic Giggle Factor) the Best Way to View Reality?

So far it sounds a bit depressing – “reality” is / are messed up, “reality” is a complicated game with seven dimensions, and “THE reality” may be forever beyond our grasp (if it even exists).

You may feel your mind exploding. What is the best way to view reality?

One approach is Discordianism, invented by Kerry Thornley & Gregory Hill in the late 1950s and dubbed the first true “true religion.” Discordianism worships Eris, the Greek goddess of chaos & confusion:

Discordianism is the religion or belief in which chaos is thought to be as important as order…in contrast with most religions, which idealize harmony and order.

Discordia Wiki
“Sacred Chao” – the symbol of Discordianism

In the words of Wilson, the first law of Discordianism is: “Convictions cause convicts.” In other words, “whatever you believe imprisons you,” “belief is the death of intelligence,” and “the more certitude one assumes, the less there is to think about.”

Some view Discordianism as a parody religion, but Wilson makes the case to take it more seriously:

“I saw Discordianism as the Cosmic Giggle Factor, introducing so many alternative paranoias that everybody could pick a favorite, if they were inclined that way. I also hoped that some less gullible souls, overwhelmed by this embarrassment of riches, might see through the whole paranoia game and decide to mutate to a wider, funnier, more hopeful reality-map.”

Wilson hopes Discordianism would persuade more people to “make a quantum jump” to a “new reality-matrix”, different from the narrow tunnel-reality that culture has conditioned them into.

To sum up, the biggest takeaway from Wilson’s book is probably this:

Our models of “reality” are very small and tidy, the universe of experience is huge and untidy, and no model can ever include all the huge untidiness perceived by uncensored consciousness.

Robert Anton Wilson, Cosmic Trigger I: Final Secret of the Illuminati

What are things that blew your mind about how you view reality? Leave a comment or write to me at fullybookedclub.blog@gmail.com!

How Strangers Confused Spies and Diplomats (Reading “Talking to Strangers” by Malcolm Gladwell)

Malcolm Gladwell is back in town with a new book this month: Talking to Strangers. Great read – insightful & crisp like Gladwell’s earlier works. Never dry, sometimes actionable, frequently inspiring. Full of specific stories & research, a walking example of Gladwell’s belief: “Most interesting people talk about things with a great deal of specificity.”

For podcast lovers: Oprah Winfrey interviewed Gladwell about this book in the latest episode of Super Soul Sunday. A nice intro into the book.


From Neighbors to Strangers: Change in Interactions

Setting up the context in the opening chapter, Malcolm talks about how the way we interact with others has changed:

Throughout the majority of human history, encounters – hostile or otherwise – were rarely between strangers. The people you met and fought often believed in the same God as you, built their buildings and organized their cities in the same way you did, fought their wars with the same weapons according to the same rules.

Our ancestors mostly interacted with “neighbors”, as in people who lived in close proximity and had a common base for communication – including a common language & common cultural norms. This “common ground” reduced the cost of communication, making it very unlikely that things were “lost in translation” – both literally & metaphorically.

In contrast:

“Today we are now thrown into contact all the time with people whose assumptions, perspectives, and backgrounds are different from our own…struggling to understand each other.”

Today, we live in an Era of Strangers – people whose beliefs, upbringings & habits are drastically different from our own. Yet, we can be terrible at understanding these differences. As Malcolm puts it, the book “Talking to Strangers is about why we are so bad at that act of translation.”

Let’s dig in to look at key takeaways from the book.

Two Puzzles We Got from Spies & Diplomats

Fidel Castro released a documentary on Cuban national television titled The CIA’s War Against Cuba:

“Cuban intelligence, it turned out, had filmed and recorded everything the CIA had been doing in their country for at least ten years – as if they were creating a reality show…On the screen, identified by name, were CIA officers supposedly under deep cover…The most sophisticated intelligence service in the world had been played for a fool.”

The Cuban government had, in effect, turned almost all of the CIA’s agents in Cuba into double agents, and fed fake information back to the CIA for years. Years!

Malcolm says the CIA’s spectacular failure brought up Puzzle #1: “Why can’t we tell when the stranger in front of us is lying to our face?” Why did the CIA – with the world’s top minds trained in espionage – fail to realize their agents had been lying to them for years?

Similar misjudgments happened on the other side of the world, in Britain. Before World War II broke out:

“(Then UK Prime Minister) Chamberlain’s negotiations with Hitler are widely regarded as one of the great follies of the Second World War. Chamberlain fell under Hitler’s spell. He was outmaneuvered at the bargaining table. He misread Hitler’s intentions.”

Others in Britain saw through Hitler – Winston Churchill was one of the people who “never believed for a moment that Hitler was anything more than a duplicitous thug.”

What’s interesting, though, is that although Chamberlain spent hours with Hitler in person, Churchill only read about Hitler on paper. “The people who were right about Hitler were those who knew least about him personally.” Here comes Puzzle #2: “How is it that meeting a stranger can sometimes make us worse at making sense of that person than not meeting them?”

Even trained spies & diplomats could get it all wrong when it comes to strangers – just imagine how complicated this whole thing is:

“We have people struggling with their first impressions of a stranger. We have people struggling when they have months to understand a stranger. We have people struggling when they meet with someone only once, and people struggling when they return to the stranger again and again. They struggle with assessing a stranger’s honesty. They struggle with a stranger’s character. They struggle with a stranger’s intent.”
* * *
“It’s a mess.”

Talking to strangers is a mess indeed. Below are some tips that may provide some guidance.

“Default to Truth” Is a Mental Shortcut that Works Most of the Time, but Trips Us Up at Unexpected Times

Psychologist Tim Levine ran an experiment: he asked participants to watch videos of students talking and to try to spot the liars among them. The result:

“We’re much better than chance (>>50%) at correctly identifying the students who are telling the truth. But we’re much worse than chance (<<50%) at correctly identifying the students who are lying. We go through all those videos, and we guess – ‘true, true, true’ – which means we get most of the truthful interviews right, and most of the liars wrong.”

Malcolm calls this “default to truth: our operating assumption is that the people we are dealing with are honest.” More importantly, Levine finds “we stop believing only when our doubts and misgivings rise to the point where we can no longer explain them away”. In other words, to switch off the default-truth mode, we not only require some doubt – we require enough doubt, unshakable and undeniable doubt, so much doubt that we can no longer explain it away.
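A toy simulation helps show where Levine’s asymmetry comes from: if a judge defaults to saying “truth” most of the time, accuracy on truth-tellers is automatically high and accuracy on liars automatically low. The specific numbers below (half of the speakers lying, a judge who calls “lying” only 20% of the time) are my own illustrative assumptions, not Levine’s data.

    import random

    random.seed(0)
    N = 10_000
    p_lie = 0.5         # assumed share of speakers who are lying
    p_call_lie = 0.2    # assumed share of the time a truth-defaulting judge says "lying"

    correct_on_truths = correct_on_lies = truths = lies = 0
    for _ in range(N):
        speaker_lies = random.random() < p_lie
        judge_says_lie = random.random() < p_call_lie   # the judge mostly defaults to "truth"
        if speaker_lies:
            lies += 1
            correct_on_lies += judge_says_lie
        else:
            truths += 1
            correct_on_truths += not judge_says_lie

    print(f"Accuracy on truth-tellers: {correct_on_truths / truths:.0%}")  # ~80%, well above chance
    print(f"Accuracy on liars:         {correct_on_lies / lies:.0%}")      # ~20%, well below chance

No lie-detection skill is involved at all; the lopsided scores fall straight out of the “default to truth” bias.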

Borrowing words from the legal principle of “innocent until proven guilty” here, we all practice the mental shortcut of “trust until proven a lie” – and this burden of proof has an extremely high threshold. We require evidence to go way, way, way beyond reasonable doubt.

As Malcolm summarizes it:

“That is Levine’s point. You believe someone not because you have no doubts about them. Belief is not the absence of doubt. You believe someone because you don’t have enough doubts about them.
* * *
“Just think about how many times you have criticized someone else, in hindsight, for their failure to spot a liar. ‘You should have known. There were all kinds of red flags. You had doubts.’ Levine would say that’s the wrong way to think about the problem. The right question is: were there enough red flags to push you over the threshold of belief? If there weren’t, then by defaulting to truth you were only being human…doubts trigger disbelief only when you can’t explain them away.”

Our mental shortcut of “default to truth” is not completely useless – on the contrary, it is an evolutionary toolkit that gives us “efficient communication and social coordination”, at the cost of “an occasional lie”:

“Lies are rare…it doesn’t matter so much that we are terrible at detecting lies in real life. Under the circumstances, in fact, defaulting to truth makes logical sense. If the person behind the counter at the coffee shop says your total with tax is $6.74, you can do the math yourself to double-check their calculations, holding up the line and wasting 30 seconds of your time. Or you can simply assume the salesperson is telling you the truth, because on balance most people do tell the truth.”

Every day, we make countless decisions about whether or not to trust someone. Our default decision is to opt for the higher-probability scenario, i.e., the other side is telling the truth. In a handful of scenarios, we misjudge and pay for misplaced belief in a liar.

But overall, the total cost we pay is lower than under the reverse “default to lie” position – imagine aggressively fact-checking & analyzing every word others say and every action others take. It would be impossible to go on with life without becoming hopelessly paranoid!
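For the sake of illustration, here is a rough comparison of the two strategies’ daily costs; every number is an assumption I made up, so treat it as a sketch of the argument rather than real data.

    # Rough cost comparison of two trust strategies (all numbers are illustrative assumptions)
    interactions_per_day = 100      # assumed everyday exchanges that involve some trust
    p_lie = 0.01                    # assumed share of those exchanges that involve a lie
    cost_of_being_fooled = 20.0     # assumed average cost of believing a lie (arbitrary units)
    cost_of_verifying = 0.5         # assumed cost of fact-checking a single exchange

    default_to_truth = interactions_per_day * p_lie * cost_of_being_fooled
    default_to_lie = interactions_per_day * cost_of_verifying

    print(f"Daily cost of defaulting to truth: {default_to_truth:.1f}")   # 20.0 under these assumptions
    print(f"Daily cost of defaulting to lie:   {default_to_lie:.1f}")     # 50.0 under these assumptions

As long as lies stay rare and verification isn’t free, trusting by default comes out cheaper, even though it guarantees you will occasionally be fooled.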

“Default to truth biases us in favor of the most likely interpretation.”

Related reading: A case in point of “default to truth” going wrong is the Theranos scandal – a company whose repeated lies tripped up some of the world’s best investors & experts, who refused to change their belief in the company despite red flags. I highly recommend the investigative journalism into this: Bad Blood. Page-turner. Amazing story about ethics, business, and the human mind.


What the TV Show “Friends” Got Wrong: Transparency of Feelings is Rarer than We Think

For those who watched Friends, think about this: “it is almost impossible to get confused (when watching the show)…you can probably follow along even if you turn off the sound.” Why is this?

Malcolm cites research done via the Facial Action Coding System (FACS), a scoring system for facial expressions:

“FACS analysis tells us that the actors in ‘Friends’ make sure that every emotion their character is supposed to feel in their heart is expressed, perfectly, on their face…the facial displays of the actors are what carry the plot. The actors’ performances in Friends are transparent.”

Malcolm defines “transparency” as “the idea that people’s behavior and demeanor – the way they represent themselves on the outside – provides an authentic and reliable window into the way they feel on the inside.”

I would define transparency as an idea about consistency: “transparency” = the facial expression someone displays is consistent with what the majority of people would display, if they felt the same feelings. Borrowing the terminology “group-think”, perhaps this could be called “group-face”, i.e., have facial displays that the majority of your group would put on if put in the same shoes.

For example, a person who feels happy and wears a wide grin is being ‘transparent’, whereas the same person would be considered ‘not transparent’ if he frowns instead. Friends is a TV show of high transparency.


Fans of Friends, beware – the transparency you see in the show is rarely seen in practice!

“Transparency is a myth – an idea we’ve picked up from watching too much television and reading too many novels where the hero’s ‘jaw dropped with astonishment’ or ‘eyes went wide with surprise.'”

German psychologists Schutzwohl and Reisenzein carried out an experiment – they created a scenario that would surprise participants, who were later asked to describe their facial expressions. Almost all of the participants “were convinced that surprise was written all over their faces.”

But it was not:

“In only 5% of the cases did they (researchers) find wide eyes, shooting eyebrows and dropped jaws. In 17% of the cases they found two of those expressions. In the rest they found some combination of nothing, a little something, and things – such as knitted eyebrows – that you wouldn’t necessarily associate with surprise at all.”

The researchers concluded “participants in all conditions grossly overestimated their surprise expressivity…[t]hey inferred their likely facial expressions to the surprising event from…folk-psychological beliefs about emotion-face associations.”

So the next time you think you have “read” someone from their facial expressions, think again. People are less transparent than you think.

Related TV show: Lie To Me is a US TV series about solving crimes by analyzing micro-expressions, i.e., fleeting facial expressions that happen so fast they are missed by the untrained eye. The show’s storyline rests on the premise that certain micro-expressions may be involuntary and universal across cultures, a helpful tool for investigators trying to decipher the real feelings that criminals are trying to mask. Consider it an alternative to your regular lie detector. There is academic research into micro-expressions too, though I have not looked at it in depth.


What Suicide & Criminal Behaviors Have in Common: Both are Coupling Behaviors

In 1962, gas suicide was the #1 form of suicide in England, accounting for over 40% of cases. By the 1970s, town gas throughout the country had been replaced with natural gas containing no carbon monoxide, which at worst would give you “a mild headache and a crick in your neck” but was nowhere near lethal.

“So here is the question: once the number-one form of suicide (town gas) in England became a physiological impossibility, did the people who wanted to kill themselves switch to other methods? Or did the people who would have put their heads in ovens now not commit suicide at all?”

If you think people will go for alternative forms of suicide, then you believe in displacement, which “assumes that when people think of doing something as serious as committing suicide, they are very hard to stop.” If you think suicides will drop once the top form of suicide becomes impossible, then you believe in coupling: “the idea that behaviors are linked to very specific circumstances and conditions.” Statistics suggest suicide and crimes are both coupling behaviors tied to specific contexts.

For example, one study followed up on 515 people who had attempted to jump from the Golden Gate Bridge in San Francisco but were stopped – only 25 of them (<5%) went on to kill themselves in some other way.

Similarly, crime is also shown to be a coupling behavior. Studies in different cities have converged on the same result: “Crime in every city was concentrated in a tiny number of street segments.” This is referred to as the Law of Crime Concentration. Malcolm thinks the takeaway is:

“When you confront the stranger, you have to ask yourself where and when you’re confronting the stranger – because those two things powerfully influence your interpretation of who the stranger is.”

Don’t Fall Into the “Illusion of Asymmetric Insight”

Let’s play a game of word completion. Suppose I showed you “G L _ _”, which word would you fill it with?

Now suppose I handed you 3 words that a participant had written: WINNER, SCORE, GOAL. What could you infer about this participant’s personality? In one response, an interviewee wrote: “It seems this individual has a generally positive outlook toward the things he endeavors…indicate some sort of competitiveness.”

Now let’s flip the game on its head – suppose I asked you to complete the words, and then asked you what the words you completed reveal about your personality. Guess what? The majority of participants in this game refused to “agree with these word-stem completions” as a measure of their own personality.

This is what the psychologist Pronin calls the Illusion of Asymmetric Insight:

“The (biased) conviction that we know others better than they know us – and that we may have insights about them they lack (but not vice versa) – leads us to talk when we would do well to listen and to be less patient than we ought to be when others express the conviction that they are the ones who are being misunderstood or judged unfairly.”

As Malcolm phrases it, it is easy to blame it on the stranger: “We think we can easily see into the hearts of others based on the flimsiest of clues. We jump at the chance to judge strangers. We would never do that to ourselves, of course. We are nuanced and complex and enigmatic. But the stranger is easy. If I can convince you of one thing in this book, let it be this: Strangers are not easy.”

If I could leave you with only one takeaway, then let it be this: strangers are not easy. What is easy is to blame the stranger for any meaning lost in translation – without assessing our own biases. Hopefully this book has given all of us some actionable tips on “talking to strangers”. Once again, I highly recommend reading the whole book from cover to cover – I hope you will find it to be a page-turner as I did.

[Big Ideas – Special] Understanding Markets via “Narrative Economics”

The secret of effective market game-playing is to recognize that the market game hinges on the Narrative, on the strength of the public statements that create Common Knowledge.

Epsilon Theory Manifesto

Nobel-winning economist Robert Shiller recently published Narrative Economics, a book on “How Stories Go Viral and Drive Major Economic Events“. Shiller gave a talk at LSE on the big ideas (video, audio, related 2017 paper).

Context: This article is part of the Big Ideas series, where I synthesize takeaways from the world’s best experts in multiple disciplines. This article is a special in the series, because unlike other articles that are synthesized from Discover magazine expert interviews, this piece is largely inspired by a public lecture.

What is a Narrative?

Let’s start with definitions. According to Shiller:

  • Narrative = a telling of a story that attaches significance, meaning or emotions to it;
  • Story = a chronology of events.

What is Narrative Economics?

Shiller makes a key distinction between narrative economics as defined in the dictionary and as defined by himself. The textbook definition of narrative economics is “economics research that takes the form of telling a narrative about economic events”.

For Shiller, narrative economics should have a narrower focus, i.e., only investigating popular economics narratives that “went viral”, “changed things” and “became contagious”.

Shiller thinks economics narratives are powerful in affecting (& shaping) economic decisions. He identifies 9 perennial economics narratives:

  1. Panic vs. confidence narratives – e.g., the Great Depression is a panic narrative;
  2. Frugality vs. conspicuous consumption – e.g., Trump’s book “Think Like a Billionaire”;
  3. Monetary standards – e.g., the Gold Standard vs. Bimetallism debate;
  4. Technological unemployment, i.e., labor-saving machines replace many jobs;
  5. Automation & AI replace most jobs;
  6. Real estate booms & busts;
  7. Stock market bubbles;
  8. Boycotts, profiteers & evil business;
  9. The wage-price spiral & evil labor unions.

Broadly speaking, the 9 narratives above focus on macroeconomic momentum / “culture” (1-3), employment (4-5), investment (6-7) or actors in power (8-9).

Shiller argues that data sources are at the root of how economics evolves. He believes the recent “digitization of search” is bringing, and will continue to bring, shifts to narratives. Moreover, Shiller claims that big events often occur not because of a single narrative, but because of a “confluence of narratives”, i.e., as a result of the chemical reaction of multiple narratives.

In an interesting twist, the word “narrative” appears less frequently in academic articles in economics & finance than in other subjects – see this analysis of JSTOR articles below:

Studying Narrative Economics via the Virality Model of Epidemics

If we think of a narrative as a disease, then we could study its spread by borrowing patterns from research on epidemics. In other words, we could leverage research on how viruses “go viral”, and try to figure out how narratives get popular.

The Kermack-McKendrick (1927) mathematical theory of disease epidemics is a breakthrough in medicine, because it “gave a realistic framework for understanding the all-important dynamics of infectious diseases” in the words of Shiller.

The Kermack-McKendrick model divides the population into three groups: susceptibles, infectives, and recovered. Importantly, the model suggests that the curve of the number of infectives takes a “humpback” shape, i.e., rising sharply before declining at a similarly fast speed.
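To see where the “humpback” comes from, here is a minimal sketch of the Kermack-McKendrick (SIR) dynamics, integrated with simple Euler steps; the infection and recovery rates are arbitrary illustrative values I chose, not estimates of any real epidemic (or narrative).

    # Minimal Kermack-McKendrick (SIR) sketch; beta and gamma are arbitrary illustrative values
    beta, gamma = 0.3, 0.1          # infection rate, recovery rate
    S, I, R = 0.99, 0.01, 0.0       # susceptible, infective, recovered (shares of the population)
    dt, steps = 0.1, 1000           # Euler step size and number of steps

    peak_I, peak_t = I, 0.0
    for step in range(steps):
        new_infections = beta * S * I * dt
        new_recoveries = gamma * I * dt
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        if I > peak_I:
            peak_I, peak_t = I, step * dt

    # With these rates the infectives rise to roughly 30% of the population, then fall away again
    print(f"Infectives peak at about {peak_I:.0%} around t = {peak_t:.0f},")
    print(f"then decline; by t = {steps * dt:.0f} only {I:.1%} remain infective (the 'humpback').")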

We could see similar “humpback” shaped curves in data that could serve as proxy measurements for how popular an economics narrative is.
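As a sketch of how such a proxy measurement might be built, the snippet below counts how often a phrase appears in a tiny, made-up corpus of dated headlines. The corpus and the counts are purely hypothetical; actual studies like Shiller’s rely on large news, book and ProQuest-style archives.

    from collections import Counter

    # Purely hypothetical corpus of (year, headline) pairs
    corpus = [
        (1929, "stock market crash wipes out fortunes"),
        (1930, "fears of another stock market crash linger"),
        (1955, "steady growth calms investors"),
        (1987, "black monday: worst stock market crash since 1929"),
        (1988, "markets recover after last year's stock market crash"),
        (2005, "housing boom dominates headlines"),
    ]

    phrase = "stock market crash"
    counts_by_year = Counter(year for year, text in corpus if phrase in text.lower())

    for year in sorted(counts_by_year):
        print(year, "#" * counts_by_year[year])   # crude text 'chart' of phrase frequency by year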

Here’s an example of how frequently the phrase “stock market crash” appears in news & newspapers:

Here’s another example of how frequently the phrase “Great Depression” appears in news & newspapers:

The Future of Narrative Economics

Shiller is hopeful that “the advent of big data and of better algorithms of semantic search might bring more credibility to the field”.

Meanwhile, narrative economics faces challenges, including:

  • On data collection, we need to move beyond “passive collection of others’ words, towards experiments that reveal meaning and psychological significance”, e.g., via focus groups or social media – though the proper design & implementation of such experiments is not easy;
  • Dealing with the overlap & “chemical reactions” of multiple overlapping narratives is difficult;
  • Causality is tricky. As Shiller says, one challenge is in “distinguishing between narratives that are associated with economic behavior just because they are reporting on the behavior, and narratives that create changes in economic behavior.”

Nevertheless, the challenges make the field more interesting. I am particularly interested in predicting which narratives will gain momentum. Perhaps the narrative machine will serve, to some extent, as a crystal ball that offers a narrow glimpse into the future.

The Tastiest Pizza is often the Messiest One

“Splitting a city into residential, commercial and business zones is like throwing dough, cheese and pepperoni into the different compartments of a bento box and calling it a pizza.” In this article, Uber product manager Florent Crivello writes about what he calls the “efficiency-destroying magic of tidying up”.

Florent shares this picture that he calls “an urban planner’s dream pizza” – I bet it’s not what you have in mind as your perfect pizza:

The word chaos has a negative connotation in most contexts. In fact, the Oxford dictionary defines chaotic as “in a state of complete confusion and disorder“. Chaos tends to stir up emotions of being lost, not knowing what to do.

When we are at a loss for what to do, more often than not it is because we do not truly understand. The flip side of that is, in the words of Austrian philosopher Ludwig Wittgenstein: “To understand is to know what to do.”

This was echoed in Florent’s article:

If outsiders complain, but people living inside the system seem happy with it, it probably means that the chaos is serving them right, and that it’s just foreign eyes who are unable to perceive its underlying order.

The Efficiency-Destroying Magic of Tidying Up, by Florent Crivello

It is tempting to equate a lack of order (or at least a lack of what we perceive to be order) with a lack of value or quality, which justifies a need for intervention. This is not ill-advised in some cases, with the emergence of the rule of law as a case in point. A complete lack of any legal order in a community threatens the safety of its members.

In contrast, some corrections of chaos could produce outcomes that go against our wishes instead of in our favor. Apart from the pizza example above (I assume 99.9999% of the population prefers a ‘messy’ pizza where the ingredients are mixed instead of separated), another example is the free market vs. central planning: a “chaotic” free market is magically more efficient than central planning, in terms of the total sum of outputs produced. Of course, the free market is not without its limitations – which is a separate topic.

The point here is: the presence of chaos does not automatically equate to a need for correction. If chaos should warrant anything, it should warrant a drive to understand the underlying order, the “invisible hand”, the hidden structure that is still elusive to our foreign eyes.

Resisting the urge to “correct” chaos may not be that easy. As Brian Arthur, a pioneer of complexity theory & complexity science, mentioned in an interview, subjects such as economics seek “equilibrium, a place of stasis (stability) and simplicity”. In a sense, equilibrium is (perceived to be) the opposite of chaos.

Brian Arthur points out that what seems to be in equilibrium could be different from what is actually in equilibrium – this depends on how macroscopic vs. microscopic our view is. For example, the sun seems to be in equilibrium when we look up at it in the sky – it is a beautiful sphere held in place by gravitational forces. Yet, up close the sun is full of plasma bursts – what you could call “chaotic” reactions.

Instead of viewing chaos & equilibrium as opposing concepts, we could view them as relative ones. Rather than being either chaotic or in equilibrium, an object could be both – depending on the context & our level of understanding.

So give chaos some credit – just as the tastiest pizza is not the orderliest one, the best scenario may not necessarily be the most organized one. The next time you find yourself anxious about a chaotic environment, think about how delicious that bite of pizza littered with messy toppings is – then sit back & relax.

Enjoyed reading this? Apart from publishing articles on this blog, I also send out a newsletter with original content and curated ideas. Subscribe here or view past issues here.

Heal With Herbs & Honey

Context: I attended a workshop on herbs & honey in Hong Kong this week, featuring Peggie Zih, a certified nutritionist & herbalist from Zenses in Health. Special thanks to The Hive co-working space for hosting. In the post below, I share some key takeaways, supplemented by secondary research.

What honey should I buy?

Peggie recommends going for raw honey, which could be defined as honey in its original state taken from beehives. A more technical definition is:

Raw honey also contains bee pollen and bee propolis, which is a sticky, glue-like substance bees use to hold their hive together.

What are the health benefits of raw honey?” Medical News Today

How does raw honey differ from ‘regular’ honey? The latter goes through additional processing, including pasteurization, “a process that destroys the yeast found in honey by applying high heat. This helps extend the shelf life and makes it smoother.” (Healthline)

So take note – the next time you go shopping for honey in a store, don’t pick the jar with the smoothest texture. Instead, embrace raw honey that may not have the clearest color, but is free from commercial processing and preserves nutrients better.

What are the benefits of raw honey?

Raw honey provides benefits including:

  • Antioxidant effects
  • Antibacterial and helpful for cleansing wounds
  • Contains vitamins & minerals
  • Cough relief
  • Helps with digestion and eases diarrhea

What are the watch-outs for taking raw honey?

Peggie mentions that infants less than 1 year old should not be fed raw honey, as their digestive system is not mature enough to handle raw honey safely. The Center for Food Safety of the HK Government recommends something similar:

Honey including raw honey can contain the spore forming bacterium, Clostridium botulinum, that causes intestinal botulism (also called infant botulism). Intestinal botulism mainly affects children less than one year old. Early symptom is constipation, followed by lethargy, difficulties in feeding, generalised muscle weakness and weak cry.

“The Risks of Eating Raw Honey”, The Center for Food Safety of the HK Government

What are some herbal honey combos?

Here are some recipes, suggested by Peggie, for mixing organic herbs with honey to create tasty drinks:

  • Digestion: cinnamon
  • Digestion: fennel seeds (could add some chamomile flowers)
  • Cooling in summertime: hibiscus & rose
  • Calming: catnip, chamomile & lemon balm

Peggie also recommends sourcing organic herbs for health benefits.

What are some health tips you have come across? I’d love to hear from you! Reach me at fullybookedclub.blog@gmail.com or on LinkedIn.

Enjoyed reading this? Apart from publishing articles on this blog, I also send out a newsletter with original content and curated ideas. Subscribe here or view past issues here.

[Big Ideas 003] Role of Museums in Education & Science vs. Religion

Context: This article is part of the Big Ideas series, where I synthesize takeaways from interviews by Discover magazine with the world’s best experts in multiple disciplines. This series is inspired by Peter Kaufman’s take on the multidisciplinary approach to thinking. Peter spent 6 months reading 140+ of these interviews, and came out knowing “every single big idea from every single domain of science”. I wrote more about Peter’s insightful ideas in this article.

Credit: Special thanks to ValueInvestingWorld for compiling the interviews in a single PDF here.

Former Head of Chicago’s Field Museum John McCarter

John McCarter is the CEO & president of the Chicago Field Museum. He “oversees the work of 200 scientists” on diverse research topics, from protecting endangered tropical environments to molecular evolution. He is also “one of the leading critics of the intelligent design movement” (which argues life was created by an intelligent cause, or God) and “an outspoken proponent of teaching modern evolutionary theory to all students.” Read the original interview in the May 2006 issue of Discover magazine here.

Why it’s hard to sustain kids’ interest in science

McCarter thinks there are two challenges to science education:

First, “kids get turned off to science at some point—fifth grade, sixth grade, seventh grade —when science is perceived as too hard and too complicated.” He proposes counteracting the problem “by telling stories”:

We try to make the museum experience telling enough that it becomes a conversation with families over the dinner table two nights later.

John McCarter

Second, it’s hard to attract or sustain attention amidst the “competition for time” in the digital age:

Two comedians with light talk on CBS and NBC had 80 percent of the market in that time slot…yet only 2 percent of the population is listening to NPR (National Public Radio). I think institutions like this don’t have a crack at people’s attention and time, so you have to be really good at delivering messages or explaining controversies in a way that sticks in people’s minds.

John McCarter

Museums in the science vs. religion debate

Shortly before the interview with McCarter took place, the Chicago Field Museum launched an exhibit – Evolving Planet – in March 2006. It showcased the 4-billion-year evolutionary journey of life on Earth.

McCarter shares that the Evolving Planet exhibit was motivated by dissatisfaction with earlier exhibits on evolution, which were “constructed in such a way that visitors rushed through to get to the dinosaurs”.

Yet, he was also challenged on whether this exhibit was intended to promote the evolutionary perspective (that he is a strong advocate of):

Interviewer
What is the harm in telling the other story (of religious narrative)?
* * *
John McCarter
I don’t think there is any harm, as long as it is not posed as a scientific alternative to the story of evolution.

McCarter believes religion itself has undergone a shift:

The mainstream theological community is already way beyond the literal interpretation of the biblical accounts of Adam and Eve and the Garden of Eden and seven days of creation. Instead, they are saying that those are wonderful stories, created 2,000 years ago by people who were trying to explain their world, not that they are scientific fact.

John McCarter

For McCarter, the key issues in theology worth focusing our attention on are “applied morality of behavior and guidance”.

McCarter shares that the population that visits museums is skewed toward those who subscribe to evolutionary theory (rather than religious explanations of intelligent creation). He cites that ~50% of the US public accepts evolutionary theory, but the figure rises to 75% among museum-goers.

“And for those people who don’t accept it (evolution theory), the exhibit may enable the families to have a discussion about what their 15-year-old saw and how that fits into the overall faith of the family. We are not against religion. We are very supportive of religions and religious institutions. Much of this museum is a celebration of the impact of religion on cultures. But we do that in anthropology. We don’t do that in paleontology.”

Museum as a powerful storytelling platform

I particularly like this Q&A snippet in the interview:

Interviewer
It seems museums have switched from being repositories of artifacts and information and history to being advocates for a specific viewpoint?
* * *
John McCarter
I don’t think I’d call it advocacy…I call it storytelling…You would see an object, but there was no contextual story around that object. What we are doing now is using the artifacts to tell a story.

Museums don’t just lay out facts – they use facts to present a story, a narrative. Museums could be another powerful form of storytelling or propaganda.

Stay tuned for more articles in the “Big Idea” series! And please share interesting “big ideas” by reaching me at fullybookedclub.blog@gmail.com or on LinkedIn.

Enjoyed reading this? Apart from publishing articles on this blog, I also send out a newsletter with original content and curated ideas. Subscribe here or view past issues here.