Thursday, October 19, 2017

Nightmare Boss: The Dangers of Sleep-Deprived Leadership


Let’s say you’re a sleep pro. You’ve learned the key role of REM cycles in encoding new memories, and the troubling statistics linking drowsy driving with automobile accidents. You’ve taken steps to log your 7 to 9 hours every night, and are enjoying the benefits of a healthier, more energetic you. But there is still a way that sleep deprivation could be shaping—and harming—your day-to-day life.

You may be sleeping enough. But what about your boss? What about your boss’s boss?

After all, if it’s unwise to operate heavy machinery on a sleep debt, it’s surely a bad idea to operate an entire business, with millions of dollars—not to mention the careers of real-life human beings—on the line. Yet it seems to be all too common. In a recent survey of more than 180 business leaders, 43% reported getting insufficient sleep at least four nights a week. In other words, for more than four out of ten respondents at the top of the corporate ladder, those unrested nights are more common than not.

When we list the attributes of a good leader, “a refreshed prefrontal cortex” likely doesn’t make the list. Yet, without a thorough nightly recharge of this region, higher thinking suffers, and the result can be some classic Nightmare Boss behavior:

  • Low attention span. The connection between willpower and sleep is well-known at this point, and that means a habitually tired leader may struggle with any activity that requires extended focus (such as reading and internalizing an entire memo, or staying mentally present in a meeting).
  • Murky strategic thinking. Since the prefrontal cortex governs long-term goal-setting, a boss who doesn’t get enough Z’s might be noticeably more impulsive in their judgment calls—for instance, dropping an important vendor due to some petty perceived slight, or promoting an unqualified person based on what seems to be a whim.
  • Temper tantrums. If you’ve ever observed a toddler who skipped their daily nap, you have some insight into the link between slumber and mood. While we should be careful not to make too many excuses for cruel behavior, some studies have shown that sleep-deprived people are more likely to verbally abuse their underlings.


Unfortunately, our culture tends to make excuses for powerful people who display unconventional or even unpleasant behavior. An absent-minded, impetuous fry cook is unfit to work at McDonald’s, and yet those same qualities might feed into a CEO’s reputation as an “eccentric genius.” When we hear about some visionary leader sleeping only four to five hours a night, we sometimes even view this as a sign of their superior mental powers—and yet studies show that staying awake for 20 consecutive hours impairs one’s mental faculties to a degree comparable to a .10 blood alcohol content. That’s legally drunk, in the U.S.

What can you do if you suspect your boss—and by extension, your entire workplace experience—could benefit from better sleep habits? The answer may depend on your relationship with your particular supervisor. If they respond well to feedback, maybe share a thinkpiece or two about the benefits of slumber. If they’re too cranky and muddled to take a look at their own lifestyle, it may be time for a wake-up call of your own: time to start updating that resume.

Thursday, October 12, 2017

The Frequency Illusion: How Not to Fall for Argle-Bargle


Let’s say you’re reading up on landmark Supreme Court decisions of the past decade. (Hey, everybody needs hobbies.) You’re skimming an article from 2013 about the striking down of DOMA, when you encounter a soundbite from the late Justice Scalia, dismissing his opponents’ arguments as “legalistic argle-bargle.”

Hold on a minute. You blink at the page. “Argle-bargle”? Is that a typo? If so, what would it possibly be a typo for? You backtrack and discover, nope, that quote appears in multiple sources.

Just what’s going on here? Had Justice Scalia taken leave of his senses? Your answer might depend on your political beliefs, but at least as far as this particular phrase goes, the man was not talking nonsense. As The Atlantic explained to its confused readers at the time, Antonin was simply using an old-fashioned Scottish word for “nonsense,” basically an across-the-pond equivalent to “mumbo-jumbo.”

You absent-mindedly add a new word to your vocabulary, and continue on with your work.

And that’s where things get weird.

The next morning, you check Facebook. Your co-worker who loves goofy board games is posting about a new card game where you play by creatively insulting your friends. Its name? Argle-Bargle.

That weekend, you decide to watch a few old episodes of The Simpsons, but all is not well in Springfield. “Tonight on Smartline,” says cartoon news anchor Kent Brockman, “the power plant strike: argle-bargle, or fooforaw?”

This is no longer a one-time oddity, and you’re a little shaken. Maybe it’s time to wind down with some music. Didn’t your friend recommend some mellow acoustic group from Birmingham last month? You click back to the link she sent you, and the band’s name stares back at you. Argle-bargle.

Before reading that article, you wouldn’t have been able to recall hearing or seeing the phrase a single time in your entire life, and now it seems inescapable. What gives? Are you being haunted by the ghost of Antonin Scalia?

Before you call the Ghostbusters, you might want to Google “frequency illusion.” It’s also known as the “Baader-Meinhof effect.” (Interestingly, it’s not named for its discoverers. In 1994, a reader wrote in to the St. Paul Pioneer Press remarking that, after learning about the Baader-Meinhof group of West German radicals, their name seemed to be popping up everywhere.)

The good news is: it’s all in your head. The brain has limited capacity to sort through the vast amounts of information we’re exposed to every day. You might be surprised how easy it is for one random data point to get tagged as relevant and rise to the top of the pile. Personal relevance is a big part of this—you’re probably pretty good at picking out the sound of your name in a crowded restaurant, or the sight of a loved one in a sea of strangers. However, novelty is a factor as well. A word is much more likely to jump out at you if you’ve just learned what it means.

Your co-worker could’ve even posted about that card game before, but without context, maybe the name failed to catch your eye as you scrolled through social media. Your friend could’ve mentioned that band at a party last week, but maybe you didn’t notice because someone else was calling your name. 

So no, there's no conspiracy here. Nothing has changed, except, temporarily, the filter through which you view the world. The grouchy vengeful spirit of a conservative Supreme Court Justice is not following you around. That’s just a bunch of—well, you know.



Thursday, October 5, 2017

How the Decoy Effect Makes Dupes of Us All


Imagine that, on the second morning of a business trip, you go to unzip your suitcase, only to realize you forgot to pack any pants. With just a few hours before a big presentation, you duck into the closest mall, for a shopping trip with a firm deadline.

You step into a promising store, and with the help of a sales clerk, you quickly narrow the pool of potential pants to just two options.

Pair A is stylish and flattering, but they’re pushing your emergency pants replacement budget at $70.

Pair B won’t turn any heads, but they fit fine, and they’re on sale for $20.

‘Is it worth it to splurge on a pair of pants I’ll only need to wear once?’ you ask yourself, frowning in the dressing room mirror. ‘The cheaper ones will get the job done, and it’s not like I don’t have nice pants at home.’

Just then, the clerk knocks on the door. “Hey,” she says, “as long as you’re trying on slacks, we just got a new shipment in, and I think we’ve got your size.” She passes you a new pair of pants over the door, and you try them on. They’re not as show-stopping as that $70 pair, but they do complement your shirt better than the $20 pants.

Then you check the price tag: $100.

Suddenly, dropping $70 for a high-quality pair of pants doesn’t seem like that big of a deal. They’re the nicest option, and not even the priciest. You quickly come to a decision.

Walking out the door five minutes later, you are $70 lighter, but you have a great new pair of pants, and that inner glow of a savvy shopper.

Congratulations: you just got played.

Psychologists call it the decoy effect. When trying to choose between options (Pair A or Pair B) by weighing several variables (in this case, price and pants quality), your perceptions can be thrown off by introducing an extra option that is inferior in all ways to one candidate, while still preferable in some ways to the other one. (There’s a term for this kind of asymmetric dominance of traits. Actually, the term is “asymmetric dominance.”)

To tilt you towards the more expensive, higher-quality pair of pants, Option C needs to be more costly than Option A, while also not quite as nice.

Conversely, a sales clerk determined to save you money (perhaps someone not on commission) might bias you towards the affordable if middle-of-the-road Option B pants by suggesting you try on a pair of $30 trousers made with cheap fabric and sloppy stitching. Now Option B, with its $20 price tag, looks like a steal.
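If you want to see the trick laid bare, the logic is simple enough to spell out in code. Here is a minimal sketch in Python, using invented price and quality numbers for the pants above, of the dominance check at the heart of the effect:

```python
# Invented attribute scores for the pants example above.
# Lower price is better; higher quality is better.

def dominates(a, b):
    """True if option `a` is at least as good as `b` on every
    attribute, and strictly better on at least one."""
    at_least_as_good = a["quality"] >= b["quality"] and a["price"] <= b["price"]
    strictly_better = a["quality"] > b["quality"] or a["price"] < b["price"]
    return at_least_as_good and strictly_better

def is_decoy(candidate, target, competitor):
    """A decoy is dominated by the target but NOT by the competitor:
    it flatters the target without threatening it."""
    return dominates(target, candidate) and not dominates(competitor, candidate)

pair_a = {"price": 70, "quality": 9}   # stylish and flattering
pair_b = {"price": 20, "quality": 5}   # fits fine, won't turn heads
pair_c = {"price": 100, "quality": 8}  # the clerk's "new shipment"

print(is_decoy(pair_c, target=pair_a, competitor=pair_b))  # True
```

The $100 pair loses to Pair A on both price and quality, yet still beats Pair B on quality. That lopsidedness is exactly what nudges you toward the $70 splurge.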

At the end of the day, the human brain is bad at weighing multiple factors at once, and easily swayed by any available reference points—and companies know this. This isn’t a conspiracy theory; Duke University’s Joel Huber first published an experiment demonstrating the influence of those asymmetrically dominated decoys in 1982. These days, marketers actively discuss the Decoy Effect as a strategy to influence customers towards the “right” decisions.

In his book Predictably Irrational, Dan Ariely refers to this phenomenon as the “secret agent” of many decisions. It’s hard to guard against, because on some level, it looks like smart comparison shopping. Your sales clerk might not have even consciously noticed the subtle manipulations at play—after all, her job is to show you all the pants. A kind clerk might even be relieved to see that you didn’t go with that costly, inferior pair.

Still, there’s something creepy about never knowing which of your consumer decisions might be shaped in part by marketing executives in a boardroom somewhere. Perhaps the best way to get around this—assuming you’ve got the time—is to comparison shop not just between items but between stores. After all, companies have no incentive to compete against themselves, but they do want to offer better deals than their nemeses.

So the next time you need to make a purchase, don’t just consider your options carefully; consider them wisely. Maybe you can even make a game out of trying to spot the decoys…


Thursday, September 28, 2017

The Problem(s) With Myers-Briggs in Business

Imagine going to a corporate team-building event where everyone in your office was sorted and evaluated based on the results of taking one of those time-wasting Buzzfeed quizzes, like “Which Disney Character Would You Be Friends With?” or “Which Muppet Are You?” It might make for an interesting afternoon, or at least a weird story to tell the next time you’re at a party, but it probably wouldn’t leave you feeling like you had gained some crucial self-insight to help you function better at work or in your home life.

Clicking through an online quiz might be a decent way to procrastinate, but in the back of our minds, we all know that Buzzfeed doesn’t employ a lab of white-coated psychologists to study, for instance, the deep-seated implications of preferring Cartoon Network to Nickelodeon. It may be fun, but we wouldn’t call it science.

This brings us to the Myers-Briggs. You’ve probably heard of Myers-Briggs, or as it’s more formally known, the Myers-Briggs Type Indicator survey, or MBTI. Your boss has probably heard of it, too. According to Business Insider, as of 2014, it was used by at least 89 of the Fortune 100 companies.

The Myers-Briggs purports to divide people into 16 “types,” by placing you on four different continuums: are you more extroverted or more introverted (E or I)? More likely to intuit subtleties or focus on sensing objective information (N or S)? Driven more by thinking or by feeling (T or F)? Do you prefer to come to solid conclusions or continue to keep your options open (J or P)?
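Where does that number 16 come from? It’s simply every way of picking one letter from each of the four axes, as this quick Python sketch shows:

```python
from itertools import product

# One letter per axis: E/I, N/S, T/F, J/P.
axes = ["EI", "NS", "TF", "JP"]

types = ["".join(combo) for combo in product(*axes)]
print(len(types))  # 16, i.e. 2 ** 4
print(types[:4])   # ['ENTJ', 'ENTP', 'ENFJ', 'ENFP']
```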

Each possible four-letter combination is associated with an archetype, and each archetype is said to approach the world in a certain way, with certain skills, shortcomings, wants, and needs. For instance, a type more adept at reading people might be better suited to a job with a lot of face-to-face communication.

There’s just one problem with this.

Actually, a couple of problems.

Okay, more than a couple.

Developed in the 1940s by a mystery novelist and a magazine writer, neither of whom had any formal psychological training, the MBTI is based on the 1923 English translation of Carl Jung’s book Psychological Types—which itself was based on Jung’s own personal observations, anecdotes, and guesses; hardly rigorous, by today’s standards. Even Jung considered this kind of typology an oversimplification of his ideas, noting that each and every individual is “an exception to the rule.”

While the MBTI may frequently be used by successful businesses, we should be careful not to mistake correlation for causation. If 89 of the Fortune 100 companies also use the same brand of water cooler or coffee bean, do we assume that brand is helping them succeed, or simply that it’s broadly popular?

In “Goodbye to MBTI, the Fad That Won’t Die,” Adam Grant quotes management researchers William Gardner and Mark Martinko, who, after a comprehensive review of the MBTI literature, noted that “Few consistent relationships between type and managerial effectiveness have been found.” Interestingly, that review was published in 1996; the information has been available for a while.

Criticizing the Myers-Briggs has enjoyed a bit of a resurgence over the past couple of years, in pieces like Vox.com’s video “Why the Myers Briggs test is totally meaningless,” an episode of the myth-debunking series Adam Ruins Everything, and Vice.com’s bluntly titled “The Myers-Briggs Personality Test is Bullshit.”

As some have pointed out, retaking the test just a few months later can yield dramatically different results, suggesting the measured traits are not so immutable after all.

In addition, it's been argued that measuring people on these four particular axes creates a number of false dichotomies. A person can avidly gather both facts and interpretations of those facts, weighing them roughly equally. A person could also not feel particularly driven by either type of analysis. A person could selectively, in some areas of their life, seek hard data, while taking a more emotional, subjective approach to other issues. And all three people could find themselves placed at the midpoint between S and N, without much commonality to their experiences.

And as comprehensive as that four-letter profile can sound, even two people who consistently test as Extroverted, iNtuitive, Feeling, and Judging might behave in wildly different ways depending on their personal values—for instance, a group-oriented ENFJ might use their assertiveness and people-reading to advocate for fairness and harmony in a workplace, while a more individually-driven ENFJ might seek to influence a team in order to advance their own ideas.

Certainly, it can feel good to read about your “type,” and if you’ve never stopped to consider whether you’re more extroverted or introverted, you might even feel like you’ve gained some additional understanding of yourself. On the other hand, the same could be said for asking a couple of ’90s kids to debate which Hogwarts house they belong in. Maybe the real magic of your average personality test is just giving people an excuse to open up and describe themselves. As team-building exercises go, it beats trust falls.

Still, the next time your boss tries to institute a Myers-Briggs training module, maybe you should suggest a Buzzfeed quiz instead. It’s free, roughly as scientific, and anyway, at least you could finally confirm which coworker is your office’s Cookie Monster…


Thursday, September 21, 2017

The Pygmalion Problem

Just how much is your performance shaped by the way your boss talks to you? To what extent are our behaviors, attitudes, and identities simply a reflection of somebody else’s expectations?

This has been the subject of controversy since at least 1968, when Robert Rosenthal and Lenore Jacobson published their book Pygmalion in the Classroom. In it, Rosenthal and Jacobson described studies in which first- and second-grade teachers seemed to treat students differently based on how smart they believed each child to be, which in turn had a direct impact on that child’s actual performance. This held true even when researchers lied to the teachers about the children’s intelligence—children who were labeled as gifted were treated as such, and still performed as such, independently of their actual IQs.

The following year, J. Sterling Livingston wrote his now-legendary article for the Harvard Business Review, “Pygmalion in Management.” Citing several studies, Livingston asserted that the same held true in the workplace: when managers expected more from their employees, those employees tended to deliver. Expectations set the tone, and tone had a powerful impact on the entire work culture. As with any new concept in the business world, it came with a catchy term: the Pygmalion Effect.

Could your boss turn your whole office around if only he or she believed in you guys a little more? Could it really be that simple?

Well, probably not.

As Katherine Ellison discusses in her piece “Being Honest About The Pygmalion Effect,” there has always been plenty of pushback to Rosenthal and Jacobson’s work. For one thing, the phenomenon seems to work only when the cues are entirely subconscious. In later versions of the study, telling the teachers that it was important to treat their students like smart kids did not yield meaningful results; the teachers had to truly believe the children possessed high IQs, in this case because they were lied to. Apparently, the proper signals were sent only when the teachers had no idea it was happening.

In other words, a boss who routinely underestimates their workers is unlikely to study the Pygmalion Effect at a corporate retreat and come back with renewed ability to inspire. He or she has likely already formed an opinion about you—right or wrong—and you can’t mirror the results of the original study without somehow bringing in a researcher to make the case for your super-competency and get that positive feedback loop rolling.

So, where do we go from here?

Perhaps the secret moral of the story is to remember that the entire concept of IQ is hogwash. As Carol Dweck ably demonstrated in her book Mindset, our level of intelligence, our potential to learn and achieve, is not a static, fixed trait that can be measured with a single number. The best way to improve performance—from children or employees—is to reject these binary labels entirely, and to create an environment that allows for growth and genuinely rewards embracing challenges.

Some critics of the Pygmalion Effect have advanced the arguably more empowering “Galatea Effect,” which suggests that the real secret is to set your own high expectations. By pursuing loftier goals, the theory goes, you will strive harder, creating your own positive self-fulfilling prophecy. Again, however, we run the risk of oversimplifying. And as a management technique, this seems difficult to implement: how do you force your employees to believe in themselves? Inspirational posters can only do so much.

It is not just a matter of dreaming big (as you stand at the top of a majestic mountain, arms outstretched, with a glorious sunset in the background). Dweck’s work suggests that people can’t reach their full potential without being willing to sometimes fail. Stretching yourself means taking risks, and not every risk pans out; that’s why they’re called “risks.” 

Perhaps when a student or a worker is labeled as exceptional in a Pygmalion study, the people in charge don't just expect more, but paradoxically, show greater patience when the learning curve rears its ugly head. A "low IQ" child who doesn't instantly grasp a concept may be dismissed as "stupid," but a child who has been labeled gifted, struggling in the exact same way, may come off more like a sensitive eccentric who just needs a little more time to get up to speed.

In other words, being a truly effective boss isn’t a mere matter of setting big goals. Great leaders see their team as capable of great things, but they also know that lasting, sustainable success requires sending the message that triumph is within reach even if the journey gets a little messy along the way—and that’s okay. There must be room to soar, but also room to occasionally crash, get back up, and learn from the fall.

Put more simply, the most effective bosses may be the ones who respect their employees, while also still seeing them as, well, human beings.


Okay, maybe it is kind of simple after all.


Thursday, September 14, 2017

Fallen Empires and Phony Wine: The Seersucker Effect

Making iffy predictions about our future: it’s been a hallmark of human behavior for at least as long as we’ve been recording our present.

Perhaps the most famous early example of forecasting took place at Delphi, on the slopes of Mount Parnassus in Greece. From roughly the 8th century BCE to the 4th century BCE, for nine days each year, travelers could pay to pose their burning questions to the Pythia, a priestess said to channel the wisdom of Apollo.

To a modern reader, perhaps a bit skeptical about the whole Greek gods thing, it may be unsurprising to learn that the Pythia phrased her answers as riddles, arguably granting her some convenient wiggle room.

For instance, as the story goes, King Croesus once asked the Pythia if he should attack the neighboring Persian Empire. “Cross the border and a great empire will fall,” she told him. Encouraged, Croesus brought his troops over the border—and suffered a crushing defeat. Her reaction was to coolly point out that he had failed to make her specify which empire.

(Anyone who has ever spent an hour playing Simon Says with a pedantic seven-year-old can imagine how Croesus must’ve felt.)

Why would the ancient Greeks, the very people who brought us the root word for “logic,” buy into a riddle-based forecasting system? In a Guardian.com book review of Michael Scott’s Delphi: A History of the Centre of the Ancient World, writer James Davidson draws a direct parallel between the Pythias of yesteryear and a fund manager of today: “someone who gets paid vast sums for divining the future even though their well-informed bets produce slightly worse results than the stock-market average.”

Statistician Salil Mehta, who actually took the time to crunch the numbers, has emerged with an even grimmer picture of these modern day stock soothsayers. “It’s not easy to be as bad as they are,” Mehta told a New York Times reporter in 2016. “They are much worse than random chance alone would predict.”
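To make “random chance alone” concrete: picture a forecaster who calls each stock’s direction by flipping a coin. Here’s a toy simulation in Python (an illustration of the baseline, not Mehta’s actual methodology):

```python
import random

random.seed(42)  # make the coin flips reproducible

n_calls = 10_000
predictions = [random.choice(["up", "down"]) for _ in range(n_calls)]
outcomes = [random.choice(["up", "down"]) for _ in range(n_calls)]

# With both predictions and outcomes random, agreement hovers near 50%.
hits = sum(p == o for p, o in zip(predictions, outcomes))
print(f"Coin-flip forecaster accuracy: {hits / n_calls:.1%}")
```

That roughly fifty-fifty baseline costs nothing to achieve, which is what makes the finding above so damning: the paid professionals landed below it.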

And yet stock forecasting remains a lucrative career. TV news networks continue to employ the same political pundits who spectacularly failed to predict the 2016 election cycle. In a 1993 survey covering the 50 largest cities in the U.S., one third of police departments reported that they had accepted predictions from “psychic detectives.” And no matter how many times a group of wine experts fails the simplest of tests—for instance, not even noticing that their “red wine” is simply a glass of white with dye in it—you still tend to believe your wine snob buddy when he insists he can detect notes of oak, moss, and autumn leaves in his Cabernet.

In 1980, writing for Technology Review, J. Scott Armstrong of the University of Pennsylvania Marketing Department coined a somewhat cynical term to describe this phenomenon of clinging to false prophets: the “seersucker effect.” As Armstrong puts it, “No matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers.”

The 37-year-old paper is more than a little dated—Armstrong cites one study that measured the ability to predict whether a given stranger was “heterosexual or homosexual,” comparing a layperson’s guesses to those of a “relevant expert” (Armstrong never explains what field of study this would even be). He also had no way of knowing that by 2017, technology would allow for at least one type of hyper-accurate seer: as Nate Silver has noted, meteorologists continue to improve their predictions all the time.

Still, Armstrong provides some useful insight into why, millennia after the fall of Ancient Greece, we are still putting our faith in fortune-tellers: it’s not just about wanting to trust someone, he suggests, but the desire to offset some of the responsibility of a decision. Even when we know that third party could be fallible, we want to be able to say we went with the expert verdict. That way, if the results are disappointing, we can absolve ourselves. Sometimes, we want somebody else to take the wheel so badly, we don't pause to check whether or not they can drive.

If there’s a little King Croesus in all of us, maybe all we can do is remember to ask some key follow-up questions...

Thursday, September 7, 2017

Confirmation Bias and the Parasites Nesting in Your Brain

"What is the most resilient parasite?” asks Leonardo DiCaprio in the 2011 movie Inception. “Bacteria? A virus? An intestinal worm?”

Before anyone can ask what exactly he’s getting at—or volunteer some gruesome facts about, say, toxoplasmosis—DiCaprio answers his own question. “An idea,” he declares, with a meaningful squint. “Resilient... highly contagious. Once an idea has taken hold of the brain, it’s almost impossible to eradicate.”

Metaphors aside, Leo’s got a point. We all know someone clinging steadfast to a belief that is provably false. How can an otherwise intelligent person end up with such blatant blind spots, seemingly immune to logic or fact? One culprit is confirmation bias.

Confirmation bias occurs when you selectively choose what information to value, and what information to discard, based on the picture you want to paint for yourself. Imagine a detective who needs to believe his client is innocent so badly, he keeps insisting that their alibi is the most important piece of evidence, all while maintaining that the bloody knife found in their glove compartment means nothing at all.

There’s a reason we do this. Life is full of contradictions, and the human brain was not built to wrestle with constant paradoxes. In order to assemble a coherent view of the world around us, sooner or later, we all have to do some mental sorting—whom to listen to, how to interpret what we hear, and what even matters. Ideally, we’ll pursue the truth above all else, considering the evidence as it emerges, questioning our assumptions, and constructing the ultimate solution from the facts we’ve gathered.

The trouble is, we are far more emotional than we want to believe. Left unchecked, our feelings seep into the proceedings, and that means all too often, not only do we give our idea-parasites a free ride—we actually let them call the shots.

Take job interviews. What qualities are important in a police chief? One study had participants examine two (fictitious) resumes: one for a streetwise risk-taker beloved by local cops but with a low aptitude for paperwork, and one for an educated administrator with an excellent understanding of procedure but no rapport with the rest of the force. The survey respondents then had to decide which traits to prioritize in hiring. The catch: at random, one applicant was assigned the name “Michael” and the other was “Michelle.” 

Which qualities were overall valued more highly? Whichever qualities were attached to Michael.

When the more educated candidate was assumed to be male, the respondents thought education was a key requirement for a police chief. When Michelle was the educated one, respondents were noticeably more likely to devalue education. This pattern showed up slightly among female survey takers, and significantly more strongly among men.

Interestingly, the pro-male bias was especially pronounced among survey takers who rated themselves as “highly objective.”

So, how do we avoid these confirmation bias pitfalls?

1. Have the courage to examine your own built-in biases.

There’s a word for people who insist they are always purely rational and objective: wrong. Your point of view is informed by your experiences, your politics, and the types of news you consume. Ask yourself, “What do I already believe? What do I want to believe? How might this color my perceptions?”

2. Try to define your guidelines ahead of time, and apply them evenly to all cases.

If a scientific study only surveys twenty people, even a freshman biology student could tell you that the sample is too small to draw any real conclusions. But we’re far more likely to notice this sort of shortcoming if we don’t agree with the results.

Take the police chief hiring study cited above. It was an attractive example to use in this blog post because it so keenly illustrated our point. However, had it failed to emphasize the power of confirmation bias, a person writing a post like this one might be more likely to decry its sample size of 73 people and caution against drawing broad conclusions. In this case, a 2015 meta-study looked at 136 hiring bias studies—with an overall sample size higher than 20,000—and found similar results. Still, this is something to always look out for. (For a rough sense of the math behind sample sizes, see the sketch after point 4.)

3. Make an effort to reject black-and-white thinking.

We can all name public figures whose work is important to us. However, be careful not to put any human on a pedestal. If you can begin by accepting that someone you like can still make mistakes—even big mistakes—you are more likely to retain a degree of critical thinking about them, thus leaving the door open to re-evaluate your opinion if need be.

4. Consider your sources.

Expertise matters. A climate scientist is more likely to have an accurate stance on climate change than a politician. A nutritionist is a better source on the health benefits of pomegranate juice than, say, the label on a bottle of pomegranate juice. And in both cases, if ninety-five percent of climate scientists or nutritionists have the same position on something, their word should be weighted more heavily than a random holdout’s—especially if that random holdout is, say, in the pocket of Pom Wonderful.
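As promised back in point 2, here is a back-of-the-envelope sketch of why sample size matters so much. It uses the standard 95% confidence approximation for a measured proportion (margin of error = 1.96 × √(p(1−p)/n)), evaluated at the worst case p = 0.5 and at the sample sizes mentioned earlier:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) at the sample sizes discussed in point 2.
for n in (20, 73, 20_000):
    print(f"n = {n:>6}: +/- {margin_of_error(0.5, n):.1%}")

# n =     20: +/- 21.9%
# n =     73: +/- 11.5%
# n =  20000: +/- 0.7%
```

With twenty respondents, the statistical noise can swamp the very effect you’re looking for; with twenty thousand, it all but disappears.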


When it comes to ideas we cherish and protect in the face of any challenges, we all have a parasite or two. But to escape the clutches of confirmation bias, we need to remember not to let that parasite steer. 