“Memory is insubstantial. Things keep replacing it.”

~Annie Dillard, “To Fashion a Text,” 1988

 

Often in life we do not expect something to happen until it does, whereupon—seeing the forces that produced the event—we feel unsurprised. We call this phenomenon the hindsight bias (aka the I-knew-it-all-along phenomenon).

 

Hindsight bias is fed by our after-the-fact misremembering of our before-the-fact views. We don’t just retrieve our memories, we reweave them. Like those who continuously revise Wikipedia pages, our brain often replaces a retrieved memory with a modified one. Memory researchers call this process reconsolidation.

 

Reconsolidation explains some unnerving observations. In several studies, people whose beliefs or attitudes have changed have nevertheless insisted that they always felt much as they now feel. In one study, Carnegie Mellon University students answered a survey that included a question about student control over the curriculum. A week later, after being induced to write an essay opposing student control, their attitudes shifted toward what they’d written—greater opposition to student control. Yet when recalling their earlier opinion, the students “remembered” holding the opinion that they now held.

 

After observing other students similarly denying their former attitudes, researchers D. R. Wixon and James Laird noted that “The speed, magnitude, and certainty” with which the students revised their own histories “was striking.”

 

And so it has been even for President Trump and his self-proclaimed “world’s greatest memory,” as reflected in his evolving views of the coronavirus (documented here and elsewhere):

  • January 22: “We have it totally under control. It’s one person coming in from China. It’s going to be just fine.”
  • February 2: “We pretty much shut it down coming in from China.”
  • February 24: “The Coronavirus is very much under control in the USA… Stock Market starting to look very good to me!”
  • February 26: “We’re going to be pretty soon at only five people. And we could be at just one or two people over the next short period of time. So we’ve had very good luck.”
  • February 27: “It’s going to disappear. One day—it’s like a miracle—it will disappear.”
  • March 6: “I think we’re doing a really good job in this country at keeping it down … a tremendous job at keeping it down.”
  • March 7: “I’m not concerned at all. No, I’m not. No, we’ve done a great job.”
  • March 13: “National emergency. Two big words.”
  • March 17: “I’ve always viewed it as very serious. … This is a pandemic. I felt it was a pandemic long before it was called a pandemic.”

 

Our explanation of these contradictory statements depends partly on the lens through which we view them. Democrats may see them as evidence of dishonesty, Republicans as normal memory flaws. Or perhaps the truth lies somewhere in between.

 

After following a group of adults from the 1930s through the 1990s, George Vaillant observed,  “It is all too common for caterpillars to become butterflies and then to maintain that in their youth they had been little butterflies. Maturation makes liars of us all.” Decades later, his observation holds true—not only for the President but for us all. How easy it is to be wise after events.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

For psychology teachers everywhere—many with students displaced to their homes—the COVID-19 pandemic’s dark clouds offer a potential silver lining: some teachable moments. In so many ways, we are experiencing social psychology writ large, with so much to study.

 

Here’s my initial list of opportunities for online and in-class discussion of social dynamics in action.

 

Concept: The need to belong. We humans are social animals. We live and find safety in groups. We flourish and find happiness when connected in close, supportive relationships. Separation (or, worse, ostracism) triggers pain.

Discussion questions:

  1. Are there ways in which the pandemic thwarts our need to belong?

Possible answers: by social distancing, cancelled communal gatherings (sports, parties, worship), the isolation of off-site learning and work, diminished travel to be with loved ones or for shared experiences.

  2. If so, might the isolation increase risk of physical or mental health problems?

Possible answers: Isolation may exacerbate loneliness and depression, both of which can make people vulnerable to ill-health and, ironically, compromised immune functioning. [P.S. My colleague Jean Twenge offers more on this here.]

  3. Are there ways we can nevertheless satisfy our need to belong?

Possible answers: online meetings through video conferencing; connecting through social media (Facebook’s mission: “to give people the power to build community and bring the world closer together”); FaceTime conversations; acts of caring for those in need or at risk; love-bombing friends and family with messages and emails. Perhaps physical distancing needn’t correspond with social distancing?

 

Concept: The social responsibility norm. Norms are social expectations for desirable behavior. The social responsibility norm is the expectation that we help those in need.

Discussion question: Have you observed or read examples of the social responsibility norm in operation during the current crisis?

Possible answers: People doing grocery runs for neighbors at risk; friends reminding peers “even if you aren’t at risk for serious illness, you need to protect yourself so older and at-risk folks you meet aren’t imperiled and hospitals overwhelmed.”

 

Concept: The availability heuristic’s influence on our fears. Heuristics are thinking shortcuts. The availability heuristic is our automatic tendency to estimate the likelihood of an event by how readily it comes to mind (how available it is in memory). Vivid media images of disasters can therefore lead us to fear things that kill people in bunches (such as plane crashes, even though auto travel is vastly more dangerous).

Discussion question: Although it’s too early to know the coronavirus’ lethality (because we don’t yet know how many people have undiagnosed infections), have you witnessed examples of some panicked people fearing it too much? And of others, by failing to appreciate its exponential future spread, fearing it too little?

Discussion question: Do you agree with statistician-writer Nate Silver’s speculation that these two tendencies (fearing too much and fearing too little) might balance each other?

Concept: Unrealistic optimism. We are natural positive thinkers. In study after study, students have believed themselves far more likely than their classmates to be destined for a good job and salary, and less likely to develop a drinking problem, get fired, or have a heart attack by age 40. Likewise, smokers think themselves less vulnerable to cancer or better able to quit. Newlyweds believe themselves invulnerable to divorce.

Discussion question: If cognitively available COVID-19 horror stories inflate too much fear in some, does unrealistic optimism create too little in others? If so, what are (or were) examples of such? (People, despite initial warnings, flocking to bars and beaches?)

 

Concept: Selective exposure to information. Selective exposure is the human tendency to prefer and seek information and news feeds that affirm rather than challenge our preexisting views.

Discussion question: A recent survey (replicated by NPR/Marist) found that 58 percent of Republicans and 29 percent of Democrats believed “the threat of coronavirus has been exaggerated.” Might selective information exposure explain this difference? If so, how?

Discussion question: Are you selectively exposing yourself solely to news and social media sources that affirm rather than challenge your views?

 

Concept: Group polarization. In experiments, discussion among like-minded people tends to enhance their preexisting views.

Discussion question: In times of crisis, does the internet enable like-minded people to cluster in echo chambers, progressives with progressives and conservatives with conservatives, each group sharing links to sites that affirm its own views?

Discussion question: Does this polarization describe you and your friends?

Discussion question: Are there other ways in which you engage views other than your own?

 

Concept: Individualism vs. collectivism. Cultures vary in the extent to which they prioritize “me” or “we”—personal (my) goals and identity or group (our) goals and identity.

Discussion question: Have you observed examples of individualism or collectivism in response to health or government guidelines for controlling the spread of the virus?  

Possible answers: Individualism: objecting to limits on one’s activities (“I’m fine and not at risk, so why shouldn’t I be able to party with my friends?”). Collectivism: “We’re responsible for each other and could pass the virus on to an older person or someone with an underlying condition.”

Discussion question: Does China’s collectivism help explain its plummeting rate of new COVID-19 cases—from several thousand per day during February to just 27 on March 15?

Possible answer: Students may note that China is more collectivist—more “we” focused—but also more autocratic.

 

Concept: The motivating power of social perceptions. Stock market drops and bank runs occur when people perceive that others will be selling their holdings or withdrawing their money, causing collapse. People who may not think conditions are terrible may create a downturn by fearing that others think so.

Discussion question: Has your community experienced a similar run on goods—by people who may not fear a lack of goods, but worry that others do, and will empty shelves?

 

Concept: Terror management. Some 300 studies explore the effects of reminding people of their mortality. “Death anxiety” provokes varied defenses, which range from aggression toward rivals to shoring up self-esteem to prioritizing close relationships to embracing worldviews and faith that remind us of life’s meaning.

Discussion question: Have you observed any examples of people’s heightened death anxiety and their adaptive responses to it?

 

Concept: The unifying power of a common enemy and a superordinate goal. When diverse people experience a shared threat—a common enemy, a natural disaster, a mean boss—they often feel a kinship, as many Americans did after 9/11. Moreover, working cooperatively toward a shared (“superordinate”) goal can transform distant or conflicting people into friends.

Discussion question: Have you seen instances when the shared threat of a pandemic virus helped someone appreciate our common humanity?

Discussion question:  Have you seen instances when the awareness of the virus made you or a loved one more suspicious of others—whose mere cough might make them seem like an external threat?

 

For psychological science (the most fascinating science, methinks), the world around us is a living laboratory in which we observe powerful social forces at work in others . . . and in ourselves.

 

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

 

 

Paul Krugman’s Arguing with Zombies (2020) identifies “zombie ideas”—repeatedly refuted ideas that refuse to die. He offers economic zombie ideas that survive to continue eating people’s brains: “Tax cuts pay for themselves.” “The budget deficit is our biggest economic problem.” “Social Security is going broke.” “Climate change is nonexistent or trivial.”

 

That triggered my musing: Does everyday psychology have a similar army of mind-eating, refuse-to-die ideas?  Here are some candidates, and the research-based findings that tell a different story:

  1. People often repress painful experiences, which years later may reappear as recovered memories or disguised emotions. (In reality, we remember traumas all too well, often as unwanted flashbacks.)
  2. In realms from sports to stock picking, it pays to go with the person who’s had the hot hand. (Actually, the combination of our pattern-seeking mind and the unexpected streakiness of random data guarantees that we will perceive hot hands even amid random outcomes.)
  3. Parental nurture shapes our abilities, personality, and sexual orientation. (The greatest and most oft-replicated surprise of psychological science is the minimal contribution of siblings’ “shared environment.”)
  4. Immigrants are crime-prone. (Contrary to what President Donald Trump has alleged, and contrary to people’s greater fear of immigrants in regions where few immigrants live, immigrants do not have greater-than-average arrest and incarceration rates.)
  5. Big round numbers: The brain has 100 billion neurons. 10 percent of people are gay. We use only 10 percent of our brain. 10,000 daily steps make for health. 10,000 practice hours make an expert. (Psychological science tells us to distrust such big round numbers.)
  6. Psychology’s three most misunderstood concepts are these: “Negative reinforcement” refers to punishment. “Heritability” means how much of a person’s traits are attributable to genes. “Short-term memory” refers to your inability to remember what you experienced yesterday or last week, as opposed to long ago. (These zombie ideas are all false, as I explain here.)
  7. Seasonal affective disorder causes more people to get depressed in winter, especially in cloudy places and in northern latitudes. (This is still an open debate, but massive new data suggest to me that it just isn’t so.)
  8. To raise healthy children, protect them from stress and other risks. (Actually, children are antifragile. Much as their immune systems develop protective antibodies from being challenged, children’s emotional resilience builds from experiencing normal stresses.)
  9. Teaching should align with individual students’ “learning styles.” (Do students learn best when teaching builds on their responding to, say, auditory versus visual input? Nice-sounding idea, but researchers—here and here—continue to find little support for it.)
  10. Well-intentioned therapies change lives. (Often yes, but sometimes no—as illustrated by the repeated failures of some therapy zombies: Critical Incident Stress Debriefing, D.A.R.E. Drug Abuse Prevention, Scared Straight crime prevention, Conversion Therapy for sexual reorientation, permanent weight-loss training programs.)

 

Association for Psychological Science President Lisa Feldman Barrett, with support from colleagues, has offered additional psychology-relevant zombie ideas:

  • Vaccines cause autism (a zombie idea responsible for the spread of preventable disease).
  • A woman’s waist-to-hip ratio predicts her reproductive success. (For people who advocate this idea about women, says Barrett, “There should be a special place in hell, filled with mirrors.”)
  • A sharp distinction between nature and nurture. (Instead, biology and experience intertwine: “Nature requires nurture, and nurture has its impact via nature.”)
  • “Male” and “female” are genetically fixed, non-overlapping categories. (Neuroscience shows our human gender reality to be more complicated.)
  • People worldwide similarly read emotion in faces. (People do smile when happy, frown when sad, and scowl when angry—but there is variation across context and cultures. Moreover, a wide-eyed gasping face can convey more than one emotion, depending on the context.)

 

Ergo, when approached by a possible zombie idea, don’t just invite it to become part of your mental family. Apply psychological science by welcoming plausible-sounding ideas, even hunches, and then putting each to the test: Does the idea work? Do the data support its predictions?

 

When subjected to skeptical scrutiny, crazy-sounding ideas do sometimes find support. During the 1700s, scientists ridiculed the notion that meteorites had extraterrestrial origins. Thomas Jefferson reportedly scoffed at the idea that “stones fell from heaven.”

 

But more often, as I suggest in Psychology 13th Edition (with Nathan DeWall), “science becomes society’s garbage collector, sending crazy-sounding ideas to the waste heap atop previous claims of perpetual motion machines, miracle cancer cures, and out-of-body travels. To sift reality from fantasy and fact from fiction therefore requires a scientific attitude: being skeptical but not cynical, open-minded but not gullible.”

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

 

In a long-ago experiment by Columbia University social psychologist Stanley Schachter, groups discussed how to deal with fictional juvenile delinquent “Johnny Rocco.” One “modal” group member (actually Schachter’s accomplice) concurred with the others in arguing for leniency and became well liked. A second accomplice, the “deviate,” stood alone in arguing for harsh discipline. At first, the study participants argued with the nonconforming deviate, but eventually they ignored him and then reported disliking him.

 

Recent experiments with children and adults confirm the lesson: Groups respond harshly to members who deviate from group norms and threaten their group identity. Other studies show how agonizingly difficult it can be to publicly state truths after hearing consensus falsehoods from one’s peers, and how “groupthink” suppresses dissent. After President John F. Kennedy’s ill-fated Bay of Pigs invasion, his adviser Arthur Schlesinger, having self-censored his misgivings, reproached himself “for having kept so silent during those crucial discussions.”

 

To dissent from one’s group—one’s fraternity, one’s religion, one’s friends—can be painful, especially when a minority of one.

 

Mitt Romney understands. For being a minority of one in voting for President Trump’s removal, he anticipated being “vehemently denounced”: “I’m sure to hear abuse from the President and his supporters.”

 

And so he has. “I don't like people who use their faith as justification for doing what they know is wrong,” vented the President, before ridiculing Romney for “one of the worst campaigns in the history of the presidency.” Donald Trump, Jr. went further, calling for Romney to “be expelled” from the GOP. Romney, some Congressional colleagues derided, was a “sore loser” who acted “to appease the left” and was “not very collegial.”

 

The rewards of conformity, and the rejection of dissenters, are no secret. As President Kennedy recalled in Profiles in Courage (1955), “‘The way to get along,’ I was told when I entered Congress, ‘is to go along.’” It is a temptation we all face.  When feeling alone, we may silence our voice. We may join a standing ovation for something we do not inwardly applaud. We may succumb to the power of our herd and its leader.

 

And then, feeling some dissonance over conforming, we rationalize. Observing our own silence and our false witness, our mind mutates, and we begin to believe what we reluctantly stood up for. Our attitudes follow our actions, which grow their own self-justifying legs. As C. S. Lewis noted in Mere Christianity, “Every time you make a choice you are turning the central part of you, the part of you that chooses, into something a little different from what it was before.”

 

For those who endure the distress of dissent, there are compensations.

 

First, minorities of one can matter. “All history,” wrote Ralph Waldo Emerson, “is a record of the power of minorities, and of minorities of one.” Think of Copernicus and Galileo, of Martin Luther King, Jr. and Susan B. Anthony, of Rosa Parks and Nelson Mandela. In the short term, these heroes, and the conformity-resisting former senators whom Kennedy later celebrated in Profiles in Courage, were scorned for flouting team play and resisting expectations. It was only later that historians and filmmakers honored their heroism. Mitt Romney can take the long view.

 

Second, experiments on “minority influence” show how a minority of one can matter: When such individuals, despite ridicule, persist with consistency, they can sway their laboratory group, or even change history. Being a persistent dissenting voice may get you disliked and even ignored, but it can also, eventually, stimulate rethinking. It punctures the illusion of unanimity and can enable others to express their doubts. That voice is especially potent when it represents a defection from the ingroup rather than a voice from the opposition. A Republican Mitt Romney is harder for Republicans to dismiss than a Democrat Alexandria Ocasio-Cortez.

 

Ergo, those who dissent—who deviate from group norms and threaten a group’s identity—are often scorned. Yet a persistent, consistent, cogent voice sometimes moves the needle. “If the single man plant himself indomitably on his instincts, and there abide,” said Emerson, “the huge world will come round to him.”

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

A recent Templeton World Charity Foundation conference, Character, Social Connections and Flourishing in the 21st Century, expanded my mind, thanks to a lecture by famed evolutionary biologist David Sloan Wilson. This much about him I had known: His multilevel selection theory argues that evolution favors survival-enhancing group as well as individual behaviors. Within groups, selfishness beats altruism. Yet altruistic groups triumph over selfish groups.

 

What I learned from his lecture and our ensuing dinner conversation was that his passion has shifted to understanding and enabling effective real-world groups—from nonprofit organizations to schools to faith communities to businesses. How might people in such groups more effectively work together to accomplish goals?

 

To enhance work team effectiveness, Wilson and his colleagues suggest implementing a set of basic principles. They point out that groups that effectively manage shared resources, such as irrigation, forests, and fisheries, follow principles that (a) integrate evolutionary principles of group selection with (b) “core design principles” identified by political scientist and economics Nobel laureate Elinor Ostrom, seasoned with (c) behavior-change insights articulated by psychologist Steven Hayes. The resulting eight principles for success:

  1. Strong group identity and purpose. Groups know who they are and what sets them apart from other groups.
  2. Fair sharing of benefits and costs. Proportional sharing (without some members benefiting at the expense of others) puts the group’s advancement ahead of any individual’s.
  3. Fair and inclusive decisions. Consensus decision-making, with uncensored input, enables smart decisions, and, again, safeguards against some benefiting at others’ expense.
  4. Monitoring of agreed behaviors. Tracking results ensures that agreements are honored.
  5. Graduated sanctions. Accountability for misbehaviors ranges from gentle reminders to expulsion.
  6. Conflict resolution mechanisms. When disagreements occur, the group implements fair and fast resolution procedures.
  7. Authority to self-govern. In larger societies and organizations, subgroups are empowered to organize and operate.
  8. Appropriate coordination with other groups. In larger social systems, operating subgroups must integrate with other subgroups.

 

How striking it is, notes Wilson, that the principles Ostrom identified from successful commons resource-managing groups are so similar to “the conditions that caused us to evolve into such a cooperative species.” These principles—when implemented by effective leaders—build a group’s moral foundation, protect it against self-serving behaviors, and allow its members to freely express themselves.

 

To assist groups in implementing the core design principles drawn from evolutionary, political, and psychological science, Wilson and colleagues have authored a book (Prosocial: Using Evolutionary Science to Build Productive, Equitable, and Collaborative Groups), developed a website that offers training and resources, and produced an online magazine that tells implementation stories.

 

Wilson’s life journey—from son of a famous author (The Man in the Gray Flannel Suit) to science theorist to social entrepreneur—is unique. Yet in other ways, his professional pilgrimage is similar to our own . . . as our lives have unfolded in unanticipated ways—sometimes with false starts leading to brick walls, sometimes with gratifying new directions. Little did I expect, when first encountering Wilson’s work, that it would later produce practical resources for helping groups “learn about and adopt design principles to improve their efficacy.”

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

Caring parents understandably want to protect their children from physical harm and emotional hurt. We do this, we presume, for their sakes. And, if the truth be told, we do it for our own as well. Many of us knowingly nodded when Michelle Obama shared the common parental experience: “You are as happy as your least happy child.”

 

But as my friend and fellow social psychologist, Jonathan Haidt, recently explained to a large West Michigan audience, sometimes parental good intentions prepare kids for failure.

 

Haidt began by documenting what I’ve previously described—the stunning recent increase in teens’ (especially teen girls’) depression, anxiety, suicidal thinking, and self-harm (as documented in ER visits). This tsunami of mental health problems has now also reached college campuses, as evident in collegians’ increased depression rates and visits to campus mental health services.

 

What gives? What accounts for this greater fragility of today’s youth? Teen biology hasn’t changed. They’re not drinking more (indeed, they’re drinking less). They’re not working more (they’re less often employed).

 

What has changed, Haidt observed, is, first, technology—the spread of smart phones, the explosion of social media, and the addition of social comparison-promoting social media features, such as visible likes and retweets of one’s posts. Haidt offered correlational studies that associate teens’ social media use with their mental health, and experiments that reveal the emotional benefits of a restrained social media diet. (For more, see this prior blog essay, and Haidt’s recent Atlantic essay, with Tobias Rose-Stockwell: “The Dark Psychology of Social Networks.” See also this new response by his collaborator, Jean Twenge, to skeptics of the social media explanation.)

 

As an antidote to social media’s emotional toxicity (and diminished sleep and face-to-face relationships), Haidt offered three practical family guidelines for healthy media use.

 

He also attributes the increase in youth mental health issues to a second cultural change: Today’s parents often fail to appreciate the “antifragility” principle—that children’s emotions, like their bones and immune systems, gain strength from being challenged. Bones and muscles gain strength from exercise. Immune systems develop protective antibodies from challenges (soaring peanut allergies are a sorry result of routinely protecting infants from peanut exposure). And children’s emotional health and resilience likewise build through their unpleasant experiences. There is truth to Nietzsche’s aphorism, “What doesn’t kill you makes you stronger.”

 

Alas, as Haidt demonstrated by surveying his audience, members of Generation Z (people born since 1996) have grown up more protected—with parents restricting their free roaming until later in childhood. Their grandparents, by contrast, and to some extent their parents, experienced a less restricted “free-range” childhood. (And no, today’s world is not more dangerous—it’s actually much safer than in the 1970s.)

 

Moreover, he argued (also in The Coddling of the American Mind with Greg Lukianoff, and in a new essay with Pamela Paresky), schools are ill-serving students by protecting them from uncomfortable speech. Colleges ill-prepare students for life outside the campus when they suppress unpopular perspectives and offer “safe spaces” and “trigger warnings” that insulate students from “micro-aggressions.”

 

As an alternative approach, Haidt welcomes viewpoint diversity—the thrust of the Heterodox Academy. He and his colleagues also offer resources for open-minded engagement at the new OpenMindPlatform.org.

 

Haidt’s case for viewpoint diversity and open dialogue reminds me of the long-ago wisdom of social psychologist William McGuire, whose experiments taught us an important lesson: Unchallenged beliefs existing in “germ-free ideological environments” are the most vulnerable to later being overturned. To form one’s beliefs amid diverse views is to become more discerning, and ultimately more deeply grounded in less fragile convictions.

 

Ergo, concludes Haidt, to support teen mental health, be intentional about screen time and social media, and remember: character—like bones, muscles, and immunity—grows from challenge.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

Cognitive dissonance theory—one of social psychology’s gifts to human self-understanding—offers several intriguing predictions, including this: When we act in ways inconsistent with our attitudes or beliefs, we often resolve that dissonance by changing our thinking. Attitudes follow behavior.

 

That simple principle explains why smokers often dismiss health warnings, why racial attitudes improved following school desegregation and civil rights laws, and why we tend to dislike those whom we’ve harmed and to love those to whom we have been kind. Although we sometimes do persuade ourselves to act, we also can act ourselves into new ways of thinking. Our deeds forge our understandings.

 

The principle reaches into our political attitudes. Consider how U.S. attitudes followed U.S. behavior as events unfolded during the 2003 war with Iraq, which was premised primarily on the need to rid Iraq of its Weapons of Mass Destruction (WMDs). Four in five Americans told Gallup they believed WMDs would be found, leading four in five also to support the war. Was the war justified even if Iraq did not have WMDs? Only 38 percent of Americans believed it would be; for most, no WMDs would mean no justification for war.

 

When no such weapons were found—and the war’s human, financial, and terrorism-enhancing costs became known—how did Americans resolve their dissonance? They changed their primary rationale for the war from eliminating WMDs to ridding the world of Iraqi President Saddam Hussein. Thus, three months after the war’s launch, the 38 percent who would have supported the war even without WMDs had mushroomed to 58 percent. Despite the discrediting of the war’s initial rationale, support for a war that didn’t eliminate WMDs had increased.

 

Will such self-persuasion ride again in the 2020 American conflict with Iran? Prior to the January 3, 2020, killing of Major General Qasem Soleimani, Americans overwhelmingly disapproved of war with Iran:

  • In June 2019, about 4 in 5 Americans (78 percent) approved of President Trump’s calling off a retaliatory strike after Iran downed a U.S. drone. Few believed that retaliation against Iran was a good idea.
  • In July 2019, only 18 percent told Gallup they favored “military action against Iran.”
  • In September 2019, only 21 percent responding to a University of Maryland survey said that, to achieve its goal with Iran, “the U.S. should be prepared to go to war.”

I wrote the above words on January 8, 2020, and now await follow-up surveys—with the expectation that cognitive dissonance will ride again, as some Americans wrestle with the dissonance between their support for the president and their prior opposition to such military action—a tension that can be resolved by now thinking the retaliatory strike was warranted.

 

* * * *

P.S. Initial post-strike surveys:

  • A January 4-5, 2020, POLITICO/Morning Consult survey reported that “47% of voters approve of President Donald Trump's decision to kill top Iranian military commander Qassem Soleimani while 40% disapprove.”
  • A January 6–7, 2020, post-assassination Reuters/Ipsos survey found that “a growing minority of Americans say they are now in favor of a ‘preemptive attack’ on Iran’s military.” The poll found that 27 percent said “the United States should strike first.”
  • A January 7–8, 2020, USA Today/Reuters survey found Americans concerned about increased threats to U.S. safety, yet 42 percent supported the Soleimani assassination—far more than the 1 in 5 who favored such action in the summer of 2019.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

It’s the new year transition, the line between our last year’s self and our hoped-for healthier, happier, and more productive 2020 self. To become that new self, we know what to do. We know that a full night’s sleep boosts our alertness, energy, and mood. We know that exercise lessens depression and anxiety, sculpts our bodies, and strengthens our hearts and minds. We know that what we put into our bodies—junk food or balanced nutrition, addictive substances or clean air—affects our health and longevity.

 

Alas, as T. S. Eliot foresaw, “Between the idea and the reality . . . Falls the Shadow.” So how, this year, can we move from knowing the needed behaviors to doing them?

 

First, do make that New Year’s resolution. Research by Gary Latham, Edwin Locke, and others confirms that challenging goals motivate achievement. Specific, measurable, realistic goals—such as “finish the business plan by the month’s end”—direct attention, promote effort, motivate persistence, and stimulate creativity.

 

Second, announce the goal to friends or family. We’re more likely to follow through after making a public commitment.

 

Third, develop an implementation plan—an action strategy that specifies when, where, and how you will march toward achieving your goal. Research shows that people who flesh out goals with detailed plans become more focused in their work, and more likely to complete it on time.

 

Through the ups and downs of goal-striving, we best sustain our motivation when we focus on immediate subgoals. Better to have our nose to the grindstone than our eye on the ultimate prize. Better to attend to daily study than the course grade. Better to center on small steps—the day’s running target—than to fantasize the marathon.

 

Fourth, monitor and record progress, perhaps aided by a tracker such as a Fitbit. It’s all the better when that progress is displayed publicly rather than kept secret.

 

Fifth, create a supportive environment. When trying to eat healthy, keep junk food out of the cupboards. Use small plates and bowls. When focusing on a project, hole up in the library. When sleeping, stash the smartphone. Choose the right friends. Such “situational self-control strategies” prevent tempting impulses, Angela Duckworth and her colleagues have found.

 

Sixth, transform the hard-to-do behavior into a must-do habit. Habits form when we repeat behaviors in a given context—sleeping in the same comfy position, walking the same route to work, eating the same breakfast oatmeal. As our behavior becomes linked with the context, our next experience of that context evokes our habitual response. Studies find that when our willpower is depleted, as when we’re mentally fatigued, we fall back on our habits—good or bad. To increase our self-control, to connect our resolutions with positive outcomes, the key is forming “beneficial habits.”

 

“If you would make anything a habit, do it,” said the stoic philosopher Epictetus. But how long does it take to form a beneficial habit? A University College London research team led by Phillippa Lally asked 96 university students to choose some healthy behavior, such as eating fruit with lunch or running before dinner, and to perform it daily for 84 days. The students also logged whether the behavior felt automatic (something they did without thinking and would find it hard not to do). When did the behaviors turn into habits? On average, after about 66 days.

 

Gwyneth Paltrow recalls that when she first started working with a personal trainer, “finding motivation was hard. She advised me to think of exercise as an automatic routine, no different from brushing your teeth, to avoid getting distracted. Now it is part of my life—I exercise Monday to Friday at 10 a.m. and always stick with it.”

 

So choose a behavior you’d like to make automatic. Then do it every day for two months, or a bit longer for exercise. You likely will find yourself with a new habit, and perhaps a healthier, happier, and more productive life.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com, where this essay originally appeared—here.)

 

If you have watched a 2019 Democratic Party debate, you perhaps have taken note: While Pete Buttigieg, Elizabeth Warren, and Cory Booker glide smoothly through their spoken words, Joe Biden sometimes hesitates, stammers, and stumbles. Is he just less mentally agile than his more lucid counterparts?

 

Perhaps we should cut him some slack, suggests John Hendrickson in an upcoming Atlantic essay. Biden experiences the lingering effects of childhood stuttering that made him a subject of mockery. An empathic Hendrickson, himself a stutterer, illustrates from Biden’s July debate:

 

“My plan makes a limit of co-pay to be One. Thousand. Dollars. Because we—”

He stopped. He pinched his eyes closed. He lifted his hands and thrust them forward, as if trying to pull the missing sound from his mouth. “We f-f-f-f-further support—” He opened his eyes. “The uh-uh-uh-uh—”

 

Hendrickson is not the only one who empathizes. As a childhood stutterer who received speech therapy in my Seattle public elementary school, and for whom such dysfluency has occasionally resurfaced in adulthood, I know the dismay of coming up to a word that gets stuck in the roof of the mouth, to everyone’s embarrassment, especially my own. For me, K has been a difficult consonant, and sometimes there seems no other way to call on “K-k-k-kathy.”

 

But often, those who stutter have learned that they can fake normal fluency by backing up and detouring around the verbal roadblock, rendering the impediment invisible. As with Joe Biden’s debate responses, listeners may notice the pauses and mid-sentence changes of direction. They just don’t attribute the dysfluency to stuttering (which Biden also does not blame).

 

And so it happens with the great invisible disability, hearing loss. “Can everyone hear me?” asks the person on stage. Given the inevitable answer from those hearing the question, the nodding heads lead the speaker to think, “I don’t need a mic.” And most in the audience likewise presume all’s well—oblivious to the unseen exclusion of so many of us (and hence my advocacy for user-friendly hearing accessibility technology in such settings—see here).

 

Like stutterers, those of us with hearing loss also finesse awkward situations. At a noisy party or in a restaurant, we fake hearing. As our conversational partner makes unheard social chatter, we smile and nod—not wanting to be a pain by asking people to repeat and repeat. Sometimes our response is inappropriate—smiling at someone’s sadness, or being unresponsive to a question. But mostly, after straining and failing to carve meaning out of sound, our pretending to hear hides our disability.

 

There’s practical wisdom to socially finessing one’s speech or hearing challenges. But some go further to hide their hearing disability. They respond to ads for “invisible hearing aids” that can keep people from knowing that—shame, shame—you have hearing loss. (Shame instead on the hearing professionals whose ads imply that hearing loss is something to be deeply ashamed of, and to hide.) Actually, the more public I am about my hearing loss, the more comfortable I become at seeking people’s help in coping with it—by finding quieter tables in quieter restaurants, facing the wall, sitting with my good ear toward the person, having them speak into that ear, and using a wireless mic that transmits to my hearing aids.

 

We can extend the list of hidden disabilities to include some forms of vision loss, brain injury, chronic fatigue, pain, phobias, dyslexia, depression, dementia, and a host of others. Given the invisibility of such disabilities, we often don’t see the challenges that lie behind everything from a child’s misspellings to a Joe Biden stammer. If only we knew—and if only those of us with the invisible challenges would let others know—we all could be less judgmental, more understanding, and more genuinely helpful.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

Bill Gates wants people he hires to read two of his favorite books: The Better Angels of Our Nature, by psychologist Steven Pinker, and Factfulness, by the late Hans Rosling.

 

I, too, have loved these books, which form a complementary pair. Pinker argues—our current malaise notwithstanding—that the world is getting better. World hunger is abating, and child labor is disappearing. Murder and wars are less common. Literacy is increasing. Given a choice between living a half-century or century ago or today, any sane person would choose today.

 

Rosling mined world data to document these trends and many more. And now the Rosling family’s Swedish foundation is offering stunning dynamic graphic displays of world data.

 

For example, see here and click on the animation for a jaw-dropping depiction of the life-expectancy increase (in but an eye-blink of our total human history).

 

Today’s average human lives much longer, thanks partly to the dramatic decline in child mortality from a time when nearly half of children died by age 5 (and when there was biological wisdom to having more than two children).

 

Other show-the-class goodies are available as well.

 

These facts should whet your informational appetite. For more, explore www.gapminder.org/data. “Gapminder makes global data easy to use and understand.”

 

And then explore www.OurWorldInData.org, founded by Max Roser. This is an Oxford-based source of world data on all sorts of topics. “Our World in Data is about research and data to make progress against the world’s largest problems.” One example presents World Bank/United Nations data on the “missing women” phenomenon in certain countries since the advent of prenatal sex determination.

 

On the commercial side, www.statista.com has a wealth of information—such as, from my recent searching, data on anti-Semitic crime trends, social media use, and dating app usage.

 

For us data geeks, so many numbers, so little time.

 

Not everything is “better angels” rosy. In addition to sex-selective abortions, we are menaced by climate change, nationalism, hate speech, and rampant misinformation. Even so, the Pinker/Rosling message—that in many important ways life is getting better—is further typified by these very websites, which provide easy access to incredible amounts of information that our ancestors could never know.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

 

“Death is reversible.” So began NYU medical center’s director of Critical Care and Resuscitation Research Science, Sam Parnia, at a recent research consultation on people’s death experiences during and after cardiac resuscitation.

 

Biologically speaking, he explained, death and cardiac arrest are synonymous. When the heart stops, a person will stop breathing and, within 2 to 20 seconds, the brain will stop functioning. These are the criteria for declaring someone dead. When there’s no heartbeat, no breathing, and no discernible brain activity, the attending physician records the time of death.

 

Yet recent advances in science reveal that it may take many hours for individual brain cells to die. In a 2019 Nature report, slaughtered pigs’ brains, given a substitute blood infusion 4 hours after death, had cellular function gradually restored over a 6- to 10-hour period. For many years now, brain cells from human cadaver biopsies have been grown in culture up to 20 hours after death, explained Parnia. His underappreciated conclusion: “Brain cells die very, very slowly,” especially for those whose brains have been chilled, either medically or by drowning in cold water.

 

But what is death? A Newsweek cover showing a resuscitated heart attack victim proclaimed, “This man was dead. He isn’t any more.” Parnia thinks Newsweek got it right. The man didn’t have a “near death experience” (NDE). He had a death experience (DE).

 

Ah, but Merriam-Webster defines death as “a permanent cessation of all vital functions.” So, I asked Parnia, has a resuscitated person actually died? Yes, replied Parnia. Imagine two sisters simultaneously undergoing cardiac arrest, one while hiking in the Sahara Desert, the other in a hospital ER, where she was resuscitated. Because the second could be resuscitated, would we assume that the first, in the same minutes following the cessation of heart and brain function, was not dead?

 

Of 2.8 million CDC-reported deaths in the United States annually, Parnia cites estimates of possibly 1.1 million attempted U.S. cardiac resuscitations a year. How many benefit from such attempts? And of those who survive, how many have some memory of their death experiences (cognitive activity during cardiac arrest)?

 

For answers, Parnia offers his multi-site study of 2060 people who suffered cardiac arrests. In that group, 1730 (84 percent) died and 330 survived. Among the survivors, 60 percent later reported no recall of their death experience. The remaining 40 percent had some recollection, including 10 percent who had a meaningful “transformative” recall. If these estimates are roughly accurate, then some 18,000 Americans a year recall a death experience.
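
 

For readers who like to check such numbers, here is a minimal back-of-the-envelope sketch, in Python, of one way to reproduce that ~18,000 estimate. The assumptions are mine, not necessarily Parnia’s published method: that the national survival rate mirrors the study’s 330/2060, and that the figure counts the 10 percent with meaningful recall.

```python
# Rough reconstruction of the ~18,000 figure (my assumptions, not
# Parnia's published method).
attempted = 1_100_000          # estimated U.S. resuscitation attempts per year
survival_rate = 330 / 2060     # ~16 percent survived in Parnia's study
meaningful_recall = 0.10       # ~10 percent of survivors: "transformative" recall

survivors = attempted * survival_rate        # roughly 176,000 survivors
recallers = survivors * meaningful_recall    # roughly 17,600 people
print(f"{recallers:,.0f}")                   # prints 17,621 -- near 18,000
```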

 

NDEs (or DEs) are reportedly recalled as a peaceful and pleasant sense of being pulled toward a light, often accompanied by an out-of-body experience with a time-compressed life review. After returning to life, patients report a diminished fear of death, a kinder spirit, and more benevolent values—a “transformational” experience that Parnia is planning to study with the support of 17 major university hospitals. In this study, cardiac-arrest survivors who do and don’t recall cognitive experiences will complete positive psychology measures of human flourishing.

 

One wonders (and Parnia does, too), when did the recalled death experiences occur? During the cardiac-arrest period of brain inactivity? During the moments before and at cardiac arrest? When the resuscitated patient was gradually re-emerging from a coma? Or even as a later constructed false memory?

 

Answers may come from a future Parnia study, focusing on aortic repair patients, some of whom experience a controlled condition that biologically approximates death, with no heartbeat and flat-lined brain activity. This version of aortic repair surgery puts a person under anesthesia, cools the body to 70 degrees, stops the heart, and drains the blood, creating a death-like state, during which the cardiac surgeon has 40 minutes to repair the aorta before warming the body and restarting the heart. Functionally, for that 40 or so minutes, the patient is dead . . . but then lives again. So, will some of these people whose brains have stopped functioning experience DEs? One study suggests that at least a few aortic repair patients, despite also being under anesthesia, do report a cognitive experience during their cardiac arrest.

 

Parnia hopes to take this research a step further, by exposing these “deep hypothermia” patients to stimuli during their clinical death. Afterwards he will ascertain whether any of them can report accurately on events occurring while they lacked a functioning brain. (Such has been claimed by people having transformative DEs.)

 

Given that a positive result would be truly mind blowing—it would challenge our understanding of the embodied person and the mind-brain connection—my colleagues and I encouraged Parnia to

  • preregister his hypotheses and methods with the Open Science Framework.
  • conduct the experiment as an “adversarial collaboration” with a neuroscientist who would expect a null result.
  • have credible, independent researchers gather the data, as happens with clinical safety trials.

 

If this experiment happens, what do you predict: Will there be someone (anyone) who will accurately report on events occurring while their brain is dormant?

 

Sam Parnia thinks yes. I think not.

 

Parnia is persuaded by his accumulation of credible-seeming accounts of resuscitated patients recalling actual happenings during their brain-inactive time. He cites the case of one young Britisher who, after all efforts to restart his heart had failed and his body turned blue, was declared dead. When the attending physician later returned to the room, he noticed that the patient’s normal color was returning and discovered that his heart had somehow restarted. The next week, reported Parnia, the patient astoundingly recounted events from his death period. As Agatha Christie’s Miss Marple reflected, “It wasn’t what I expected. But facts are facts, and if one is proved to be wrong, one must just be humble about it and start again.”

 

My skepticism arises from three lines of research: the failure of parapsychology experiments to confirm out-of-body travel with remote viewing, the mountain of cognitive neuroscience evidence linking brain and mind, and scientific observations showing that brain oxygen deprivation and hallucinogenic drugs can cause similar mystical experiences (complete with the tunnel, beam of light, and life review).

 

Nevertheless, Parnia and I agree with Miss Marple: Sometimes reality surprises us (as mind-boggling DE reports have surprised him). So stay tuned. When the data speak, we will both listen.

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

 

P.S. For those wanting more information: Parnia and other death researchers will present at a November 18th New York Academy of Sciences symposium on “What Happens When We Die?” (see here and here), with a live stream link to come.

 

For those with religious interests: My colleagues, British cognitive neuroscientist Malcolm Jeeves and American developmental psychologist Thomas Ludwig, reflect on the brain-mind relationship in their recent book, Psychological Science and Christian Faith. If you think that biblical religion assumes a death-denying dualism (thanks to Plato’s immortal soul) prepare to be surprised.

A tweet from my colleague Jean Twenge—a world-class expert at tracking youth well-being in massive data sets—alerted me to the recently released 2018 National Survey on Drug Use and Health. Among its dozens of results, which you can view here, several struck me as worthy of note by high school and college teachers, administrators, and counselors. 

First some good news: From 2002 to 2018, cigarette smoking plummeted; now just 2.7 percent of U.S. 12- to 17-year-olds smoke. Reaching back to 1976, high school seniors’ smoking has plunged even more, from 28.8 percent to 3.6 percent. Although smoking has become gauche, seniors’ e-cigarette use has soared—from 1.5 percent in 2010 to 26.7 percent in 2018. (Will widely publicized news of vaping-related lung illnesses and deaths reverse this trend?)

 

The not-so-good news: From 2011 to 2018, major depressive episodes increased from 11 to 14 percent among 12- to 17-year-olds, and, similarly, from 8 to 14 percent among 18- to 25-year-olds.

 

Not surprisingly, youth and young adults’ increased rate of depression has been accompanied by an increase in suicidal thoughts, suicide attempts, and actual suicides (see new CDC data here).

 

As I explained in a previous TalkPsych.com essay, the increase in teens’ (especially teen girls’) vulnerability to depression, anxiety, self-harm, and suicide has occurred in other Western countries as well, and it corresponds neatly with the spread of smart phones and social media. That fact of life has stimulated new research that 

  • correlates teens’ social media use with their mental health.
  • follows teens longitudinally (through time) to see if their social media use predicts their future mental health.
  • experiments by asking if volunteers randomly assigned to a restrained social media diet become, compared with a control group, less depressed and lonely. 

 

Stay tuned. This scientific story is still being written, amid some conflicting results. As Twenge summarizes in a concise and readable new essay, up to two hours of daily screen time predicts no lessening of teen well-being. But as daily screen time increases to six hours—with associated diminishing of face-to-face relationships, sleep, exercise, reading, and time outdoors—the risk of depression and anxiety rises.

 

The alarming rise in youth and young adult depression, especially over such a thin slice of history, compels our attention. Is screen time the major culprit (both for its drain on other healthy activities and for the upward social comparisons of one’s own mundane life with the lives of cooler-seeming others)? If not, what other social forces are at work? And what can be done to protect and improve youth and young adult well-being?

 

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

At a recent Teaching of Psychology in Secondary Schools workshop hosted by Oregon State University, I celebrated and illustrated three sets of big ideas from psychological science. Without further explanation, here is a quick synopsis.

 

Questions: Which of these would not be on your corresponding lists? And which would you add?

 

Twelve unsurprising but important findings (significant facts of life for our students to understand):

  • There is continuity to our traits, temperament, and intelligence.
    • With age, emotional stability and conscientiousness increase.
    • Yet individual differences (extraversion and IQ) persist.
  • Specific cognitive abilities are distinct yet correlated (g, general intelligence).
  • Human traits (intelligence, personality, sexual orientation, psychiatric disorders, autism spectrum) are influenced by “many genes having small effects.”
  • A pessimistic explanatory style increases depression risk.
  • To a striking extent, perceptual set guides what we see.
  • Rewards shape behavior.
  • We prioritize basic needs.
  • Cultures differ in  
    • how we dress, eat, and speak.
    • values.
  • Conformity and social contagion influence our behavior.
  • Group polarization amplifies our differences.
  • Ingroup bias (us > them) is powerful and perilous.
  • Nevertheless, worldwide, we are all kin beneath the skin (we share a human nature).

 

Eleven surprising findings that may challenge our beliefs and assumptions:

  • Behavior genetics studies with twins and adoptees reveal a stunning fact: Within the normal range of environments, the “shared environment” effect on personality and intelligence (including parental nurture shared by siblings) is ~nil. As Robert Plomin says (2019), “We would essentially be the same person if we had been adopted at birth and raised in a different family.”
    • Caveats:
      • Parental extremes (neglect/abuse) matter.
      • Parents influence values/beliefs (politics, religion, etc.).
      • Parents help provide peer context (neighborhood, schools).
      • Stable co-parenting correlates with children’s flourishing.
  • Marriage (enduring partnership) matters . . . more than high school seniors assume . . . and predicts greater health, longevity, happiness, income, parental stability, and children’s flourishing. Yet most single adults and their children flourish.
  • Sexual orientation is a natural disposition (parental influence appears nil), not a moral choice.
  • Many gay men’s and women’s traits appear intermediate to those of straight women and men (for example, spatial ability).
  • Seasonal affective disorder (SAD) may not exist (judging from new CDC data and people’s Google searches for help, by month).
  • Learning styles—assuming that teaching should align with students’ varying ways of thinking and learning—have been discounted.
  • We too often fear the wrong things (air crashes, terrorism, immigrants, school shootings).
  • Brief “wise interventions” with at-risk youth sometimes succeed where big interventions have failed.
  • Random data (as in coin tosses and sports) are streakier than expected (see the sketch after this list).
  • Reality is often not as we perceive it.
  • Repression rarely occurs.
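
For the skeptical reader, the streakiness claim is easy to verify with a short simulation, sketched here in Python as a minimal illustration (my demonstration code, not from the workshop; it assumes only the standard library):

    import random

    def longest_run(flips):
        # Length of the longest streak of identical outcomes.
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    random.seed(1)  # fixed seed, for a reproducible illustration
    trials = 10_000
    average = sum(
        longest_run([random.choice("HT") for _ in range(100)])
        for _ in range(trials)
    ) / trials
    print(round(average, 1))  # roughly 7: long streaks are routine in random data

Across 10,000 simulated sequences of 100 fair coin flips, the longest run of identical outcomes averages about seven—longer than most people’s intuition predicts.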

Some surprising findings reveal things unimagined:

  • Astonishing insights—great lessons of psychological science—that are now accepted wisdom include
    • split-brain experiments: the differing functions of our two hemispheres.
    • sleep experiments: sleep stages and REM-related dreaming.
    • misinformation effect experiments: the malleability of memory.
  • We’ve been surprised to learn
    • what works as therapy (ECT, light therapy).
    • what doesn’t (Critical Incident Debriefing for trauma victims, D.A.R.E. drug abuse prevention, sexual reorientation therapies, permanent weight-loss programs).
  • We’ve been astounded at our dual-processing powers—our two-track (controlled vs. automatic) mind, as evident in phenomena such as
    • blindsight.
    • implicit memory.
    • implicit bias.
    • thinking without thinking (not-thinking => creativity).
  • We’ve been amazed at the robustness of
    • the testing effect (we retain information better after self-testing/rehearsing it).
    • the Dunning-Kruger effect (ignorance of one’s own incompetence).

The bottom line: Psychological science works! It affirms important, if unsurprising, truths. And it sometimes surprises us with findings that challenge our assumptions, and with discoveries that astonish us.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

“The greatest enemy of knowledge is not ignorance—it is the illusion of knowledge.”

This wisdom, often attributed to American historian Daniel Boorstin, suggests a sister aphorism: The great enemy of democracy is not ill will, but the illusion of understanding. That enemy is social and political opinion that, even if well-intentioned and sincerely believed, sprouts from self-confident misinformation.

Such is not the province of any one political perspective. Consider:

  • A CivicScience poll asked 3,624 Americans if schools should “teach Arabic numerals as part of their curriculum.” Fifty-six percent answered “No.” Among Republican respondents, 74 percent objected; among Democrats, the number was 40 percent. (Do the respondents advise, instead, teaching Roman numerals?)
  • CivicScience also asked people if schools should teach the “creation theory of Catholic priest Georges Lemaitre as part of their science curriculum.” Democrats overwhelmingly objected: 73 percent opposed such teaching (compared with 33 percent of Republicans) … of the Big Bang theory.

Such ill-informed opinions—illusions of understanding—are powered by what social psychologists know as the overconfidence phenomenon (a tendency to be more confident than correct) and the Dunning-Kruger effect (incompetence not recognizing itself). And, as I have previously noted, illusory understanding—and what it portends for our civic life—matters because our collective future matters. Consider further:

  • When—despite plummeting violent and property crime rates—7 in 10 adults believe, year after year, that there has been more crime than in the year before, then fear-mongering politicians may triumph.
  • When immigrants crossing the southern U.S. border are seen as oftentimes “vicious predators and bloodthirsty killers,” then—notwithstanding immigrants’ somewhat lower actual crime and incarceration rates compared with the native-born—we will call for shifting public resources to “build the wall.”
  • When statistically infrequent (but traumatizing) incidents of air crashes, domestic terrorism, and school shootings hijack our consciousness—thanks to our heuristic of judging risk by readily available images of horrific happenings—then we will disproportionately fear such things. Gallup reports that nearly half of Americans (38 percent of men and 58 percent of women) now are “worried” that they or a family member will be a mass shooting victim. Feeling such fear, we may allocate scarce public resources in less-than-optimal ways—as when we transform schools into fortresses filled with frightened children—while remaining unconcerned about the vastly greater dangers posed by car accidents, guns in the home, and future mass destruction from climate change. (It’s so difficult to feel empathy for the unseen future victims of grave dangers.)

Red or blue, we agree that our children’s and grandchildren’s future matters. The problem is that democracy requires an informed and thoughtful populace. Democracy’s skeptics argue that most people lack the motivation and ability to do the needed work—to absorb large amounts of information and then, with appropriate humility and openness, to sift the true from the false. Consider our collective ignorance on topics ranging from the share of the U.S. federal budget going to foreign aid (1 percent, not Americans’ average guess of 31 percent) to which party currently controls the U.S. House of Representatives (only 38 percent know).

Such ignorance needn’t reflect stupidity.  Perhaps you, too, have rationalized: If the odds of my vote affecting an election or referendum outcome are infinitesimal, then why invest time in becoming informed? Why not, instead, care for my family, pay the bills, manage my health, pursue relationships, and have fun? Or why not trust the simple answers offered by authoritarian leaders?

Ergo, the great enemy of an informed and prudent populace, and of a flourishing democracy, is misinformation that is sustained by an illusion of understanding. But there is good news: Education matters. Education helps us recognize how errors infuse our thinking. Education makes us less susceptible to conspiracy theories. Education, rightly done, draws us out of our tribal social media bubbles. And education teaches us to think critically—to ask questions with curiosity, to assess claims with evidence, and to be humble about our own understanding. Said differently, education increases our willingness to ask the two big critical thinking questions: What do you mean? and How do you know?

So three cheers for education. Education informs us. It teaches us how to think smarter. And as Aristotle long ago taught us, it supports civic virtues and human flourishing.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com.)

“Do something!” shouted a lone voice at Ohio’s governor during a post-massacre candlelight vigil in downtown Dayton. Others soon chimed in, and the words became a crowd chant that has since challenged Congress to, indeed, do something about the repeated mass shootings.

In response, politicians and pundits offered varied diagnoses and remedies. Some blamed mental illness or violent video gaming or White nationalist hate speech. Others noted that these factors do not set the United States apart from countries that also have mental illness, video game enthusiasts, and hate speech—yet have vastly fewer homicides and virtually no mass shootings. What distinguishes the United States is, simply, guns.

Despite broad and growing public support for strengthened background checks and assault weapon bans, America’s nearly 400 million guns are not disappearing soon. So what, realistically, is something effective we can do?

Might “red flag” gun laws, which aim to take guns away from dangerous people, be a remedy? If someone expresses suicidal or destructive fantasies, or is mentally ill, could we save lives by confiscating their weapons?

The idea of identifying at-risk individuals is not new. Former Speaker of the U.S. House Paul Ryan had the idea in 2015: “People with mental illness are getting guns and committing these mass shootings.” In the wake of the 2018 slaughter of 17 people at a Parkland, Florida high school, Florida’s Governor (now-Senator) Rick Scott went a step further, urging stronger rules to red-flag high-risk people: “I want to make it virtually impossible for anyone who has mental issues to use a gun. I want to make it virtually impossible for anyone who is a danger to themselves or others to use a gun.” President Donald Trump suggested opening more mental hospitals that could house would-be mass murderers: “When you have some person like this, you can bring them into a mental institution.” After the El Paso and Dayton massacres, he declared that mass killers are “mentally ill monsters.” At an August 15th New Hampshire rally he added, “These people are mentally ill. I think we have to start building institutions again.”

The general public has supported red-flagging. In a 2012 Gallup survey, 84 percent of Americans agreed that “increased government spending on mental health screening and treatment” would be a “somewhat” or “very” effective “approach to preventing mass shootings at schools.”

While we psychologists welcome the expressed high regard for our supposed powers of discernment, the hard reality is otherwise. Extremely rare events such as mass shootings are inherently difficult to predict, even by the best psychological science. One analysis reviewed 73 studies that attempted to predict violent or antisocial behavior. Its conclusion: Using psychology’s risk assessment tools “as sole determinants of detention, sentencing, and release is not supported by the current evidence.”

Moreover, among the millions of troubled people who could potentially murder or commit suicide, it is impossible to identify in advance the infinitesimal fraction who will do so. And it would surely be unfair to stigmatize all “mentally ill” people. Most mentally ill people do not commit violent acts, and most violent criminals are not mentally ill. Violent acts are better predicted by anger, alcohol use, previous violence, gun availability, and young-male demography. (The El Paso and Dayton shooters were 21- and 24-year-old males.) As the late psychologist David Lykken once observed, “We could avoid two-thirds of all crime simply by putting all able-bodied young men in cryogenic sleep from the age of 12 through 28.”

Suicide is likewise hard to predict. One research team summarized 50 years of research on suicide’s unpredictability: “The vast majority of people who possess a specific risk factor [for suicide] will never engage in suicidal behavior.” Moreover, our ability to predict suicide “has not improved across 50 years.”

Even given our inability to offer accurate predictions of who will commit murder or suicide, we do know some risk factors. As every psychology student knows, one of the best predictors of future behavior is past behavior: Prior violent acts increase the risk of future violent acts—and prior suicide attempts raise the risk of a future suicide. This was seemingly illustrated by the death of convicted pedophile financier Jeffrey Epstein after he was removed from suicide watch, a decision the New York Times reports would normally be made by the chief psychologist at a federal prison facility after “a face-to-face psychological evaluation.” Shortly after apparently being deemed not at risk, despite his prior attempt, Epstein reportedly died by hanging in his prison cell.

But even without knowing who will commit suicide, we can modify the environment to reduce its probability. For example, barriers that prevent jumping from bridges and buildings have reduced the likelihood of impulsive suicides. Reducing the number of in-home guns has also been effective. States with high gun ownership rates are states with high suicide rates, even after controlling for other factors such as poverty. After Missouri repealed its tough handgun law, its suicide rate went up 15 percent; when Connecticut enacted such a law, its suicide rate dropped 16 percent.

And we can reduce, even if we cannot predict, mass shootings. As my psychologist colleague Linda Woolf wrote after a 2018 massacre, and again after El Paso and Dayton, it is time “to focus on the evidence—mass shootings occur, and guns make these atrocities all too easy and frequent.” Our politicians, she adds, should initiate gun safety reforms including “a ban on assault weapons, ban on large-capacity magazines, universal background checks, stiffer licensing laws, red flag laws, and lifting of all Federal restrictions on gun violence research.” Although we cannot predict the next tragedy, we can act to reduce its likelihood.

(For David Myers’ other essays on psychological science and everyday life, visit TalkPsych.com. An earlier essay also reported some of the evidence on the unpredictability of mass shootings.)