The Psychology Community

In my psychology texts, and in other writings (such as here for the faith community), I have explained the growing evidence that sexual orientation is a natural, enduring disposition (most clearly so for males). The evidence has included twin and family studies indicating that sexual orientation is influenced by genes—many genes having small effects. One recent genomic study, led by psychiatrist and behavior geneticist Alan Sanders, analyzed the genes of 409 pairs of gay brothers, and identified sexual orientation links with parts of two chromosomes.

 

Today, Nature’s Scientific Reports is releasing a follow-up genome-wide association study by the Sanders team that compares 1,077 homosexual and 1,231 heterosexual men. They report genetic variants associated with sexual orientation on chromosomes 13 and 14, with the former implicating a “neurodevelopmental gene” mostly expressed in a brain region that has previously been associated with sexual orientation. On chromosome 14 they identified a gene variant known to influence thyroid functioning, which also has been associated with sexual orientation.

 

Although other factors, including prenatal hormonal influences, also help shape sexual orientation, Sanders et al. conclude that “The continued genetic study of male sexual orientation should help open a gateway to other studies focusing on genetic and environmental mechanisms of sexual orientation and development.” The science of sexual orientation (for females as well) marches on.

David Myers

The Pro-Truth Pledge

Posted by David Myers, Nov 29, 2017

In a year-ago post, I observed that “For us educators, few things are more disconcerting than the viral spread of misinformation. Across our varying political views, our shared mission is discerning and teaching truth, and enabling our students to be truth-discerning critical thinkers.”

 

Now some kindred-spirited behavioral scientists have responded to our post-truth culture by inviting public figures and private citizens to sign a pro-truth pledge. To a teaching psychologist, the pledge reads like a manifesto for critical thinking. Along with some higher-profile colleagues, including Jon Haidt and Steve Pinker, I’ve signed, pledging my effort to:

Share truth

  • Verify: fact-check information to confirm it is true before accepting and sharing it
  • Balance: share the whole truth, even if some aspects do not support my opinion
  • Cite: share my sources so that others can verify my information
  • Clarify: distinguish between my opinion and the facts

Honor truth

  • Acknowledge: acknowledge when others share true information, even when we disagree otherwise
  • Reevaluate: reevaluate if my information is challenged, retract it if I cannot verify it
  • Defend: defend others when they come under attack for sharing true information, even when we disagree otherwise
  • Align: align my opinions and my actions with true information

Encourage truth

  • Fix: ask people to retract information that reliable sources have disproved even if they are my allies
  • Educate: compassionately inform those around me to stop using unreliable sources even if these sources support my opinion
  • Defer: recognize the opinions of experts as more likely to be accurate when the facts are disputed
  • Celebrate: celebrate those who retract incorrect statements and update their beliefs toward the truth

I recently finished Sam Kean’s (2012) The Violinist’s Thumb, a book about the history, the present, and the future of DNA research. Kean writes, “Genes don’t deal in certainties; they deal in probabilities.” I love that – and I’m using it on the first day of Intro Psych next term: “Psychology doesn’t deal in certainties; it deals in probabilities.”

 

I already talk about correlations as probabilities. The stronger the correlation, the more accurately knowing one variable lets you predict the other.
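That point is easy to make concrete. Below is a minimal sketch, using entirely made-up data (the variable names and noise levels are my inventions), of how the same underlying relationship yields a stronger or weaker correlation depending on how much noise surrounds it:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Population Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(1)
study_hours = [i / 10 for i in range(100)]  # 0.0 to 9.9 hours

# Same underlying relationship, different amounts of noise.
exam_tight = [x + random.gauss(0, 0.5) for x in study_hours]
exam_noisy = [x + random.gauss(0, 5.0) for x in study_hours]

r_tight = pearson_r(study_hours, exam_tight)
r_noisy = pearson_r(study_hours, exam_noisy)
print(f"low-noise r = {r_tight:.2f}, high-noise r = {r_noisy:.2f}")
# The larger r is, the better one variable predicts the other.
```

With the low-noise data, knowing study hours pins down the exam score almost exactly; with the high-noise data, it only shifts the odds.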

 

In the learning chapter, it’s not unusual for a student to say, “I was spanked, and I turned out okay.” Now I can repeat, “Psychology doesn’t deal in certainties; it deals in probabilities.” When children are spanked, it increases the probability of future behavioral problems (Gershoff, Sattler, & Ansari, 2017). It is not a certainty.

 

Whenever aggression comes up as a topic, a student will say, “I play first-person-shooter games, and I’ve never killed anybody.” Again, “Psychology doesn’t deal in certainties; it deals in probabilities.” Playing violent video games increases the chances of being aggressive. Watching violent movies increases the chances of being aggressive. Listening to violent-themed music increases the chances of being aggressive. (The list is not exhaustive.) The more of those factors that are present, the greater the probability of behaving aggressively (Anderson et al., 2003). It is not a certainty.

 

A student says, “I was deprived of oxygen when I was being born, and I haven’t developed schizophrenia.” (Okay, I have never had a student say this, but I wanted one more example.) Being deprived of oxygen at birth increases the probability of developing schizophrenia (McNeil, Cantor-Graae, & Ismail, 2000). It is not a certainty.

 

Any time a student reports an experience that does not match what most in a research study experienced, I can say “Like genetics, psychology doesn’t deal in certainties; it deals in probabilities.”

 

References

 

Anderson, C. A., Berkowitz, L., Donnerstein, E., Huesmann, L. R., Johnson, J. D., Linz, D., Malamuth, N. M., & Wartella, E. (2003). The influence of media violence on youth. Psychological Science in the Public Interest, 4(3), 81–110. https://doi.org/10.1111/j.1529-1006.2003.pspi_1433.x

 

Gershoff, E. T., Sattler, K. M. P., & Ansari, A. (2017). Strengthening causal estimates for links between spanking and children’s externalizing behavior problems. Psychological Science. Advance online publication. https://doi.org/10.1177/0956797617729816

 

Kean, S. (2012). The Violinist’s Thumb. New York City: Little, Brown, and Company.

 

McNeil, T. F., Cantor-Graae, E., & Ismail, B. (2000). Obstetric complications and congenital malformation in schizophrenia. Brain Research Reviews, 31, 166–178. https://doi.org/10.1016/S0165-0173(99)00034-X

One of psychology’s most reliable phenomena is “the overconfidence phenomenon”—the tendency, when making judgments and forecasts, to be more confident than correct. Stockbrokers market their advice regarding which stocks will likely rise while other stockbrokers give opposite advice (with a stock’s current price being the balance point between them). But in the long run, as economist Burton Malkiel has repeatedly demonstrated, essentially none of them beats the efficient marketplace.

 

Or consider psychologist Philip Tetlock’s collection of more than 27,000 expert predictions of world events, such as the future of South Africa or whether Quebec would separate from Canada. As Nathan DeWall and I explain in Psychology, 12th Edition,

His repeated finding: These predictions, which experts made with 80 percent confidence on average, were right less than 40 percent of the time. Nevertheless, even those who erred maintained their confidence by noting they were “almost right.” “The Québécois separatists almost won the secessionist referendum.”

 

My fellow Worth Publishers text author and Nobel laureate economist, Paul Krugman, has described similar overconfidence and reluctance to admit error among economists and politicians.

  • When Bill Clinton raised taxes on the rich, conservative politicians and economists predicted economic disaster—but the economy instead boomed, with 23 million jobs added during the Clinton years.
  • When Kansas politicians passed large tax cuts with the promise that growth would pay for them, the result was an unexpected state funding crisis.
  • When, in 2008, the Federal Reserve responded to the recession by cutting interest rates to zero, conservative economists and pundits published an open letter warning of soaring inflation to come. But it hasn’t.

 

When none of the predicted economic outcomes happened, did the forecasters own their error and change their thinking? Contacted by Bloomberg, not one of the inflation open letter signatories acknowledged error. Instead, they offered (in Krugman’s words) “some reason wrong was right … and never, ever, an admission that maybe something was wrong with [their] initial analysis.”

 

Overconfidence—the human bias that our own Nobel laureate, Daniel Kahneman, would most like to eliminate—feeds another potent phenomenon, “belief perseverance”—our tendency to cling to our beliefs in the face of contrary evidence. The more we explain why our beliefs might be true, the more they persist. Thus we welcome belief-supportive evidence—about climate change, same-sex marriage, or the effects of today’s proposed tax cuts—while discounting contrary evidence. To believe is to see.

 

Perhaps, then, we should all aspire to a greater spirit of humility. Such humility recognizes, as I have written elsewhere, that

We are finite and fallible. We have dignity but not deity. [Thus] we should hold our own untested beliefs tentatively, assess others’ ideas with open-minded skepticism, and when appropriate, use observation and experimentation to winnow error from truth.

“It’s official: Dog owners live longer, healthier lives” reads the headline on Time’s website. The refreshing change is that the headline – and the article – carefully explain that the data are correlational, not causal (MacMillan, 2017). When this article appeared in my local paper, The Seattle Times, it came with a sub-headline: “It may be correlation, not causation, but the risk of death was about 33 percent lower for dog-owners than non-owners, a study found.” You won’t be surprised to hear that the journalist, Amanda MacMillan, has a BA in journalism/science writing with minors in “science, technology, and society” and physics (shout-out to Lehigh University, her alma mater).

 

Researchers looked at national records for 3.4 million people in Sweden over a 12-year span. Those records included whether the people registered a dog and their health reports. “Dog ownership registries are mandatory in Sweden, and every visit to a hospital is recorded in a national database.”

 

Researchers learned that “[p]eople who lived alone with a dog had a 33% reduced risk of death [over that 12-year span], and an 11% reduced risk of cardiovascular disease, than people who lived alone without a dog.” The findings were less pronounced for people who lived with other people.

 

I’m going to put this study into my correlation lecture. After sharing these results, I’ll ask students to work in pairs to generate possible reasons for these relationships and then share their ideas with the class. This is a nice opportunity to show that while correlations do not tell us about cause and effect, they provide a goldmine of hypotheses for future research.

 

One possibility, the article points out, is that owning a dog causes better health in the owner: owning a dog causes people to be more active (“gotta walk the dog”). Or dogs may share their microbiome with their owners, giving their human immune systems a boost – as I reflect on how I woke this morning with my dachshund standing on my head and licking my face. Or by walking our dogs, we meet people, extending our social network; social networks are also correlated with better health.

 

Another possibility, the article also points out, is that more active (read “healthy”) people are more likely to get a dog.

And then there are the third variables. For example, “[o]ther studies have suggested that growing up with a dog in the house can decrease allergies and asthma in children.” It may be that having a dog while growing up made people more likely to get a dog as an adult, and that the exposure to dogs as children gave them a stronger adult immune system.

 

As instructors of psychological science, let’s continue to help our students understand what research does and does not tell us, so that when they get jobs as journalists, they can accurately interpret research findings for the general public as this journalist has done.

 

References

 

MacMillan, A. (2017, November). It’s official: Dog owners live longer, healthier lives. Retrieved from http://time.com/5028171/health-benefits-owning-dog/

Here are some survey data your students may find interesting. This will be most compelling for your psychology majors.

The American Psychological Association (APA) mined the data from the 2015 National Survey of College Graduates (NSCG) and learned some interesting things about psychology’s bachelor’s degree recipients (American Psychological Association, 2017). The NSCG estimates that there are about 58 million people in the United States with a bachelor’s degree; that probably includes you. The NSCG sampled 135,000 of them in 2015 (National Science Foundation, 2017).   

After covering survey research in, say, Research Methods, ask students to work in groups to take a few minutes and think of what variables they would include in such a survey and why. Ask each group, in turn, for one variable that no other group has yet mentioned. Write the variables on the board (or computer screen) as groups report out. Keep rotating through the groups until all variables have been reported or as time allows. Next, share with students this list of key variables (scroll to 2.h.) included in the NSCG survey.

 

Ask students if there are any groups they would exclude from the survey. The NSCG excludes people who are institutionalized, who live outside the U.S., and who are 76 years old or older (scroll to 3.b).

Ask students what kind of sampling design they would use. The NSCG used stratified sampling on “demographic group” (with “an oversample of young graduates”), “highest degree type,” and “occupation/bachelor’s degree field” (scroll to 3.c.).
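For students who want to see the mechanics, here is a minimal sketch of stratified sampling with an oversample. The toy population, stratum names, and sampling rates are my inventions for illustration, not the NSCG’s actual design:

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, base_rate, oversample=None):
    """Sample each stratum at base_rate, multiplied by any oversample factor."""
    oversample = oversample or {}
    strata = defaultdict(list)
    for person in population:
        strata[stratum_of(person)].append(person)
    sample = []
    for stratum, members in strata.items():
        rate = base_rate * oversample.get(stratum, 1.0)
        k = min(len(members), round(rate * len(members)))
        sample.extend(random.sample(members, k))  # without replacement
    return sample

random.seed(0)
population = ([{"age_group": "young"} for _ in range(1000)] +
              [{"age_group": "older"} for _ in range(1000)])

# Sample 5% of each stratum, but double the rate for young graduates.
sample = stratified_sample(population, lambda p: p["age_group"],
                           base_rate=0.05, oversample={"young": 2.0})
print(sum(p["age_group"] == "young" for p in sample))  # 100
print(sum(p["age_group"] == "older" for p in sample))  # 50
```

The payoff of stratifying is that every group is guaranteed representation at a known rate, so estimates for small strata don’t depend on the luck of a simple random draw.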

 

Researchers started with a web survey. Those who didn’t respond to that received a survey in the mail, and those who still didn’t respond got a phone call for “computer-assisted telephone interviewing” (scroll to 4.a.).

 

What did APA find in that 2015 survey data about those of us with bachelor’s degrees in psychology (American Psychological Association, 2017)?

  • 3.4 million people in the U.S. have at least a bachelor’s degree in psychology
  • 9.2% got a master’s degree in psychology
  • 0.8% got a master’s degree in psychology first and then went on to complete a doctorate/professional degree in psychology
  • 0.3% got a master’s degree in something else and then a doctorate/professional degree in psychology
  • 2.7% directly earned a doctorate/professional degree in psychology, bypassing the master’s degree

 

Adding up those numbers, that’s 13%. What about the other 87% of psychology bachelor’s degree holders?

  • 30% earned a master’s or doctorate/professional degree in something other than psychology
  • 57% did not earn a graduate degree

 

Those 30% who earned a graduate degree in something else are nice evidence that a psychology degree is a good all-purpose sort of degree. The father of one of my students took his bachelor’s in psychology to law school. He is now a judge. I wish all judges had degrees in psychology!

 

Those 57% who did not earn a graduate degree are undoubtedly putting their psychology degrees to good use, no matter what they are doing, although some of them may not be fully cognizant of what their education is doing for them now. Here is an activity that makes those benefits visible. Give students a copy of the American Psychological Association Guidelines for the Major. Divide students into groups, and give each group two possible jobs a person might have; Drew Appleby’s list is a nice one to choose from. Ask each group to put a checkmark next to each knowledge or skill (outcome) in the Guidelines that would be useful in their assigned jobs. Then, for each outcome, have one person in each group raise a hand if the group thought the outcome was important for one of their jobs, and two people raise their hands if it was important for both jobs. Tally the number of hands for each outcome. Give students an opportunity to share why they thought particular knowledge and skills (outcomes) were important and how the psychology major is helping them achieve them. For any holes in your students' observations, let them know where in the curriculum students gain that knowledge and those skills.

I love my job as a psychology textbook author—sharing my life-relevant science with millions of students worldwide. Every day I get to play with and organize ideas, make words march up a screen, and then sculpt those words with cadence and imagery that I hope will engage and give pleasure to our student readers.

 

But before playing with the words, the greater work is the reading—from several dozen psychological science and science news periodicals. Thanks to this continuing education, I am privileged to learn something new nearly every day.

 

Yesterday, for example, I harvested these gems from the American Journal of Psychiatry:

  • a review of “neurocognitive” investigations of transgender people,
  • a meta-analysis of the efficacy of psychodynamic therapy,
  • a meta-analysis of placebo-controlled drug trials for treating acute schizophrenia,
  • an experiment on bright-light therapy for treating bipolar depression, and
  • a study indicating the effectiveness of ketamine for reducing suicidal ideation.

 

The latest issue of Nature reported, from an analysis of 1.5 million medical research papers, that women authors are more likely to include sex and gender variables in their analyses. Another Nature study reported that brain imaging can help predict suicidal ideation.

 

Perhaps most interesting, in today’s climate of political hate speech, was a new Aggressive Behavior report of two large national U.S. surveys. The title says it all: “Exposure to hate speech increases prejudice through desensitization.”

 

With such information in hand, I then print and file each report in a cubbyhole system organized by our books’ chapter topics. When the time comes to make those words march up the screen for a new edition, most of the needed information will be readily at hand. And then I get to start all over again filling up those cubby holes. What a great job.

The accumulated new materials for one recent edition of our Psychology text.

I didn’t start covering hearing in my Intro Psych course until the earbud-style headphones became popular. When I heard music emanating from a student’s earbuds from the back of the room, I knew it was time for us to have a conversation.

 

In the cochlea, the stereocilia closest to the oval window are the ones responsible for hearing high-pitched sounds. A loud sound sends a tsunami rushing over those stereocilia, bending them farther than they are built to bend and causing permanent damage (Oghalai, 1997).

 

The Center for Hearing Loss Help has a nice image of a bundle of pristine stereocilia and a bundle of damaged cilia. The same page is an interesting read on diplacusis, a condition in which one ear hears a pitch just above or just below the pitch heard by the other ear (Center for Hearing Loss Help, 2015).

 

In class, after walking students through the structure and workings of the ear, I go to this webpage (Noise Addicts, n.d.) that has 3-second sound files of pitches ranging from 22 kHz down to 8 kHz. I start with the 22 kHz tone, which none of my students can hear, and then move to lower pitches one by one. I cannot hear them until I get down to about 14 kHz. Fifty years of exposure to sound, with the last 16 spent in a noisy urban environment – and more than one rock concert – have likely taken their toll. I have friends in their 70s who have spent their lives in a quiet town and have no problem hearing 17 kHz. Of course, exposure to loud sounds is not the only factor in hearing loss for high-pitched sounds, but it is a common one.
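If that webpage ever disappears, you can generate test tones of your own. Here is a minimal sketch using only Python’s standard library; the filenames and the list of frequencies are my choices, and note that a 44,100 Hz sample rate can represent pitches only up to about 22 kHz:

```python
import math
import struct
import wave

def write_tone(filename, freq_hz, seconds=3, rate=44100, amplitude=0.5):
    """Write a mono 16-bit WAV file containing a pure sine tone."""
    frames = bytearray()
    for i in range(int(seconds * rate)):
        sample = amplitude * math.sin(2 * math.pi * freq_hz * i / rate)
        frames += struct.pack("<h", int(sample * 32767))  # 16-bit little-endian
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# One file per test frequency, descending like the webpage's demo
for khz in (22, 20, 18, 16, 14, 12, 10, 8):
    write_tone(f"tone_{khz}khz.wav", khz * 1000)
```

Play the files through decent speakers or headphones; many laptop speakers cannot reproduce the highest pitches, which would make everyone “fail” the test.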

 

Some time ago, I had a student who knew that he had some hearing loss, but he had no idea of the extent of it. When I played the sounds in class, he was stunned to see students reacting to the high-pitched sounds that he couldn’t hear. The first frequency he heard was a mere 8 kHz. He immediately made an appointment with an audiologist. He was (just barely) young enough that he qualified for a special program that got him hearing aids for free. The first time he was in class after getting them, he told me that he was floored by how much he could hear – and how much he hadn’t been hearing.

 

Another student who spent a couple years working as a bouncer at a (very loud) club was 23 years old, and the first frequency he heard was 12 kHz.

 

In her book Grunt, Mary Roach writes that the problem with most hearing protection is that it not only protects against loud sounds but also makes it hard to hear softer sounds. This is especially problematic for combat soldiers: they need to protect their hearing in case of a sudden explosion or gunfire, but they also need to be able to hear what their fellow soldiers are saying. There are now ear cuffs that protect against loud noises while amplifying quieter sounds. In this 3-minute YouTube video, Roach describes the hearing problem and how these new ear cuffs work. A student of mine who is in the army said he got to try out the ear cuffs – although not in combat – and he was very impressed with how well they worked.

 

Knowing how their ears work can help students make informed decisions about how they would like to treat their ears. With that knowledge, students may make better decisions that will affect them for the rest of their lives.

 

References

 

Center for Hearing Loss Help. (2015). Diplacusis -- the strange world of people with double hearing. Retrieved from http://hearinglosshelp.com/blog/diplacusisthe-strange-world-of-people-with-double-hearing/ 

 

Noise Addicts. (n.d.). Hearing test -- can you hear this? Retrieved from http://www.noiseaddicts.com/2009/03/can-you-hear-this-hearing-test/ 

 

Oghalai, J. S. (1997). Hearing and hair cells. Retrieved November 4, 2017, from http://www.neurophys.wisc.edu/auditory/johc.html 

As if cell phone use in cars isn’t bad enough, car manufacturers are building distractions into our automobiles, which I affectionately call Built-in Automotive Driving Distraction Systems™.

 

Automakers now include more options to allow drivers to use social media, email and text. The technology is also becoming more complicated to use. Cars used to have a few buttons and knobs. Some vehicles now have as many as 50 buttons on the steering wheel and dashboard that are multi-functional. There are touch screens, voice commands, writing pads, heads-up displays on windshields and mirrors and 3-D computer-generated images (Lowy, 2017).

 

In an attempt to save lives, I have been hammering pretty hard in my Intro Psych course on our inability to multitask. While this topic comes up in greater detail when I cover consciousness, I also embed examples of attention research in my coverage of research methods.

 

Correlation example

After I introduce the concept of correlations, I give my students five correlations and ask them to identify each as positive, negative, or no correlation. One of those correlations comes from a 2009 Stanford study reported by NBC News: people who multitask the most are the worst at it (“memory, ability to switch from one task to another, and being able to focus on a task”) (“Multitaskers, pay attention -- if you can,” 2009).

 

Experiment example

In talking about experimental design, I discuss David Strayer’s driving simulation research at the University of Utah. His lab’s research is easy for students to understand and the results carry a punch. I give this description to my students and ask them to identify the independent variable and the dependent variables.

In an experiment, "[p]articipants drove in a simulator while either talking or not talking on a hands-free cell phone." Those who were talking on a cell phone made more driving errors, such as swerving off the road or into the wrong lane, running a stoplight or stop sign, or not stopping for a pedestrian in a crosswalk, than those who were not talking on a cell phone. Even more interestingly, those who were talking on a cell phone rated their driving in the simulator as safer than those who weren't talking on a cell phone did. In other words, those talking on the cell phone were less likely to be aware of the driving errors they were making (Sanbonmatsu, Strayer, Biondi, Behrends, & Moore, 2016).

 

Class demo

When Yana Weinstein of LearningScientists.org posted a link to a blog she wrote on a task switching demo (Weinstein, 2017) to the Society for the Teaching of Psychology Facebook page, I thought, “Now this is what my research methods lecture was missing!” I encourage you to read Weinstein’s original demo once you’re done reading mine.  

I randomly divided my class into two groups. To do that I used a random team generator for Excel, but use whatever system you’d like. Weinstein does this demo with a within-subjects design which, frankly, makes more sense than my between-subjects design, but in my defense I’m also using this demo to help students understand the value of random assignment.

 

One group of students recited numbers and letters sequentially (1 to 10 and then A to J). The other group recited them interleaved (1 A 2 B 3 C, etc.). In your instructions, be clear that students cannot write down the numbers/letters and just read them. That’s a different task!
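If you want a printed answer key that timers can use to check a reciter’s accuracy (the reciters themselves should not read from it; as noted above, that’s a different task), a few lines of Python will generate both sequences. This is my own convenience, not part of Weinstein’s demo:

```python
numbers = [str(n) for n in range(1, 11)]          # "1" .. "10"
letters = [chr(ord("A") + i) for i in range(10)]  # "A" .. "J"

sequential = numbers + letters
# zip pairs each number with a letter; the comprehension flattens the pairs
interleaved = [item for pair in zip(numbers, letters) for item in pair]

print("Sequential: ", " ".join(sequential))
print("Interleaved:", " ".join(interleaved))
# Interleaved: 1 A 2 B 3 C 4 D 5 E 6 F 7 G 8 H 9 I 10 J
```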

 

Students worked in small groups. While one student recited, another student timed them with a cellphone stopwatch app. (You don’t have to know anything about cellphone stopwatch apps. Your students can handle it.) I didn’t bother dividing students into groups by task. In one group, there might have been three students who recited sequentially and a fourth student who recited interleaved.

 

I asked students to write down their times, and then I came around to each group and asked for those times. I just wrote the times on a piece of paper, and displayed the results using a doc camera. Almost everyone in the sequential condition recited the numbers/letters in under 6 seconds. Almost everyone in the interleaved condition took over 13 seconds.

 

In addition to talking about the independent variable (and experimental and control conditions) and the dependent variable, we talked about the value of random assignment. I had no idea who could do these tasks quickly or slowly. If 20% of students could do these tasks quickly, then random assignment would likely create two groups with roughly the same percentage of fast-task participants in each. Is it possible that all of the fast-task participants ended up in the sequential-task condition? Yep. And that’s one reason replication is important.
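That intuition about random assignment is easy to simulate. Here is a minimal sketch with an invented class of 30 students, 20% of whom are “fast-task” participants; it estimates how often random assignment produces badly lopsided groups:

```python
import random

random.seed(2)
n_students, n_fast = 30, 6  # 20% fast-task participants
students = ["fast"] * n_fast + ["typical"] * (n_students - n_fast)

trials = 10_000
fast_counts = []
lopsided = 0  # splits with all six fast students in one group
for _ in range(trials):
    shuffled = students[:]
    random.shuffle(shuffled)          # random assignment
    group_a = shuffled[: n_students // 2]
    fa = group_a.count("fast")
    fast_counts.append(fa)
    if fa in (0, n_fast):
        lopsided += 1

print(sum(fast_counts) / trials)  # hovers near 3: half of the fast students
print(lopsided / trials)          # rare, a percent or two, but it happens
```

On average the fast students split evenly between conditions, yet a completely lopsided split does occur occasionally, which is exactly why replication matters.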

 

Oh. And when you’re studying or writing a paper, students, this is why you should keep your phone on silent and out of sight. If you keep looking at your phone for social media or text notifications, it’s going to take you a lot longer to finish your studying or finish writing your paper. Perhaps even twice as long.

 

And driving? As you switch back and forth from driving to phone (or from driving to Built-in Automotive Driving Distraction Systems™), it’s not going to take you twice as long to get to your destination. You’re traveling at the same speed, but you’re working with half the attention. That increases the chances that you will not get to your destination at all.

 

A lot of what we cover in Intro Psych is important to the quality of students’ lives. Helping students see our inability to multitask is important in helping our students – and the people near them when they drive – stay alive.

 

References

 

Lowy, J. (2017, October 5). Technology crammed into cars worsens driver distraction. The Seattle Times. Retrieved from https://www.seattletimes.com/nation-world/new-cars-increasingly-crammed-with-distracting-technology-2 

 

Multitaskers, pay attention -- if you can. (2009). Retrieved from http://www.nbcnews.com/id/32541721/ns/health-mental_health

 

Sanbonmatsu, D. M., Strayer, D. L., Biondi, F., Behrends, A. A., & Moore, S. M. (2016). Cell-phone use diminishes self-awareness of impaired driving. Psychonomic Bulletin & Review, 23(2), 617–623. https://doi.org/10.3758/s13423-015-0922-4

 

Weinstein, Y. (2017). The cost of task switching: A simple yet very powerful demonstration. Retrieved from http://www.learningscientists.org/blog/2017/7/28-1 

We humans have an overwhelming fear of death. That’s the core assumption of “terror management theory.” It presumes that, when confronted with reminders of our mortality, we display self-protective emotional and cognitive responses. Made to think about dying, we self-defensively cling tightly to our worldviews and prejudices.

 

On the assumption that dying is terrifying—that death is the great enemy to be avoided at all costs—medicine devotes enormous resources to avoiding death, even to extending life by inches. And should we be surprised? I love being alive and hope to have miles of purposeful life to go before I sleep.

 

So, do we have the worst of life yet to come? Are we right to view life’s end with despair?

 

Two psychological science literatures reassure us:

 

The first: The stability of well-being. Across the life span, people mostly report being satisfied and happy with their lives. Subjective well-being does not plummet in the post-65 years. In later life, stresses also become fewer and life becomes less of an emotional roller coaster.

 

The second: Human resilience. More than most people suppose, we humans adapt to change. Good events—even a lottery win—elate us for a time, but then we adapt and our normal mix of emotions returns. Bad events—even becoming paralyzed in an accident—devastate us, but only for a while. Both pleasures and tragedies have a surprisingly short half-life. Facing my increasing deafness, the reality of resilience is reassuring.

 

And now comes a third striking finding: Dying is less traumatic than people suppose. Amelia Goranson and her colleagues examined

  1. blog posts of terminally ill cancer and ALS patients, and
  2. last words of death row inmates before their execution.

Others, asked to simulate those posts and words, produced messages filled with despair, anger, and anxiety. More than expected—and increasingly as death approached—the actual words of the dying expressed social connection, love, meaning, and faith.

 

Goranson and her colleagues presume (though it remains to be shown) that the same acceptance and positivity will be exhibited by those dying at the more expected time on the social clock—very late in life, when people (despite stereotypes of grumpy old men) tend to focus on the positive.

 

Thus, conclude the researchers, “death is more positive than people expect: Meeting the grim reaper may not be as grim as it seems.”

As y’all know, females and males are mostly alike—in overall intelligence, in physiology, and in how we perceive, learn, and remember. All but one of our chromosomes are unisex. Yet gender differences in mating, relating, and suffering are what grab our attention. And none more than the amazingly widespread and reliably observed gender difference in vulnerability to depression.

 

In this new Psychological Bulletin meta-analysis, Rachel Salk, Janet Hyde, and Lyn Abramson digest studies of gender and depression involving nearly 2 million people in 90 countries. The overall finding—that women are nearly twice as likely as men to be depressed—is what textbooks have reported. What’s more noteworthy and newsworthy, in addition to the universality of women’s greater risk of depression, is the even larger risk for girls during adolescence. As their figure, below, shows, the gender difference in major depression begins early—by puberty—and peaks in early adolescence.

 

The take-home lesson: For many girls, being 13 to 15 years old can be a tough time of life.

As is plain to see, Americans are living in a politically polarized era. “Partisan animus is at an all-time high,” reports Stanford political scientist Shanto Iyengar. Nearly 6 in 10 Republicans and Democrats have “very unfavorable” opinions of the other party, and most engaged party adherents feel “angry” about the other party. “Partisans discriminate against opposing partisans, doing so to a degree that exceeds discrimination based on race,” conclude Iyengar and Sean Westwood from their study of “Fear and Loathing Across Party Lines.” If you are American, do you find yourself disdaining those with opposing political views more than those in any other social category (including race, gender, and sexual orientation)? Would you want your child to marry someone aligned with the other party?

 

Americans on both sides also tend to see the other side (compared with their own) as more extreme in its ideology. It’s hard not to feel that those in the other party are the more extreme and biased ones.

 

But are they? Multiple research teams—at Tilburg University, the University of Florida, the College of New Jersey, and the University of California, Irvine—have found similar bias and willingness to discriminate among both conservatives and liberals. At the latter university, a forthcoming meta-analysis of 41 studies by Peter Ditto and his colleagues “found clear evidence of partisan bias in both liberals and conservatives, and at virtually identical levels.” When evidence supports our views, we find it cogent; when the same evidence contradicts our views, we fault it.

 

We can see equal-opportunity bias in opinion polls. Last December, 67 percent of Trump supporters said that unemployment had increased during the progressive Obama years. (It actually declined from 8 to less than 5 percent.) And at the end of the conservative Reagan presidency, more than half of self-identified strong Democrats believed inflation had risen under Reagan. Only 8 percent thought it had substantially dropped—as it did, from 13 to 4 percent.

 

Peter Ditto’s conclusion: “Bias is bipartisan.” This humbling finding is a reminder to us all of how easy it is (paraphrasing Jesus) to “see the speck in your neighbor’s eye” while not noticing the sometimes bigger speck in our own.

In an earlier blog post, I reported on an analysis of 34,000+ Americans’ health interviews with the Centers for Disease Control and Prevention (CDC). To my astonishment, Megan Traffanstedt, Sheila Mehta, and Steven LoBello found no evidence that depression rises in wintertime, or that wintertime depression is greater in higher latitudes, in cloudy rather than sunny communities, or on cloudy days. Moreover, they reported, even the wintertime “dark period” in northern Norway and Iceland is unaccompanied by increased depression.

 

Given the effectiveness of light therapy and the acceptance of major depressive disorder “with seasonal pattern” (DSM-5), I suspect we have not heard the last word on this. Indeed, criticism (here and here) and rebuttals (here and here) have already appeared.

 

Reading Seth Stephens-Davidowitz’s wonderful new book on big data mining inspired me to wonder if Google depression-related searches increase during wintertime. (To replicate the CDC result, I focused on the United States, though further replications with Canada and the UK yielded the same results.)

 

First, I needed to confirm that Google Trends does reveal seasonally-related interests. Would searches for “basketball” surge in winter and peak during March Madness? Indeed, they do:

We know that Google searches also reveal seasonal trends in physical illnesses. And sure enough, “flu” searches increase during the winter months.

 

So, do searches for “depression” (mood-related) and “sad” similarly surge during wintertime? Nope: after a summer dip, they remain steady from mid-September through May:

 

Ditto for Google entries for “I am depressed” and “I am sad.”

 

 

My surprise at the disconfirmation of what I have taught—that wintertime depression is widespread—is like that experienced by my favorite detective, Agatha Christie’s Miss Marple: “It wasn’t what I expected. But facts are facts, and if one is proved to be wrong, one must just be humble about it and start again.”

 

[Note to teachers: you can generate these data in class, in real time, via Trends.Google.com.]
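For teachers who prefer an offline classroom illustration, here is a minimal Python sketch of the winter-versus-summer comparison one could run on a Google Trends CSV export. The monthly interest numbers below are invented for illustration (they are not real Trends data; a real analysis would read the 0–100 “interest” column from an export at Trends.Google.com).

```python
# A sketch of the seasonal comparison described above, using made-up
# monthly "search interest" values (0-100 scale, as in a Trends export).
from statistics import mean

interest = {  # hypothetical monthly U.S. search interest for "depression"
    "Jan": 84, "Feb": 85, "Mar": 83, "Apr": 84, "May": 82, "Jun": 74,
    "Jul": 70, "Aug": 72, "Sep": 81, "Oct": 83, "Nov": 84, "Dec": 82,
}

seasons = {
    "winter": ["Dec", "Jan", "Feb"],
    "summer": ["Jun", "Jul", "Aug"],
    "spring/fall": ["Mar", "Apr", "May", "Sep", "Oct", "Nov"],
}

# Average the monthly interest within each season.
means = {name: mean(interest[m] for m in months)
         for name, months in seasons.items()}

for name, m in means.items():
    print(f"{name:>11}: {m:.1f}")

# The telltale comparison: does winter exceed the spring/fall baseline?
winter_surge = means["winter"] - means["spring/fall"]
print(f"winter minus spring/fall: {winter_surge:+.1f}")
```

The invented numbers mimic the pattern the actual Trends figures showed: a clear summer dip, but essentially no wintertime surge above the rest of the year.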

When biking we hardly notice the wind at our back, until we change directions and try riding against its force. Likewise, we may hardly notice the cultural winds that carry us until we step outside our boundaries. That’s one reason I benefit from the privilege of spending time each year visiting other countries. With each visit I am reminded that cultural norms—from how we meet and greet on the street to how we eat (fork in left hand or right? chopsticks?) to how we weave the social safety net—vary from my place to other places.

 

I write from Scotland, where my wife and I have frequently returned since taking a long-ago sabbatical year here at the University of St. Andrews. The last several days provide two examples of things many Americans take for granted, without realizing how culturally American they are.

 

Example #1: American supermarkets now have “foreign” food sections, where people can buy their favorite international items. Here in St. Andrews the largest foreign food section is “American.” And what foreign foods might you expect to find here (foods that, during our long-ago sabbatical, were not available)? In this American section one can find peanut butter, pancake syrup, canned pumpkin, baking soda, popcorn, sugary cereals, Oreos, Pop Tarts, and Twinkies. American foods!

 

Example #2: The university’s library is a place of study for students from many countries, including cultures with squat toilets.  For me, the way to use a Western flush toilet seems obvious—it’s the way we do it (and I’m unbothered by what might seem gross to others—sitting on a toilet seat that has recently been sat on by others). But for some, a flush toilet needs explanation, just as I might need toileting instruction when visiting their cultures. Thus, this sign appears in the library bathrooms:   

 

Our culture’s widely accepted behaviors, ideas, attitudes, values, and traditions may seem so natural and right to us that we fail to notice them as cultural. Experiencing other cultures’ ways of acting and thinking helps us to see what’s distinctive about our own.

As Nathan DeWall and I explain in Psychology, 11th Edition, “Young infants lack object permanence—the awareness that objects continue to exist even when not perceived. By 8 months, infants begin exhibiting memory for things no longer seen.”

 

Given the early age at which infants display object permanence (by looking for a hidden toy after a several-second wait), do developing primates also display recall for objects no longer seen?

 

Research suggests that orangutans possess object permanence . . . a point illustrated in this hilarious 38-second YouTube video pointed out to me by a Facebook engineer who happens to be one of my former students (and also one of my children).