David Myers

Killer Immigrants?

Posted by David Myers Feb 9, 2018

Credit President Trump with consistency in cultivating public fears of immigrants:

  • “When Mexico sends its people . . . they’re bringing drugs. They’re bringing crime. They’re rapists.” (2015)
  • A January 2018 DonaldJTrump.com ad offered images of an illegal-immigrant murderer while a narrator referred to “evil, illegal immigrants who commit violent crimes,” noting that “Democrats who stand in our way will be complicit in every murder committed by illegal immigrants.”
  • “If we don’t get rid of these loopholes where killers are allowed to come into our country and continue to kill … if we don’t change it, let’s have a shutdown,” Trump said two weeks later.

 

Horrific rare incidents feed the narrative, as in Trump’s oft-retold story of the Mexican national who killed a young woman in San Francisco (with a ricocheted bullet), or in his February 6th tweet about the unauthorized immigrant drunk driver who killed an Indianapolis Colts linebacker.

 

The effect of this rhetoric and these publicized incidents appears in a recent Gallup survey: “On the issue of crime, Americans are five times more likely to say immigrants make the situation worse rather than better (45% to 9%, respectively).” Are they (and the President) right?

 

With 11 million unauthorized immigrants in the U.S., there will, of course, be ample opportunities to illustrate both immigrant horrors and heroism. Mindful that emotionally compelling stories can illustrate larger truths or deceive us, I searched for data that would answer this question: Are the President’s words illustrating a painful fact that justifies anti-immigrant views, or are they fear-mongering demagoguery? Here’s what I found (drawn from my contribution to an upcoming social psychology symposium on human gullibility):

Immigrants who are poor and less educated may fit our image of criminals. Yet studies find that, compared with native-born Americans, immigrants commit less violent crime (Butcher & Piehl, 2007; Riley, 2015). “Immigrants are less likely than the native-born to commit crimes,” confirms a National Academy of Sciences report (2015). After analyzing incarceration rates, the conservative Cato Institute (2017) confirmed that “immigrants are less likely to be incarcerated than natives relative to their shares of the population. Even illegal immigrants are less likely to be incarcerated than native-born Americans.” Noncitizens are reportedly 7 percent of the U.S. population and 6 percent of state and federal prisoners (KFF, 2018; Rizzo, 2018). Moreover, as the number of unauthorized immigrants has tripled since 1990 (Krogstad et al., 2017), the U.S. crime rate has plummeted.

 

Alas, when pitted against memorable anecdotes, data—which are merely the sum of all anecdotes—often lose. The availability heuristic—the human tendency to estimate how common an event is based on its mental availability, often influenced by its vividness—frequently hijacks human judgments. When data on immigrants’ share of arrests or the prison population are set against this 2.5-minute excerpt from the 2018 State of the Union address—highlighting the teary parents of two daughters reportedly murdered by a gang with illegal-immigrant members—which will people more likely remember?

 

Moreover, social psychologists Leaf Van Boven and Paul Slovic recently noted that the White House has also promoted its immigrants-as-killers thesis with misleading statistics. “Nearly 3 in 4 individuals convicted of terrorism-related charges are foreign-born,” the President tweeted last month. But that statement, and the administration report on which it was based, were “deeply misleading,” the psychologists explain, for two reasons. First, the report excluded domestic terrorists, whom Americans fear most, and was inflated with tenuously relevant terrorism-related activities such as perjury and petty theft.

 

Second, the scary-sounding statistic exploited people’s statistical illiteracy. Consider, they say, that 3 in 4 NBA players are African-American. Even so, “a vanishingly small” percentage of African-American men—less than 0.01 percent—play in the NBA. Thus, knowing only that a man is African-American, the chances are 99.99+ percent that he is not an NBA player. And knowing only that someone has been born outside the U.S., you can be similarly confident that the person is not a terrorist, or a killer.
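Van Boven and Slovic’s NBA example is a base-rate calculation, and it is easy to make concrete. In this sketch, only the “3 in 4” share comes from the text; the roster count and population figure are rough, hypothetical numbers chosen purely to illustrate Bayes’ rule:

```python
# Base-rate sketch: a high P(group | category) says little about
# P(category | group) when the category itself is rare.

def p_category_given_group(p_group_given_category, n_category, n_group):
    """Bayes' rule with counts: P(cat | group) = P(group | cat) * N(cat) / N(group)."""
    return p_group_given_category * n_category / n_group

p_black_given_nba = 0.75      # "3 in 4 NBA players are African-American" (from the post)
n_nba_players = 450           # hypothetical: roughly 30 teams x 15 roster spots
n_black_men = 20_000_000      # hypothetical, rough U.S. population figure

p_nba_given_black = p_category_given_group(p_black_given_nba, n_nba_players, n_black_men)
print(f"P(NBA player | African-American man) ≈ {p_nba_given_black:.6%}")
# A tiny fraction of one percent: knowing group membership alone, you can be
# 99.99+% confident the person is NOT in the rare category.
```

Whatever exact population figures you plug in, the conclusion is robust: the conditional probability in the politically useful direction is enormous, and in the direction that matters for judging an individual it is vanishingly small.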

 

Donald Trump’s fear-mongering and repeated misrepresentation of truth have me thinking again of George Orwell’s Nineteen Eighty-Four—a world where repeated falsehoods come to be believed: “Freedom is slavery.” “Ignorance is strength.” “War is peace.” I do wonder: When Trump proclaims these falsehoods, does he know they are untrue, or does he believe what he proclaims?

 

Pope Francis offered a possible answer, quoting Dostoevsky’s The Brothers Karamazov: “People who lie to themselves and listen to their own lie come to such a pass that they cannot distinguish the truth within them, or around them, and so lose all respect for themselves and for others.”

 

One of my goals in teaching the abnormal psychology chapter in the General Psychology course is to focus less on symptoms and etiology and more on what it is like to live with a psychological disorder. In 2016 I wrote about an assignment tied to the Stigma Fighters website.

 

In the February 2018 Monitor on Psychology I learned about the Schizophrenia Oral History Project.

 

This website “is an archive of life stories of persons with schizophrenia. Our narrators are women and men with schizophrenia who are sharing their lives in an effort to increase understanding and reduce stigma related to mental illness. Their stories reveal not only their struggles, but their remarkable courage and resilience, their hopes, dreams and talents, and their concern for others. In addition to documenting their histories, we are sharing their stories in presentations for professionals and the general public.”

 

At the time of this writing, 38 people have shared their stories.

 

As an assignment, ask students to read three stories and identify the similarities they find amongst the stories and the biggest differences. At the end of the assignment, ask students to reflect on what they learned from reading the stories. In class, give students an opportunity to speak with each other in small groups to share what they learned. Invite groups to report out to the class.

 

Pro-tip from my Highline College colleague Ruth Frickle: for the first time out with this assignment, go through the stories yourself to identify ten or so your students can choose from. That will make the number you need to be familiar with manageable. As you use this assignment from term to term, expand the number of stories as you feel comfortable.

Category labels matter. On the color spectrum, blue transitions gradually into green. But at some point we place a dividing line between the blue wavelengths (to the left) and the green ones (to the right). Once we do so, equally different wavelengths are harder to distinguish when they share the same label, such as blue, than when they fall on opposite sides of the blue-green naming line.

 

Similarly, two locations seem closer and more at risk for the same natural disaster if labeled as in the same state, rather than being equally distant across state lines. As Nathan DeWall and I write in Psychology, 12th Edition, “Tornadoes don’t know about state lines, but people do.”

 

This curious effect of labels on our thinking came to mind when reading about a new study showing that young children think that birthday parties cause aging. We adults don’t have this magical thinking. Moreover, we rationally know that on our birthdays we are only one day older than the day before . . . exactly as the previous day we were but one day older than the day before that.

 

Yet category labels matter. So, do our birthdays make us feel just a tad older?

 

Mine does. You too?

David Myers

Do Guns Protect Us?

Posted by David Myers Jan 17, 2018

Over lunch recently, a friend told me about taking a firearm course, which enabled her to carry a concealed pistol and thus, she presumed, to live at less risk of harm.

 

Isn’t it obvious: If more of us have guns, and if gun-toting criminals therefore fear our having a gun, then they will be less likely to rob or attack us? The NRA likes to remind us that “the only thing that stops a bad guy with a gun is a good guy with a gun.”

 

But two new Science reports indicate quite the opposite.

 

It’s no secret, as researchers Philip Cook and John Donohue report, that some 36,000 Americans a year die of gunshot wounds, that the U.S. gun suicide rate is eight times that of other high-income countries, and that the gun-murder rate is 25 times higher. More newsworthy is their report of an “emerging consensus,” based on sophisticated statistical analyses, that right-to-carry laws “substantially increase violent crime.”

 

For example, from 1977 to 2014, U.S. violent crime rates fell by 4.3 percent in states that adopted right-to-carry laws, but by a whopping 42.3 percent in states that did not adopt such laws. In tense situations, from car accidents to barroom and domestic arguments, guns enable deadly responses. Anecdotes of private guns deterring violence are offset by many more incidents of innocent deaths.

 

In the second report, economists Phillip Levine and Robin McKnight studied firearm sales after the Sandy Hook Elementary School massacre of 20 children and six adults. They associated a spike in firearm sales after the massacre with a corresponding spike in firearm deaths—in the very places where firearm sales had significantly increased. “We find that an additional 60 deaths overall, including 20 children, resulted from unintentional shootings in the aftermath of Sandy Hook.”

 

These new findings confirm what other evidence tells us: Guns purchased for safety make us less safe.

In the FAQ section of my syllabus, I write:

 

The general rule is for every hour you spend in class, you need to spend two hours outside of class. In a face-to-face class, you're in class about 5 hours per week*, so you should spend 10 hours outside of class working on this course. That's also why three 5-credit classes is considered full-time. If you are taking three 5-credit classes, you'd be spending about 45 hours a week, both in and out of class, working on those courses.**

 

As I was writing this post I wondered about the origin of this general rule. It turns out that it is U.S. federal law that applies to any institution that doles out federal financial aid. I have no idea how I’ve managed to make it this long in higher education without knowing that this “general rule” is federal law. In any case, I know now and have changed my syllabus. “The general rule (and the federal law minimum) says for every hour you spend in class…”

 

This is the federal government’s definition of a Carnegie unit, the credits that our courses are worth. Quoting “34 CFR 600.2 of the final regulations,” a Carnegie unit is:

 

An amount of work represented in intended learning outcomes and verified by evidence of student achievement that is an institutionally established equivalency that reasonably approximates not less than:

 

  1. One hour of classroom or direct faculty instruction and a minimum of two hours of out-of-class student work each week for approximately fifteen weeks for one semester or trimester hour of credit, or ten to twelve weeks for one quarter hour of credit, or the equivalent amount of work over a different amount of time; or
  2. At least an equivalent amount of work as required in paragraph (1) of this definition for other academic activities as established by the institution, including laboratory work, internships, practica, studio work, and other academic work leading to the award of credit hours.

 

These 15 pages from the U.S. Department of Education (published in 2011) will tell you all you could possibly want to know about Carnegie units. You’ll find the above definition on page 5.

 

That document also makes clear that each institution of higher learning can divide up those hours per week as they see fit. My 5-credit online class, for example, has 15 hours of work per week that is all outside of class time since the concept of “class time” does not exist in asynchronous courses.
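The federal minimum reduces to simple arithmetic: one in-class hour per credit, plus at least two out-of-class hours per credit, each week. A minimal sketch (the function name is mine; the 2:1 ratio is the regulatory floor quoted above):

```python
def weekly_hours(credits, out_of_class_ratio=2):
    """Weekly workload for a course: one in-class hour per credit,
    plus `out_of_class_ratio` out-of-class hours per credit (2 is the federal floor)."""
    in_class = credits
    out_of_class = credits * out_of_class_ratio
    return in_class, out_of_class, in_class + out_of_class

# A 5-credit face-to-face course: 5 hours in class, 10 outside, 15 total.
print(weekly_hours(5))  # (5, 10, 15)

# Three 5-credit courses come to 45 hours/week, hence "full-time."
print(3 * weekly_hours(5)[2])  # 45
```

For a fully online course, the same 15 weekly hours simply all land in the out-of-class column, as described above.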

 

Additionally, the 2 hours out for every hour in is the minimum standard. If colleges and universities so desire, they can set a higher standard, say, 3 hours outside for every hour in. Some colleges and universities make their expectations clear on their websites, such as Stanford, Northwestern, and Cal Poly -- all of which, incidentally, go with the minimum 2-to-1 ratio.

 

Does your class, each week, have 2 hours of work outside of class for every hour in? How do you know?

 

Elizabeth Barre and Justin Esarey at the Center for Teaching Excellence at Rice University created a pretty cool tool, the Course Workload Estimator. Put in what and how much your students should be reading, what and how much your students should be writing, how much time your students should be studying for exams, and how much time students should be spending on any other assignments, then look at the estimated workload – how much time students should be working on your course each week.

 

The website makes it clear that this is an estimator. You would be hard-pressed to find two students who have identical reading rates, identical writing rates, and identical ideas on how they should study. This is a good place for you to plug the study techniques from the LearningScientists.org website. "The course is designed with the expectation that you will spend <x number> of hours studying for each exam. The more efficient and effective your study techniques, the more you will learn in that finite number of hours. Also, put away your phone while you are studying. You lose a lot of precious study time when you are frequently switching between tasks, between your studying and your phone." [This blog post describes a classroom demonstration that illustrates how much time is lost when we switch back and forth between tasks if you'd like to hammer this point home.]

 

On the Course Workload Estimator website, scroll down for the rationale and research that went into creating this tool. Their research points out some gaping holes in our knowledge. If you're looking to start a new research program in the scholarship of teaching and learning arena, their lit review is worth checking out. 

 

Using the Course Workload Estimator, this is how my Intro Psych course breaks down.

 

I added up the total number of pages I’ve assigned students to read and divided that number by 11 for the number of weeks in the term. My students are reading a textbook with many new concepts. I want my students to not just survey or understand the material; I want them to engage with the material, “[r]eading while also working problems, drawing inferences, questioning, and evaluating.”

 

For writing assignments, I sampled what some of my better-performing students submitted last term, and on average, they wrote 27 pages of single-spaced text over the course of the term. I give my students application essay questions to answer, and that sounds the most like writing an “argument,” “[e]ssays that require critical engagement with content and detailed planning, but no outside research.” Students can revise whichever responses they would like, but it is not required ("minimal drafting"). Since students’ engagement while reading the text is part of their writing assignments, I manually adjusted the “hours per written page” to 2 hours. That’s about 30 minutes per essay question. Of course that’s an average. Questions that students find easier will require much less time than questions students find more difficult.  

 

I have a couple other assignments that should take about 2 hours total between them, so I entered 1 hour per assignment.

 

The estimated workload per the Course Workload Estimator? For my class that meets about 5 hours in class each week, students should dedicate about 10.69 hours to this course outside of class each week.

 

To be clearer with my students about my expectations, I just added the image below to my course FAQ along with this text:

About half of your out-of-class time will be spent reading the textbook and thinking about what you are reading (estimated at 5 pages per hour, that's about 5.5 hours per week). The other half of your out-of-class time will be spent responding to the write-to-learn assignment questions (estimated at about 30 minutes per question, that's about 5 hours per week) where each completed assignment, minus the text of the questions themselves, will average out to be approximately 3 single-spaced pages. 
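Plugged into plain arithmetic, that estimate works out roughly like this (the pages-per-week figure and the small-assignment hours are back-calculated from the numbers in this post, so treat them as approximations):

```python
weeks = 11                      # length of the term

# Reading: ~27.5 assigned pages/week at 5 pages/hour of engaged reading
reading_hours = 27.5 / 5        # 5.5 hours/week

# Writing: ~27 single-spaced pages over the term at 2 hours per written page
writing_hours = 27 * 2 / weeks  # about 4.9 hours/week

# Two other assignments, ~2 hours total across the term
other_hours = 2 / weeks         # about 0.2 hours/week

total = reading_hours + writing_hours + other_hours
print(f"{total:.1f} hours/week out of class")  # → 10.6 hours/week out of class
```

The small gap from the Course Workload Estimator’s 10.69 comes from rounding in these back-calculated inputs; the Estimator works from the raw page counts and rates.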

 

Course workload estimate from the Rice University Center for Teaching Excellence

Sue Frantz

The right to fail

Posted by Sue Frantz Dec 20, 2017

I was a squeaky-clean new professor at my very first tenure-track job when one of the counselors (academic and otherwise; it was a small college) gave a presentation to the faculty. She said, "Students have a right to fail." That got my attention. I thought, oh wet-behind-the-ears professor that I was, that whether students passed or failed – whether students learned – was all on me.

 

She went on to say that she’d have students come into her counseling office to say that they needed help because they weren’t passing a class. Her first question: “Are you reading the textbook?” If the answer was no, she told the student that there was nothing else to talk about. “Come back after you’ve done the assigned reading.”

 

To the faculty, she said that she knew that we all had kind hearts, that we wanted to give students a second (third, fourth, nth) chance. But – and this she emphasized – students have to meet you halfway. If students aren’t doing their part to learn, you cannot make them learn.

 

Put that way, it sounded like a relationship I didn’t want to be in. I’m doing everything to help my hypothetical spouse* succeed while my hypothetical spouse does nothing. “Honey, I filled out all of these job applications. You got three interviews all scheduled for Friday. Here is where you need to be and when.” Friday evening after I get home from work, “How did those job interviews go? What?! What do you mean you didn’t go? You spent the day playing Call of Duty?!  Okay, okay, here, let me call those places, and I’ll tell them something, like your grandmother died. I’m sure they’ll let me reschedule the interviews for you.”

 

At the root of “students have the right to fail” is that learning is the responsibility of students. I tell my students that. I say that I’d love to open up their skulls and dump the knowledge in. Or plug them into the Matrix** so they can download it all to their brains. But that’s not how learning works.

 

Faculty can help students learn, but it's the students who need to do the work, who need to do the learning. Sometimes, for a whole host of reasons, students choose not to do the work. And that's okay. It's the student's grade. They know what the assignments are and how much each is worth. Sometimes they're willing to spend the points on something else, including Call of Duty***.

 

My senior year of college, my friend and I skipped the final presentation we were supposed to do for a course. I did the math. Not doing the presentation would drop me from an A to a B. I was already accepted to grad school; it didn't matter if I had an A or a B next to that course on my transcript. I consciously chose not to do the presentation. It was a beautiful spring day; spending the afternoon at the lake was worth the points. None of that had anything to do with the professor; that was all on me.


And sometimes students just aren't ready to be in college, again, for a whole host of reasons. I’ve spent my entire career at community colleges. I have students who come up to me on the first day of class and say something like "I took your class 10 years ago (or “I went to <big state university>,” or “I went to <small liberal arts college>”), and I failed. I wasn't ready to be in college. Now I am." And they are; they are frequently some of my best students.

 

And not all students are striving for an A. Some are shooting for a 2.0 GPA, enough to keep them in college. It may be that they are good with “good enough.” It may be that they have family and job responsibilities that leave them little time for classes. They figure out what they need to do and what they don’t need to do in a course to get the grade that’s going to allow them to keep that 2.0 average. A colleague (an economist) and I were talking about this one day. We now think of that 2.0 as the academic poverty line. For people living at the economic poverty line, living paycheck to paycheck is fine as long as things are good. But when the car breaks down or a child gets sick, they find themselves in financial freefall. The same goes for students who are aiming for a C in a course. They skip some of the work early in the course, for whatever reason, planning to do enough of the later work to get that C. But then later in the course, when the car breaks down or a child gets sick, they don’t have the time to do that later work, and they find themselves in academic freefall. Now they’re asking for deadline extensions and extra credit; they are asking the professor to bail them out. (“How is it you have the time to do the assignment now, or the time to do extra work now, when you didn’t have time to do the assignment for the last 5 weeks?”)

 

My counselor colleague pointed out in her presentation that when students come to you begging you for that deadline extension or extra credit so they don’t lose their scholarship, or so they don't lose their financial aid, or so they don’t get kicked out of college, remember that yours is not the only course that brought the student to this point. Yours is just the last one. The student made a series of decisions that got them here. The result of those decisions, she said, is not your – the professor’s – responsibility. 


As a professor, I take my job very seriously. I have, with much thought and consideration, chosen the content of my courses, the structure of those courses, and the assignments I ask my students to do. The best any professor can do is present content worth knowing in a course structure that will help students who do the work to learn that content.

 

The one thing we cannot control is what the student brings to the table or even if the student comes to the table.

 

Students have a number of rights and responsibilities. Among those rights and responsibilities is the right to fail.

 

************

 

*The hypothetical relationship depicted bears no resemblance to my current or past relationships.

 

**The Matrix was released in 1999. It’s older than a lot of my students. Referencing it probably makes me seem as old as I am. If you can stick with references to the Star Wars universe, you’ll be on less-dated ground.

 

***I didn’t intend it, but Call of Duty is pretty ironic in this blog post.

I was recently at a conference where a symposium speaker had not prepared for her presentation. After introducing herself, she said, “I’m very sorry. I wasn’t able to prepare slides or a speech, so I’m just going to talk for a couple of minutes on <topic> and just leave it open to questions…” In this case, “a couple of minutes” was 40 seconds. I know, because the session was recorded and is available on YouTube. There were no questions. What was supposed to be a 15-minute talk was 30 seconds of introduction, 40 seconds of content, and 20 seconds of awkwardly waiting for questions. The kicker? This was a conference where speakers know they will be presenting 8 months ahead of time.

 

She – a graduate student – missed her deadline. Ten percent was not taken off her grade for being late. She was not allowed to present the following week for half points. She got a zero for her assignment – and her presentation is publicly available for all to see. In perpetuity. Whether you are presenting at a conference, presenting for a new client, or preparing a grant application, there are fixed deadlines. Those deadlines are not going to move no matter what is happening in your life.

 

What were the top 6 reasons the Collegiate Employment Research Institute at Michigan State University found for why new hires got fired (Gardner, 2007)?

  1. “Unethical behavior”
  2. “Lack of motivation/work ethic”
  3. “Inappropriate use of technology”
  4. “Failure to follow instructions”
  5. “Late for work”
  6. “Missing assignment deadlines”

 

A colleague was telling me that he’s struck by how some of his students have no resiliency. When one thing goes wrong, everything else in their lives must come to a stop until the crisis, however small, is resolved.

 

Crisis management is a skill. Powering through adversity is a skill. Project management is a skill. Priority-setting is a skill.

 

The American Psychological Association Guidelines for the Major 2.0 (American Psychological Association, 2016) lists a number of outcomes for Goal 5: Professional Development. At the foundational level, these outcomes include:

5.3a. “Follow instructions, including timely delivery, in response to project criteria”

5.3b. “Identify appropriate resources and constraints that may influence project completion”

5.3c. “Anticipate where potential problems can hinder successful project completion”

 

And at the baccalaureate level:

5.3B. “Effectively challenge constraints and expand resources to improve project completion”

5.3C. “Actively develop alternative strategies, including conflict management, to contend with potential problems”

 

If you are going to complete an assignment by the deadline, you need to line up your ducks. Aligning ducks is a skill. When we allow students to turn in late work, we are actively helping students NOT learn these skills.

 

If a student is unable to complete the work in the time allotted, then this is a valuable lesson for a student to learn. Could they have done things differently? For the next project, what will the student do that they didn’t do this time? If the student has just bitten off more than they can chew, this is also important for a student to learn. In the fall I have plenty of students with families who are working full time and trying to go to school full time. They struggle because there are not enough hours in the day to do what they need to do, and what they learn is that taking a few credits per term is plenty.

 

One final note about recently deceased grandparents. Some grandparents really are recently deceased. But some are not. Students learn early on that some excuses are more likely to lead to extensions and grace periods than others. Who wants to be the professor who tells a grieving student to suck it up and finish the paper? This puts professors in the awkward position of asking for proof, because who wants to be the professor who doesn’t believe the grieving student? I gave up on all of that a long time ago. I have nothing in my courses that is worth more than 10% of the overall grade, so missing one assignment will not completely tank a grade. And I drop the lowest score in each category of assignment. If a student has submitted all assignments to date, this one missing assignment will be the one that is dropped. No questions asked and no excuses needed. If a student has a whole string of crises during the course, their best option may be to withdraw and try it all again next term after things have settled down.

 

Regardless of whether you accept late work, be conscious about what you are trying to accomplish with your late-assignment policy.

 

In the end, the question shouldn’t be whether we accept late work. The question should be how we can best help our students learn the project management skills they need to complete work on time, so they don’t graduate and get hired only to be fired for reason #6.

 

References

 

American Psychological Association. (2016). Guidelines for the undergraduate psychology major: Version 2.0. American Psychologist, 71(2), 102–111. https://doi.org/10.1037/a0037562

 

Gardner, P. (2007). Moving Up or Moving Out of the Company? Factors that Influence the Promoting or Firing of New College Hires. CERI Research Brief 1, 1–7. Retrieved from http://ceri.msu.edu/publications/pdf/brief1-07.pdf 

In a post following the 2016 U.S. presidential election, I wondered about Donald Trump’s expressed attitudes toward immigrants, ethnic minorities, Muslims, and women:

 

Is he simply giving a voice to attitudes that are widely shared? . . . Or, with his public platform, will Trump instead model and serve to legitimize the demeaning attitudes, thus increasing their prevalence? Will he make bullying more widely tolerated? (Already, I have heard anecdotes of minority students experiencing harassment, but we need systematic evidence: Will intolerance measurably increase?)

 

As the Trump rhetoric has continued—in August with the White supremacist marchers in Virginia who, he said, included “very fine people,” and more recently with his retweeting of inflammatory anti-Muslim videos from a British ultranationalist—my question has lingered: Does exposure to prejudicial attitudes from high places legitimize such attitudes? Is the Southern Poverty Law Center’s new report on “Hate and Extremism in 2017” right to presume that White supremacy and hatemongering are “emboldened [and] energized in the Trump era”?

 

Pardon my hesitancy to assume, before having supporting data, that the answers to these questions are yes. Bullying and hate speech anecdotes are not new. Dylann Roof’s 2015 Charleston, South Carolina, massacre, for example, predated Donald Trump’s campaign travels and presidency.

 

But now we have two new reasons to believe that the answers are, indeed, yes.

 

First, new data from two large surveys and one experiment confirm our suspicions that hate speech is socially toxic. University of Warsaw psychologist Wiktor Soral and his colleagues report that “frequent and repetitive exposure to hate speech leads to desensitization to this form of verbal violence and subsequently to lower evaluations of the victims and greater distancing, thus increasing outgroup prejudice.”

 

Second, as if to illustrate Soral’s findings, the U.S. FBI’s annual hate crimes report confirms that, yes, 2016 saw a 5 percent increase in hate crime incidents. Despite increased overall American acceptance of LGBTQ people, they—as well as ethnic and religious minorities—experienced an uptick in hate crime incidents.

 

Likewise, the United Kingdom experienced a jump in reported hate crimes following the Brexit vote, fueled partly by anti-immigrant sentiments.

 

We should not be surprised. As social psychologists Chris Crandall and Mark White remind us: Presidents have the power to influence norms, and norms matter. “People express the prejudices that are socially acceptable and they hide the ones that are not.”

 

So, I now consider my question answered, and the answer defines a task for us educators and social psychologists as we work to encourage a more just and compassionate world.

Ever since I decided to pare down the personality section of my Intro Psych course to modern-day theories of personality and their accompanying research, I have been on the lookout for interesting content to add.

 

The journal Psychological Science recently published a fascinating – to me anyway – article on the relationship between one’s own personality and the ideal personality characteristics of particular jobs and the impact that relationship has on income (Denissen et al., 2017).

 

Jaap Denissen and his colleagues used Big Five trait data from 8,458 individuals who all had full-time work for the previous year. For each job held by the participants, occupation experts identified the ideal Big Five traits a person in that job should have. Take a look at the ratings for each job, available through the Open Science Framework (OSF).

Before sharing these data with your students, now would be a good time to remind them that “psychology doesn’t deal in certainties; it deals in probabilities.” Your students’ personality traits will not definitively determine their future income, but if we know their traits and the jobs they may hold, we can estimate the probability of their earning a certain level of income.

 

After covering the Big Five, can your students assign the same traits to jobs as this study's experts? 

 

For each trait below, one job is high on the trait and the other is low. Which is which? Answers are at the bottom of the post.

 

Extraversion:

  1. Actor
  2. Bookkeeper

Agreeableness:

  1. Prison guard
  2. Religious professional

 

Conscientiousness:

  1. Financial manager
  2. Decorator

Emotional stability:

  1. Firefighter
  2. Embroiderer

Openness:

  1. Farm hand
  2. Actor


Curious to know the ratings the experts assigned for professors in higher education? All ratings are on a 7-point scale; higher numbers mean more of the trait is expected by the job.

Extraversion: 5.7

Agreeableness: 4.5

Conscientiousness: 5.7

Emotional stability: 5.8

Openness: 4.7

 

Now, that’s all really interesting, right? But here’s where it gets downright fascinating.

 

Looking just at the extraversion response surface analysis (RSA) below, people who were high in extraversion (“actual personality”) and were in a high extraversion job (“demanded personality”) had the highest income (vertical axis; green is higher income and orange is lower). Those who were in mismatched jobs (low extraversion person in a high extraversion job or vice versa) had lower income. And those low in extraversion in a low extraversion job also had lower incomes. In other words, those who are lowest in extraversion will have the lowest incomes as compared to their fellow moderate and high extraverts, regardless of the amount of extraversion demanded by the job. (For more on this topic, see Susan Cain’s book Quiet.)
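One way to convey the shape of that surface to students without the figure is a toy function. Here is a minimal sketch; the function form and every coefficient below are my own invention for illustration, not Denissen et al.’s estimates:

```python
# Toy response surface: income as a function of a person's actual extraversion
# and the job's demanded extraversion (both on a 1-7 scale).
# All coefficients are invented for illustration only; they are NOT the
# paper's estimated parameters.
def toy_income(actual, demanded):
    base = 30_000
    level_bonus = 4_000 * actual                          # higher extraversion helps overall
    mismatch_penalty = 2_000 * (actual - demanded) ** 2   # person-job mismatch hurts
    return base + level_bonus - mismatch_penalty

# A high-extraversion person in a high-extraversion job (6, 6) out-earns
# both kinds of mismatch, (2, 6) and (6, 2):
print(toy_income(6, 6), toy_income(2, 6), toy_income(6, 2))  # → 54000 6000 22000
```

Students can vary the two arguments and watch the matched high-high combination come out on top, mirroring the green peak in the published figure.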

 

[Figure reprinted with permission of the author. For this and the RSA figures for emotional stability, conscientiousness, and agreeableness, see the supplemental materials in OSF. For the RSA figure for openness, please see the original article, also available in OSF.]

 

Emotional stability shows essentially the same pattern. High emotional stability people earned the most money in high emotional stability jobs, e.g., firefighter. Low emotional stability people earned less money in high emotional stability jobs. Ask students to consider why this might be; invite students to share their thinking.

 

For conscientiousness, same thing, except that jobs that require high conscientiousness generally provide higher incomes. High conscientiousness people in high conscientiousness jobs made the most money. Low conscientiousness people in high conscientiousness jobs still made money, just not as much as their high conscientiousness counterparts. Who made the least money in the conscientiousness arena? High conscientiousness people in low conscientiousness jobs. Again, give your students a couple minutes to think about why that may be. For those high conscientiousness employees, perhaps “perfection is the enemy of the good.” In all fairness, though, there are no low conscientiousness jobs, just lower conscientiousness jobs. The lowest jobs came in at 5.17 (again, max score is 7).  

 

High openness people in high openness jobs, e.g., actor, had higher incomes than low openness people in high openness jobs. Again, ask students to consider why this may be.

 

That leaves agreeableness. Who made the least money in this trait? High agreeableness people in low agreeableness jobs, e.g., prison guard. Who made the most money in this trait? Low agreeableness people in moderately low agreeableness jobs, e.g., taxi driver. One last time, ask students to consider why this may be.

 

Alternatively, if you want to give students some practice in reading graphs, divide the class into small groups of 3 to 4 students each. Give each group a different trait RSA. Ask each group to briefly describe the graph, perhaps prompting with something like, “What is the relationship between a person’s personality trait and the trait demanded by the job in terms of the impact that relationship has on income?” Walk through the RSA for one trait first, and then distribute the other four traits to the groups.

 

References

 

Denissen, J. J. A., Bleidorn, W., Hennecke, M., Luhmann, M., Orth, U., Specht, J., & Zimmermann, J. (2017). Uncovering the power of personality to shape income. Psychological Science. Advance online publication. https://doi.org/10.1177/0956797617724435

 

Extraversion:

  1. Actor (high)
  2. Bookkeeper (low)

Agreeableness:

  1. Prison guard (low)
  2. Religious professional (high)

 

Conscientiousness:

  1. Financial manager (high)
  2. Decorator (low)

 

Emotional stability:

  1. Firefighter (high)
  2. Embroiderer (low)

 

Openness:

  1. Farm hand (low)
  2. Actor (high)

In my psychology texts, and in other writings (such as here for the faith community), I have explained the growing evidence that sexual orientation is a natural, enduring disposition (most clearly so for males). The evidence has included twin and family studies indicating that sexual orientation is influenced by genes—many genes having small effects. One recent genomic study, led by psychiatrist and behavior geneticist Alan Sanders, analyzed the genes of 409 pairs of gay brothers, and identified sexual orientation links with parts of two chromosomes.

 

Today, Nature will be releasing (through its Scientific Reports) a follow-up genome-wide association study by the Sanders team that compares 1,077 homosexual and 1,231 heterosexual men. They report genetic variants associated with sexual orientation on chromosomes 13 and 14, with the former implicating a “neurodevelopmental gene” mostly expressed in a brain region that has previously been associated with sexual orientation. On chromosome 14 they identified a gene variant known to influence thyroid functioning, which also has been associated with sexual orientation.

 

Although other factors, including prenatal hormonal influences, also help shape sexual orientation, Sanders et al. conclude that “The continued genetic study of male sexual orientation should help open a gateway to other studies focusing on genetic and environmental mechanisms of sexual orientation and development.” The science of sexual orientation (for females as well) marches on.

David Myers

The Pro-Truth Pledge

Posted by David Myers Expert Nov 29, 2017

In a year-ago post, I observed that “For us educators, few things are more disconcerting than the viral spread of misinformation. Across our varying political views, our shared mission is discerning and teaching truth, and enabling our students to be truth-discerning critical thinkers.”

 

Now some kindred-spirited behavioral scientists have responded to our post-truth culture by inviting public figures and private citizens to sign a pro-truth pledge. To a teaching psychologist, the pledge reads like a manifesto for critical thinking. Along with some higher-profile colleagues, including Jon Haidt and Steve Pinker, I’ve signed, by pledging my effort to:

Share truth

  • Verify: fact-check information to confirm it is true before accepting and sharing it
  • Balance: share the whole truth, even if some aspects do not support my opinion
  • Cite: share my sources so that others can verify my information
  • Clarify: distinguish between my opinion and the facts

Honor truth

  • Acknowledge: acknowledge when others share true information, even when we disagree otherwise
  • Reevaluate: reevaluate if my information is challenged, retract it if I cannot verify it
  • Defend: defend others when they come under attack for sharing true information, even when we disagree otherwise
  • Align: align my opinions and my actions with true information

Encourage truth

  • Fix: ask people to retract information that reliable sources have disproved even if they are my allies
  • Educate: compassionately inform those around me to stop using unreliable sources even if these sources support my opinion
  • Defer: recognize the opinions of experts as more likely to be accurate when the facts are disputed
  • Celebrate: celebrate those who retract incorrect statements and update their beliefs toward the truth

I recently finished Sam Kean’s (2012) The Violinist’s Thumb, a tour of the history, present, and future of DNA research. Kean writes, “Genes don’t deal in certainties; they deal in probabilities.” I love that – and I’m using it the first day of Intro Psych next term: “Psychology doesn’t deal in certainties; it deals in probabilities.”

 

I already talk about correlations as probabilities. The stronger the correlation, the higher the probability that if you know one variable, you can predict the other variable.
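For a classroom-ready demonstration of that idea, here is a minimal simulation sketch. The specific correlations (0.2 and 0.9) and the crude sign-matching “prediction” rule are my own choices for illustration, not from any cited study:

```python
# Illustrative only: how correlation strength affects predictive success.
# The 0.2/0.9 correlations and the sign-matching rule are invented for this demo.
import random

def simulate(r, n=100_000, seed=42):
    """Generate n (x, y) pairs correlated at ~r; return the proportion of cases
    where the sign of x correctly 'predicts' the sign of y."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(0, 1)
        noise = rng.gauss(0, 1)
        y = r * x + (1 - r**2) ** 0.5 * noise   # y correlates with x at ~r
        if (x >= 0) == (y >= 0):
            hits += 1
    return hits / n

weak = simulate(0.2)    # weak correlation: only slightly better than chance
strong = simulate(0.9)  # strong correlation: much better than chance
print(f"r = 0.2 -> sign predicted correctly {weak:.0%} of the time")
print(f"r = 0.9 -> sign predicted correctly {strong:.0%} of the time")
```

The point for students: even a strong correlation raises the probability of a correct prediction without ever guaranteeing one.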

 

In the learning chapter, it’s not unusual for a student to say, “I was spanked, and I turned out okay.” Now I can repeat, “Psychology doesn’t deal in certainties; it deals in probabilities.” When children are spanked, it increases the probability of future behavioral problems (Gershoff, Sattler, & Ansari, 2017). It is not a certainty.

 

Whenever aggression comes up as a topic, a student will say, “I play first-person-shooter games, and I’ve never killed anybody.” Again, “Psychology doesn’t deal in certainties; it deals in probabilities.” Playing violent video games increases the chances of being aggressive. Watching violent movies increases the chances of being aggressive. Listening to violent-themed music increases the chances of being aggressive. (List is not exhaustive.) The more of those factors that are present, the greater the probability of behaving aggressively (Anderson et al., 2003). It is not a certainty.

 

A student says, “I was deprived of oxygen when I was being born, and I haven’t developed schizophrenia” (McNeil, Cantor-Graae, & Ismail, 2000). (Okay, I have never had a student say this, but I wanted one more example.) Being deprived of oxygen at birth increases the probability of developing schizophrenia. It is not a certainty.

 

Any time a student reports an experience that does not match what most in a research study experienced, I can say “Like genetics, psychology doesn’t deal in certainties; it deals in probabilities.”

 

References

 

Anderson, C., Berkowitz, L., Donnerstein, E., Huesmann, L., Johnson, J., Linz, D., Malamuth, N., & Wartella, E. (2003). The influence of media violence on youth. Psychological Science in the Public Interest, 4(3), 81–110. https://doi.org/10.1111/j.1529-1006.2003.pspi_1433.x

 

Gershoff, E. T., Sattler, K. M. P., & Ansari, A. (2017). Strengthening causal estimates for links between spanking and children’s externalizing behavior problems. Psychological Science. Advance online publication. https://doi.org/10.1177/0956797617729816

 

Kean, S. (2012). The violinist’s thumb. New York, NY: Little, Brown and Company.

 

McNeil, T. F., Cantor-Graae, E., & Ismail, B. (2000). Obstetric complications and congenital malformation in schizophrenia. Brain Research Reviews, 31, 166–178. https://doi.org/10.1016/S0165-0173(99)00034-X

One of psychology’s most reliable phenomena is “the overconfidence phenomenon”—the tendency, when making judgments and forecasts, to be more confident than correct. Stockbrokers market their advice regarding which stocks will likely rise while other stockbrokers give opposite advice (with a stock’s current price being the balance point between them). But in the long run, as economist Burton Malkiel has repeatedly demonstrated, essentially none of them beat the efficient marketplace.

 

Or consider psychologist Philip Tetlock’s collection of more than 27,000 expert predictions of world events, such as the future of South Africa or whether Quebec would separate from Canada. As Nathan DeWall and I explain in Psychology, 12th Edition,

His repeated finding: These predictions, which experts made with 80 percent confidence on average, were right less than 40 percent of the time. Nevertheless, even those who erred maintained their confidence by noting they were “almost right.” “The Québécois separatists almost won the secessionist referendum.”

 

My fellow Worth Publishers text author and Nobel laureate economist, Paul Krugman, has described similar overconfidence and reluctance to admit error among economists and politicians.

  • When Bill Clinton raised taxes on the rich, conservative politicians and economists predicted economic disaster—but the economy instead boomed, with 23 million jobs added during the Clinton years.
  • When Kansas politicians passed large tax cuts with the promise that growth would pay for them, the result was an unexpected state funding crisis.
  • When, in 2008, the Federal Reserve responded to the recession by cutting interest rates to zero, conservative economists and pundits published an open letter warning of soaring inflation to come. But inflation never soared.

 

When none of the predicted economic outcomes happened, did the forecasters own their error and change their thinking? Contacted by Bloomberg, not one of the inflation open letter signatories acknowledged error. Instead, they offered (in Krugman’s words) “some reason wrong was right … and never, ever, an admission that maybe something was wrong with [their] initial analysis.”

 

Overconfidence—the human bias that our own Nobel laureate, Daniel Kahneman, would most like to eliminate—feeds another potent phenomenon, “belief perseverance”—our tendency to cling to our beliefs in the face of contrary evidence. The more we explain why our beliefs might be true, the more they persist. Thus we welcome belief-supportive evidence—about climate change, same-sex marriage, or the effects of today’s proposed tax cuts—while discounting contrary evidence. To believe is to see.

 

Perhaps, then, we should all aspire to a greater spirit of humility. Such humility recognizes, as I have written elsewhere, that

We are finite and fallible. We have dignity but not deity. [Thus] we should hold our own untested beliefs tentatively, assess others’ ideas with open-minded skepticism, and when appropriate, use observation and experimentation to winnow error from truth.

“It’s official: Dog owners live longer, healthier lives” reads the headline on Time’s website. The refreshing change is that the headline – and the article – carefully explain that the data are correlational, not causal (MacMillan, 2017). When this article appeared in my local paper, The Seattle Times, it came with a sub-headline: “It may be correlation, not causation, but the risk of death was about 33 percent lower for dog-owners than non-owners, a study found.” You won’t be surprised to hear that the journalist, Amanda MacMillan, has a BA in journalism/science writing with minors in “science, technology, and society” and physics (shout-out to Lehigh University, her alma mater).

 

Researchers looked at national records for 3.4 million people in Sweden over a 12-year span. Those records included whether the people registered a dog and their health reports. “Dog ownership registries are mandatory in Sweden, and every visit to a hospital is recorded in a national database.”

 

Researchers learned that “[p]eople who lived alone with a dog had a 33% reduced risk of death [over that 12-year span], and an 11% reduced risk of cardiovascular disease, than people who lived alone without a dog.” The findings were less pronounced for people who lived with other people.

 

I’m going to put this study into my correlation lecture. After sharing these results, I’ll ask students to work in pairs to generate possible reasons for these relationships and then share their ideas with the class. This is a nice opportunity to show that while correlations do not tell us about cause and effect, they provide a goldmine of hypotheses for future research.

 

One possibility, the article points out, is that owning a dog causes better health in the owner: owning a dog causes people to be more active (“gotta walk the dog”). Or dogs may share their microbiome with their owners, giving their human immune systems a boost – as I reflect on how I woke this morning with my dachshund standing on my head and licking my face. Or by walking our dogs, we meet people, extending our social network; social networks are also correlated with better health.

 

Another possibility, the article also points out, is that more active (read “healthy”) people are more likely to get a dog.

And then there are the third variables. For example, “[o]ther studies have suggested that growing up with a dog in the house can decrease allergies and asthma in children.” It may be that having a dog while growing up made people more likely to get a dog as adults, and that exposure to dogs as children gave them stronger adult immune systems.

 

As instructors of psychological science, let’s continue to help our students understand what research does and does not tell us, so that when they get jobs as journalists, they can accurately interpret research findings for the general public as this journalist has done.

 

References

 

MacMillan, A. (2017, November). It’s official: Dog owners live longer, healthier lives. Time. Retrieved from http://time.com/5028171/health-benefits-owning-dog/

Here are some survey data your students may find interesting. This will be most compelling for your psychology majors.

The American Psychological Association (APA) mined the data from the 2015 National Survey of College Graduates (NSCG) and learned some interesting things about psychology’s bachelor’s degree recipients (American Psychological Association, 2017). The NSCG estimates that there are about 58 million people in the United States with a bachelor’s degree; that probably includes you. The NSCG sampled 135,000 of them in 2015 (National Science Foundation, 2017).   

After covering survey research in, say, Research Methods, ask students to work in groups to take a few minutes and think of what variables they would include in such a survey and why. Ask each group, in turn, for one variable that no other group has yet mentioned. Write the variables on the board (or computer screen) as groups report out. Keep rotating through the groups until all variables have been reported or as time allows. Next, share with students this list of key variables (scroll to 2.h.) included in the NSCG survey.

 

Ask students if there are any groups they would exclude from the survey. The NSCG excludes people who are institutionalized, who live outside the U.S., and who are 76 years old or older (scroll to 3.b).

Ask students what kind of sampling design they would use. The NSCG used stratified sampling on “demographic group” (with an oversample of young graduates), “highest degree type,” and “occupation/bachelor’s degree field” (scroll to 3.c.).
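For students who want to see what stratified sampling with oversampling looks like mechanically, here is a minimal sketch. The strata, population sizes, and sampling fractions below are hypothetical choices of mine, not the NSCG’s actual design:

```python
# Minimal stratified-sampling sketch. The strata and the 30%/10% sampling
# fractions are hypothetical, chosen only to illustrate oversampling one stratum.
import random

def stratified_sample(population, stratum_of, fractions, seed=1):
    """Keep each person with the sampling probability assigned to their stratum."""
    rng = random.Random(seed)
    return [p for p in population if rng.random() < fractions[stratum_of(p)]]

# A toy population: one third "young" graduates, two thirds "older" ones.
people = [{"age_group": "young" if i % 3 == 0 else "older"}
          for i in range(30_000)]

# Oversample young graduates at 30%, versus 10% for everyone else.
sample = stratified_sample(people, lambda p: p["age_group"],
                           {"young": 0.30, "older": 0.10})
```

In the resulting sample, young graduates appear at roughly three times the rate of older ones, which is the whole point of oversampling a stratum you care about estimating precisely.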

 

Researchers started with a web survey. Those who didn’t respond were sent a survey in the mail, and those who still didn’t respond received a phone call for “computer-assisted telephone interviewing” (scroll to 4.a.).

 

What did APA find in that 2015 survey data about those of us with bachelor’s degrees in psychology (American Psychological Association, 2017)?

  • 4 million people in the U.S. have at least a bachelor’s degree in psychology
  • 2% got a master’s degree in psychology
  • 8% got a master’s degree in psychology first and then went on to complete a doctorate/professional degree in psychology
  • 3% got a master’s degree in something else and then a doctorate/professional degree in psychology
  • 7% directly earned a doctorate/professional degree in psychology, bypassing the master’s degree

 

Adding up those numbers, that’s 13%. What about the other 87% of psychology bachelor’s degrees holders?

  • 30% earned a master’s or doctorate/professional degree in something other than psychology
  • 57% did not earn a graduate degree

 

The 30% who earned a graduate degree in something else are nice evidence that a psychology degree is a good all-purpose degree. The father of one of my students took his bachelor’s in psychology to law school. He is now a judge. I wish all judges had degrees in psychology!

 

The 57% who did not earn a graduate degree are undoubtedly putting their psychology degrees to good use, no matter what they are doing, although some of them may not be fully cognizant of what their education is doing for them now. Give students a copy of the American Psychological Association Guidelines for the Major. Divide students into groups, and give each group two possible jobs a person might have. Drew Appleby’s list is a nice one to choose from. Ask each group to put a checkmark on their copy of the Guidelines next to each knowledge area or skill (outcome) that would be useful in their assigned jobs. Then, for each outcome, have one person in each group raise a hand if the group thought the outcome was important for one of their jobs, and two people raise their hands if they thought it was important for both jobs. Tally the number of hands for each outcome. Give students an opportunity to share why they thought particular knowledge and skills are important and how the psychology major is helping them achieve them. For any holes in your students’ observations, let them know where in the curriculum students gain that knowledge and those skills.