
The vast majority of students who take Intro Psych are not psychology majors. Our Intro Psych students will be working in medicine, engineering, hotels/restaurants/tourism, politics, business, etc. They are The Public.

 

The first six of “9 Tips for Communicating Science to People Who Are Not Scientists,” written by physical meteorologist Marshall Shepherd, apply to teaching Intro Psych as well. (Shout out to Molly Metz for posting this article to the Society for the Teaching of Psychology Facebook group.)

 

  1. “Know your audience.” Who are your Intro Psych students? What are they majoring in or thinking about majoring in? How many have children? How many have full-time jobs? Is this their first year in college? The more you know where your students are coming from, the better you can meet them where they are.
  2. “Don’t use jargon.” Or, for the Intro Psych class, don’t use jargon to explain jargon. When talking about conditioning, for example, it’s easy for instructors to toss around terms like unconditioned response and discriminative stimulus, but we need to remember that these terms are likely brand new to most of our students. Defining terms as we go can help bring students into our world: “The unconditioned response – that is, the unlearned response – in this example is…”
  3. “Get to the point.” Students don’t need the entire history of personality research to understand today’s trait theories. Traditionally, Intro Psych instructors talk about Pavlov’s dogs before launching into contemporary examples of classical conditioning. Do we have to talk about dog drool before talking about heroin overdoses? (See Siegel, 2005, for an overview of classical conditioning and drug use.) You may well have good reasons for covering the history behind a particular concept, but be conscious of what those reasons are. Don’t do it just because that’s how you’ve always done it or because that’s how you saw it done.
  4. “Use analogies and metaphors.” We know that people learn better when they can connect new concepts to what they already know or what they can visualize. Even better, in small groups, ask your students to create analogies or metaphors for a concept you just covered in class.
  5. “Three points.” My problem is that psychology is just so dang fascinating I want to tell students everything I know. Unfortunately, there is such a thing as too much information. My solution? A couple of times a week I post announcements to my students through our course management system with the subject line “Nifty things about…” whatever we recently covered. Those who want more information on that topic can get it; those who don’t can move on to something else.
  6. “You are the expert.” I was in my mid-20s when I first walked into a community college classroom as the instructor. I looked around the room at students twice my age and knew that the instructor-as-all-knowing-authority model wasn’t going to work. Instead, I acknowledged what I knew and what my students brought to the table: “I know the theory, and you have the life experiences. Let’s put them together and see what we get.” I still come from that perspective today, even though my students are no longer twice my age. Although, I confess, there was material back then that I thought I understood but clearly didn’t – oh, to have those students back again!

 

References

 

Shepherd, M. (2016, November 22). 9 tips for communicating science to people who are not scientists. Retrieved from https://www.forbes.com/sites/marshallshepherd/2016/11/22/9-tips-for-communicating-science-to-people-who-are-not-scientists/#6ef0a79b66ae

 

Siegel, S. (2005). Drug tolerance, drug addiction, and drug anticipation. Current Directions in Psychological Science, 14(6), 296-300. doi:10.1111/j.0963-7214.2005.00384.x

Last month (January 2017), the U.S. Department of Justice (DoJ) issued a memo to law enforcement and prosecutors. The subject of the memo was “Eyewitness Identification: Procedures for Conducting Photo Arrays.” The DoJ’s last document that dealt with photo arrays was released in 1999. The authors of this memo acknowledge that a lot of research on eyewitness identification has happened since then, and that it was time to incorporate that research into new guidelines.

 

The information provided in this document would make a nice addition to your coverage of memory in Intro Psych. It’s a wonderful example of how psychological research can be applied in real-world settings – in this case, one where people’s lives are at stake.

 

The DoJ recommends that the police officer who shows the eyewitness photographs of potential perpetrators be blind to who the suspect is. In other words, the officer showing the photographs has no idea which person his or her fellow officers suspect is the perpetrator. This is for the same reason researchers are kept blind to conditions – to avoid unintentionally cuing the eyewitness/research participant. A small police department might not have anyone available to show the photographs who doesn’t know who the suspect is. In that case, the DoJ recommends that the officer be “blinded,” meaning the officer doesn’t know which photograph the eyewitness is viewing at any given moment. Even better, the DoJ suggests having the photographs presented on a computer screen so no one else needs to be present.
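To make the logic of a computer-administered, blinded presentation concrete, here is a minimal sketch. It is purely illustrative – the DoJ memo does not specify any software, and the function names, response options, and confidence prompt below are my own assumptions:

```python
import random

def run_photo_array(photo_files, record_response):
    """Hypothetical computer-administered, blinded photo array.

    photo_files: paths to the suspect's photo and the filler photos, mixed together.
    record_response: a function that displays one photo to the eyewitness and
        returns (answer, confidence_statement) - e.g., ("yes", "I'm fairly sure").
    """
    order = list(photo_files)
    random.shuffle(order)  # no administrator ever sees or chooses this order
    responses = []
    for photo in order:
        answer, confidence = record_response(photo)
        responses.append({"photo": photo, "answer": answer, "confidence": confidence})
    return responses  # saved for later review, ideally alongside a video recording
```

Because the presentation order lives only inside the program, even an officer sitting in the room has no way to cue the witness toward a particular photograph.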

 

Another DoJ recommendation is that eyewitnesses, after identifying a photograph as that of the perpetrator, state how confident they are in their own words. “[N]ew research finds that a witness’s confidence at the time of an initial identification is a reliable indicator of accuracy.” Ask students, given what they now know about memory construction, what they would think if they were on a jury and the witness’s confidence was very low at the initial identification but very high on the witness stand during the trial.

 

Whatever procedures a law enforcement department uses, the DoJ recommends video recording the process. That’s the best evidence that the eyewitness’s memory was not inadvertently (or intentionally) contaminated.

 

If time allows, after discussing memory but before presenting the DoJ guidelines yourself, invite students to create their own. Working in small groups, have students draft recommendations to law enforcement for showing suspect photographs to eyewitnesses. What recommendations do they have for how the photos should be displayed and for what the officer showing the photographs should do or not do?

 

After discussion dies down, ask volunteers to share their groups’ recommendations, along with the rationales. Then highlight for students some of the DoJ recommendations.

My friend and psychology colleague, Sue Frantz, alerted me to the pride the University of Kansas athletic department took this week in setting a Guinness World Record—with a 130.4 decibel crowd roar during their men’s basketball team’s come-from-behind win over West Virginia.

 

That took my mind to my hometown Seattle Seahawks’ pride in having the loudest outdoor sports stadium, thanks to its “12th Man” crowd noise—which has hit a record 137.6 decibels . . . much louder than a jackhammer, noted hearing blogger Katherine Bouton.
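Because decibels are logarithmic, the gap between those two records is bigger than it looks. Here is a quick back-of-the-envelope calculation of my own, using nothing but the standard decibel formula and the figures quoted above:

```python
# Every 10 dB corresponds to a tenfold increase in sound intensity,
# so the intensity ratio is 10 ** (difference_in_dB / 10).
kansas_db = 130.4     # University of Kansas crowd roar
seahawks_db = 137.6   # Seahawks stadium record

ratio = 10 ** ((seahawks_db - kansas_db) / 10)
print(f"Seahawks crowd is about {ratio:.1f}x the sound intensity of the Kansas crowd")
# prints roughly 5.2x, even though the two decibel figures look close
```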

 

As I mentioned in an earlier blog post, with three hours of 100+ decibel game sound,

 

“many fans surely experience temporary tinnitus—ringing in the ears—afterwards . . . which is nature warning us that we have been baaad to our ears. Hair cells have been likened to carpet fibers. Leave furniture on them for a long time and they may never rebound. A rule of thumb: If we cannot talk over a prolonged noise, it is potentially harmful.”

 

Coincidentally, Sue Frantz’s Highline College is just 17 miles from the Seahawks’ stadium, where, she tells me, “my former postal carrier ruptured his eardrum. He said he felt the sound wave move from one end of the stadium to the other, and when it bounced back, he felt a sharp pain in his ear that faced that end of the stadium. His eardrum never recovered; his hearing loss was permanent.”

The hearing aid industry may welcome the future customers whose hearing decline is hastened by such toxic noise. But for the University of Kansas and my Seahawks, these disability-enhancing Guinness Records are matters for concern, not boasting.

For those gearing up to teach the social psych chapter in Intro, the news is rife with examples. You could talk about the Rattlers and the Eagles at Robbers Cave State Park in 1954. Or you could talk about Muslims, Jews, and Christians in 2017.

 

“Almost every day in New York last week there was an interfaith conference or prayer service – involving Christian groups as well as Muslims and Jews – devoted to the current crisis over predominantly Muslim immigrants and refugees” (Demick, 2017). It seems that U.S. President Donald Trump’s immigration policies have created a superordinate goal.

 

There also appears to be a shift in ingroup boundaries. Historically, there has been tension between “my” religious group and “your” religious group. But now the ingroup seems to be defined as “refugees” – both past and present.

 

“Formed in 1881 to resettle Jews fleeing pogroms in Europe, [the Hebrew Immigrant Aid Society] has in recent years devoted itself to helping non-Jewish refugees. In the last year, it helped resettle more than 4,000 in the United States, about half of them Muslim. [Rabbi Jennie] Rosenn said that 270 synagogues and thousands of congregants nationwide have volunteered their time to find housing and furniture for refugees, to teach them English and enroll their children in school” (Demick, 2017).

 

For those who were Holocaust refugees and their families, the rhetoric and the political action strike too close to home. The message is simple: We were refugees and you are refugees; we will take care of you.

 

Amazon has proposed its own ingroup reframing in this commercial, in which two religious leaders – and friends – discover they have the same problem.

 

 

References

 

Demick, B. (2017, February 5). How Trump's policies and rhetoric are forging alliances between U.S. Jews and Muslims. Retrieved from http://www.latimes.com/nation/la-na-jew-muslim-2017-story.html

Speaking to military personnel on February 6th, President Trump lamented that terrorist attacks are “not even being reported. And in many cases, the very, very dishonest press doesn’t want to report it.” The implication was that opposition to his seven-country immigration ban arises from our being insufficiently aware and fearful of the terrorism threat.

 

Or, we might ask, are we instead too afraid of terrorism? In 2015, and again in 2016, feared Islamic terrorists (none from the seven countries) shot and killed fewer Americans than did armed toddlers (see here and here). Homicides, suicides, and accidental shootings claim more than 30,000 American lives each year.

 

After vivid media portrayals of terrorist attacks in Paris and San Bernardino, 27 percent of Americans identified terrorism as their biggest worry. In two national surveys (here and here), terrorism topped the list of “most important” issues facing the country.

 

Ergo, does the evidence not compel us to conclude that we are, thanks to the hijacking of our emotions by vividly available images, too much afraid of terrorism . . . and too little afraid of much greater perils? And might we instead fault the media for leaving us too unafraid of the future’s great weapon of mass destruction—climate change?

Are some prominent voices today, as in George Orwell’s 1984, seeking to control us by manipulating our fears? To me, George Gerbner’s cautionary words to a 1981 congressional subcommittee seem prescient:

 

Fearful people are more dependent, more easily manipulated and controlled, more susceptible to deceptively simple, strong, tough measures and hard-line postures.

Are you interested in what the research says about student course evaluations, aka student evaluation of teaching (SET)? Jordan Troisi was, so he asked the members of the Society for the Teaching of Psychology-operated listserv PsychTeacher for articles on SET. He received plenty of responses. Here are the references sent to him, along with a few others I found along the way. I included links to the original article when available. This is by no means a comprehensive list, but it should be enough to get you started. 

 

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431-441. doi:10.1037//0022-3514.64.3.431

 

Bennett, S. K. (1982). Student perceptions of and expectations for male and female instructors: Evidence relating to the question of gender bias in teaching evaluation. Journal of Educational Psychology, 74(2), 170-179. doi:10.1037//0022-0663.74.2.170

 

Benton, S. L., & Cashin, W. E. (n.d.). Student ratings of teaching: A summary of research and literature. Retrieved February 4, 2017, from http://www.ideaedu.org/research-and-papers/idea-papers/50-student-ratings-teaching-summary-research-and-literature  

 

Boring, A. (2017). Gender biases in student evaluations of teaching. Journal of Public Economics, 145, 27-41. doi:10.1016/j.jpubeco.2016.11.006

 

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research. doi:10.14293/s2199-1006.1.sor-edu.aetbzc.v1

 

A January 25, 2016 NPR story (“Why female professors get lower ratings”) features this study.

 

Boysen, G. A. (2016). Using student evaluations to improve teaching: Evidence-based recommendations. Scholarship of Teaching and Learning in Psychology, 2(4), 273-284. doi:10.1037/stl0000069

 

Boysen, G. A., Richmond, A. S., & Gurung, R. A. (2015). Model teaching criteria for psychology: Initial documentation of teachers’ self-reported competency. Scholarship of Teaching and Learning in Psychology, 1(1), 48-59. doi:10.1037/stl0000023

 

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2013). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641-656. doi:10.1080/02602938.2013.860950

 

Carrell, S., & West, J. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118(3), 409-432. doi:10.3386/w14081

 

DeWitt, P. (2015, January 02). 10 seconds: The time it takes a student to size you up. Retrieved from http://blogs.edweek.org/edweek/finding_common_ground/2015/01/10_seconds_the_time_it_takes_a_student_to_size_you_up.html  

 

Eiszler, C. F. (2002). College students' evaluations of teaching and grade inflation. Research in Higher Education, 43(4), 483-501.

 

Gamliel, E., & Davidovitz, L. (2005). Online versus traditional teaching evaluation: Mode can matter. Assessment & Evaluation in Higher Education, 30(6), 581-592. doi:10.1080/02602930500260647

 

Kite, M. E. (2012). Effective evaluation of teaching: A guide for faculty and administrators. Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/evals2012/index.php

 

MacNell, L., Driscoll, A., & Hunt, A. N. (2014). What's in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303. doi:10.1007/s10755-014-9313-4

 

A North Carolina State University press release features this study.

 

A February 23, 2015 Inside Higher Ed blog (“Gender bias in student evaluations”) features this study.

 

Onwuegbuzie, A. J., Witcher, A. E., Collins, K. M., Filer, J. D., Wiedmaier, C. D., & Moore, C. W. (2007). Students' perceptions of characteristics of effective college teachers: A validity study of a teaching evaluation form using a mixed-methods analysis. American Educational Research Journal, 44(1), 113-160. doi:10.3102/0002831206298169

 

Ory, J. C. (2000). Teaching evaluation: Past, present, and future. In Evaluating Teaching in Higher Education: A Vision for the Future. San Francisco, CA: Jossey-Bass.

 

Pusateri, T. (2016, December 16). Student feedback on teaching: Why mean ratings may not tell the full story. Retrieved from http://cetl.kennesaw.edu/article/student-feedback-teaching-why-mean-ratings-may-not-tell-full-story

 

Richmond, A. S., Boysen, G. A., Gurung, R. A., Tazeau, Y. N., Meyers, S. A., & Sciutto, M. J. (2014). Aspirational model teaching criteria for psychology. Teaching of Psychology, 41(4), 281-295. doi:10.1177/0098628314549699

 

Sidanius, J., & Crane, M. (1989). Job evaluation and gender: The case of university faculty. Journal of Applied Social Psychology, 19(2), 174-197. doi:10.1111/j.1559-1816.1989.tb00051.x

 

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598-642.

 

Sproule, R. (2000). Student evaluation of teaching: Methodological critique. Education Policy Analysis Archives, 8(50). doi:10.14507/epaa.v8n50.2000

 

Sproule, R. (2002). The underdetermination of instructor performance by data from the student evaluation of teaching. Economics of Education Review, 21(3), 287-294. doi:10.1016/s0272-7757(01)00025-5

 

Sproule, R., & Valsan, C. (2009). The student evaluation of teaching: Its failure as a research program, and as an administrative guide. Economic Interferences, 11(25), 125-150. Retrieved from http://www.amfiteatrueconomic.ro/temp/Article_641.pdf

 

Stark, P. (2013, October 14). Do student evaluations measure teaching effectiveness? Retrieved from http://blogs.berkeley.edu/2013/10/14/do-student-evaluations-measure-teaching-effectiveness

 

Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research. doi:10.14293/s2199-1006.1.sor-edu.aofrqa.v1

 

A September 26, 2014 NPR story (“Student course evaluations get an ‘F’”) features this study.

 

Stroebe, W. (2016, July 17). Student evaluations of teaching: No measure for the TEF. Retrieved from https://www.timeshighereducation.com/comment/student-evaluations-teaching-no-measure-tef

 

Stroebe, W. (2016). Why good teaching evaluations may reward bad teaching: On grade inflation and other unintended consequences of student evaluations. Perspectives on Psychological Science, 11(6), 800-816. doi:10.1177/1745691616650284

 

Wolfer, T. A., & McNown Johnson, M. (2003). Re-evaluating student evaluation of teaching: The teaching evaluation form. Journal of Social Work Education, 39(1), 111-121.

Neurocore

Posted by David Myers, February 2, 2017

Walking down the hall to my Holland (Michigan) ear doctor’s office, I pass an office of Neurocore Brain Performance Centers, a company started in nearby Grand Rapids and whose website declares that its

Holland Center offers testing and drug-free, science-based treatment options for a number of conditions. These include depression, anxiety, ADD, ADHD, autism and sleep disorders. Utilizing brainwave analysis [without] using medication, we focus on positive reinforcement and repetition to train your brain to function better and reduce or eliminate the symptoms of ADHD, depression, anxiety or insomnia.

 

That’s quite a list, with 90+ percent improvement rates claimed for several of them on the Neurocore website. A now-famous local high school grad—Washington Redskins quarterback Kirk Cousins—touts Neurocore’s powers.

 

Such claims evoke déjà vu of the quack snake-oil regimens that once similarly promised cures for a host of ailments. Is there evidence to support Neurocore’s claims? Or are they no more credible than those for “brain training,” which psychological scientists have found to be overhyped?

 

My curiosity about Neurocore was reawakened when its biggest investor, Betsy DeVos (with her husband Dick), became President Trump’s nominee for Education secretary. (Small world: Ms. DeVos is a fellow alum of Kirk Cousins’ local high school, and her parents’ philanthropy enabled our community’s arts center, senior citizens center, and downtown renewal.) The DeVoses’ financing of Neurocore led The New York Times to examine Neurocore’s scientific credibility.

 

So, what has psychological science reported concerning this “science-based treatment” (which has been available since 2004)? My search of the psychological literature, courtesy of PsycINFO, turned up this result:

[screenshot of PsycINFO search results]

A search of abstracts (“ab”) in the ProQuest Health and Medicine database yielded the same:

[screenshot of ProQuest search results]

According to the Times, Neurocore’s chief medical officer, Dr. Majid Fotuhi, reports that Neurocore will soon be publishing its results in “peer-reviewed scientific” publications.

 

When faced with questionable claims, science has a simple procedure: test them to see if they work. If their predictions are confirmed, so much the better for the claims. If they crash against a wall of data, so much the worse.

 

Sometimes, to be sure, the results astonish us. As a treatment for intractable depression, electroconvulsive therapy often works, for reasons we don’t yet fully understand. Who would have guessed? But often, as Nathan DeWall and I report in Psychology, 11th Edition,

science becomes society’s garbage disposal, sending crazy-sounding ideas to the waste heap, atop previous claims of perpetual motion machines, miracle cancer cures, and out-of-body travels into centuries past. To sift reality from fantasy, sense from nonsense, therefore requires a scientific attitude: being skeptical but not cynical, open but not gullible.

 

“To believe with certainty,” says a Polish proverb, “we must begin by doubting.”