
Bedford Bits

10 Posts authored by: Barbara Wallraff

Call Me by My Name

Posted by Barbara Wallraff, Oct 24, 2018

 

Yes, of course writers and speakers should call people what they want to be called—all the way from honorifics (if you’re using them) to pronouns: Ms., Miss, Mx., Caitlyn Jenner, Chaz Bono, he, they …. The need for gender-neutral terms has made referring to individuals a bit more complicated. But referring to some groups the way they prefer—particularly groups that have been historically stigmatized or disadvantaged—has become a minefield.


Certain things, obviously, no one should ever call particular ethnic groups or their members. I’d like to think we all know what these terms are and avoid them. But even well-intentioned, up-to-date writers can give offense, because, after all, who’s to say what a group prefers?


Here, matters become very specific, so let’s specifically consider people with disorders or disabilities. These people are not rare: Data from the U.S. Census Bureau indicates that 19 percent of the U.S. population (or 54.4 million people) are living with a disability.


To be even more specific, let’s focus on the autism community, with which I’m familiar because I have long done freelance editing for Spectrum, a respected autism news website. Until earlier this year, Spectrum changed instances of “autistic people” to “people with autism,” because that was the term that professionals in the field used. The site’s staff consists mainly of science journalists who had been trained in this convention, so upholding the rule was pretty simple. But people with autism, and clinicians or therapists who work closely with them, sometimes write for the site too, and some of them began pushing back, wanting to use the phrasing “autistic people.”


A few months ago, Spectrum announced a change to its policy:

When referring to people on the spectrum, Spectrum’s style has been to use person-first language (‘person with autism’). The rationale for this language is to put a person’s humanity first, before their condition…. But language evolves, and many people in the autism community now strongly prefer identity-first language (‘autistic person’). This terminology embraces autism as part of a person’s identity rather than a condition that is separate from them. Some professionals are also beginning to prefer this language. The style guide of the National Center on Disability and Journalism … recommends asking a person how they prefer to be identified.

 

The options, however, aren’t limited to “person with autism” versus “autistic person.” Some in the autism community dislike both terms, preferring “ASD [autism spectrum disorder] individuals” or “individuals with ASD.” Yet another respectable point of view treats “ASD” and “autism” as interchangeable, whereas others (such as Spectrum) would argue that ASD is autism, period, and should be called autism.


Similar issues come up for people with other disorders or disabilities. One thing that’s generally agreed about all such conditions is that one shouldn’t say things like “afflicted with,” “suffering from,” or “victim of.” On the other side of the coin, neither should one say “differently abled,” “challenged,” “handi-capable,” or “special.” “With” will do just fine.


If it seems to you that I’m not offering clear guidance about the terminology to be used for people with disorders or disabilities—or actually, people with virtually any characteristics whatsoever—you’re right. I’m a sympathetic outsider looking in, not a member of any stigmatized minority. (Okay, I’m a woman, but we’re a majority.) So I don’t believe it’s up to me to choose the terms I like best, unless I’ve studied up on what is up to date and gives offense to few.


The part about staying up to date, with respect to any group, is crucial. In his August 1963 “I Have a Dream” speech, Martin Luther King, Jr., used the word “Negro” 10 times, and “black” in reference to people just three times; that’s pretty good evidence that “Negro” was then the preferred term. The Black Panther Party was founded in 1966, the Black Power movement arose at about the same time, and “black” gradually displaced “Negro.” Then came “Afro-American,” and then “African-American,” and now “people of color”—though this last term does not refer specifically to black people but to anyone who is not white. However, “Latinx” (plural “Latinxs”) is gaining on “Latino” and “Latina” among Hispanics (a term that’s synonymous with “Latinos” in the Census Bureau’s usage, although not in everyone’s).


The idea of the “euphemism treadmill,” a term Steven Pinker popularized in his 2002 book The Blank Slate, comes to mind. I wouldn’t call any of the terms I’ve been discussing a euphemism, though, any more than I’d call “Ms.,” used in preference to “Mrs.” or “Miss,” a euphemism. These things are just what some people prefer to be called.


So we’re back to that as a guideline for how to refer to anyone. Few if any Muslims insist on being called “Muslim people” or “people in the Islamic tradition”; in most contexts most writers who are women prefer to be called “writers” rather than “women writers”; and on and on. Encourage your students, when writing about individuals or groups of people, to be well intentioned, and help them be well informed, and they’ll get it right about as often as any of us can.


Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com.

 

Image Credit: File:Hello my name is sticker.svg - Wikimedia Commons [Public Domain]


Thinking Ahead

Posted by Barbara Wallraff, Jul 25, 2018

 

We’re told that people who hope to have fulfilling careers now and in the coming decades must be adaptable. That’s because technologies such as the internet and artificial intelligence are changing the kinds of jobs for which college is intended to prepare students and the skills they’ll need to do a given job well.


Consider your own job: You probably use a course management system, though such things barely existed 20 years ago. No doubt you’re also familiar with Wikipedia, Twitter, digital textbooks, blogs, Prezi, plagiarism detection systems, hyperlinking, and so on. Technology keeps shaking things up, and technologists warn that the pace of change can only increase.


Rapid change is expected in part because artificial intelligence has become capable of doing many things—identifying individuals in photos, driving and parking a car, and learning from experience, for instance—that once were the exclusive province of people. AI continues to encroach on, or assist with, ever more highly skilled kinds of work. So students must be prepared to have all kinds of innovations thrown at them throughout their careers. They will need to view change as a constant.


And then there’s “correct English,” which changes at a glacial pace. The fundamental structures of English change hardly at all. New nouns and verbs and adjectives may be coined every day, but they’re still nouns and verbs and adjectives, doing the jobs they’ve always done. New prepositions and conjunctions, however, scarcely ever enter English—these are considered “closed” classes of words. Pronouns, too, had long been considered a closed class, though new gender-neutral forms like “ze,” “nem,” “vis,” and the singular “they” are vying with one another to join the mainstream. Even a “glacial pace” of change is speeding up, figuratively as well as literally.


Of course, it’s a good thing to be conversant with the norms of traditional prose, because they make centuries’ worth of writing accessible. We can, for example, read and enjoy Shakespeare. But contemporary literature is ever more inclusive, celebrating ever more registers of English. A few of many possible examples would be the fiction of Jesmyn Ward and Jhumpa Lahiri, and George Saunders’s Lincoln in the Bardo, winner of the 2017 Man Booker Prize.


On the nonfiction side, traditional norms allow us to admire, for example, Charles Darwin’s insights in On the Origin of Species—even as we might wish he’d been able to write it in blog form and post videos of species as he came across them on his travels. (The first edition of Origin had just one illustration, a simple branching diagram.)


The norms of scholarly writing by now have diverged far from those for literature—journal articles, for instance, intentionally rely on the passive voice and colorless, if precise, terminology. Further, the vocabularies of scholarly and technical literature are expanding as rapidly as the fields of research: nanotechnology, mesosociology, biomedical engineering, artificial neural networks, and so on. Most of the world’s scientists and engineers and so on write in English—but often it’s not an English that people outside a small circle can understand.


Even the range of genres available to us has broadened, largely thanks to the technology of the internet. Online, multimodal compositions are coming to be the norm, with text paired with photos, videos, interactive graphics, and so on.


I doubt we can even guess how advances in technology will change our language in the near future and how we use it. Will “it’s” get folded into the spelling “its” because software has trouble distinguishing when to use each form, or will the software catch up with educated human understanding? Will the preferred pronunciations of words become the ones that Siri, Alexa, and Google Assistant recognize most readily? Will grammatically correct phrases that a grammar checker underlines come to be considered incorrect?


But those are just fine points. Taking a longer and wider view, I remember a U.K. ESL specialist explaining to me almost 20 years ago that speech-to-text automation was coming along, automated translation software was coming along, text-to-speech was coming along too—and pretty soon the three processes were likely to become one sequence and we’d be able to converse with almost anyone, each of us in our preferred language. That ability is now tantalizingly close. Once simultaneous translation becomes widely available, it’s bound to shake many things up. And that’s only an example of the kinds of technologies on the horizon that have the potential to profoundly affect our lives, our work, and our use of language.


What does all this suggest for the present and future of English comp? To me, it suggests that students need to know that there’s no one correct way of writing anymore, if there ever was one. Certainly, there are better and worse ways—better and worse registers and tones—in different genres, for different audiences.


Would your students benefit from an assignment like the following? Let’s imagine a surprising archaeological find was recently made near your campus. They might write it up in three to five different genres—maybe as a tweet; as if they had taken part in or watched the dig and are telling a friend about it; as a brief local news story; as a report for an archaeology wiki or blog; and as an academic paper based on firsthand sources or the outline for such a paper.
An assignment like this may not do much to get your students ready for changes to the language and technology yet to come. But this one might help them better understand how diverse is the range of registers and media and genres already available to them—and how adaptable they will probably need to be in their writing.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com or comment below!

 

Image Credit:  Pixabay Image 2228610 by Seanbatty, used under a CC0 Creative Commons License

 

Quick quiz:

  • In the biblical story, what was Jonah swallowed by?
  • How many animals of each kind did Moses take on the Ark?

Did you answer “whale” to the first question and “two” to the second? Most people do … even though they’re well aware that it was Noah, not Moses, who built the ark in the biblical story.

 

So wrote Lisa Fazio, an assistant professor of psychology at Vanderbilt University, in a recent article titled “Why You Stink at Fact-Checking.” Fazio’s article was published in the very cool and credible online magazine The Conversation and republished last month in The Washington Post.

 

Fazio says that psychologists call the relevant phenomenon the Moses Illusion. But not long after I read her article, I heard a non-Moses-related variant on NPR. It went something like this:

“A humorous story is a …”
    “… joke.”

“Where there’s fire, there’s often …”
    “… smoke.”

“Another word for ‘people’ is …”
    “… folk.”

“The white part of an egg is called the …”
    “[???].”

 

Right.

 

The Conversation article, based on a sizable body of research that Fazio and colleagues have conducted, demonstrates how easy and normal it is for all of us to unwittingly absorb—and share—false information. What’s more, the “negative effects of reading false information occur even when the incorrect information directly contradicts people’s prior knowledge.”

 

Participants in Fazio’s studies accepted false information even if they’d been “warned that some of the questions would have something wrong with them.” They did so when the factual errors turned up in questions related to their field of expertise. They did so even when the “critical information” was highlighted in red and they were told to pay particular attention to it!

 

If you’re concerned about your students’ ability to separate information from disinformation when they’re writing papers, I highly recommend you assign them to read Fazio’s article.

 

Writers beware too

The article got me thinking, indirectly, about “indirection.” Most good nonfiction writers I know consider indirection a fault, whether or not they know that name for it. (I’ve never heard another one.) I learned about indirection from the legendary Eleanor Gould of The New Yorker, but just now I was surprised to find that among half a dozen or so of my go-to writing and editing guides, only The New York Times Manual of Style and Usage discusses it:

 

indirection is what Harold Ross of The New Yorker called the quirk of sidling into facts as if the reader already knew them. An example is this sentence, in a profile of a college athlete: The 19-year-old also plays the piccolo. The reader pauses to wonder whether the 19-year-old is the athlete or someone else.

 

The most straightforward remedy is, of course, to get the athlete’s name in there. For example, “Wilson, 19 years old, also plays the piccolo.”

 

Indirection tends not to raise the hackles of readers who haven’t been trained to look for it—possibly because it’s common and accepted in fiction. For instance, take this opening sentence of a short story that appeared in The Atlantic: “It was Saturday and the house was full of flies again.” I’ve remembered that sentence for decades (although unfortunately I can’t remember or find online the title of the story or the author). It hooked me exactly because it sidles into the situation in a way that made me want to know more.

 

However, in nonfiction, avoiding indirection strikes me as important in two ways:

 

(1) Good ol’ clarity. I often advise writers who are trying to make an argument that the goal is to lead readers along step by logical step to their document’s conclusion—which, by the time readers reach it, will preferably seem inevitable. Firmly connecting the content of one sentence to that of the next underpins this step-by-step technique. Not “Wilson is an exceptional athlete. The 19-year-old also plays the piccolo” but “Wilson, at 19 years old, is an exceptional athlete. She also plays the piccolo.” Doesn’t the latter version feel much more grounded and authoritative?

 

(2) Fighting against the Moses Illusion. Note that both examples of the phenomenon I’ve given in this post present the false information indirectly. They don’t say, “Moses took two animals of each kind aboard the Ark. True or false?” and “The white part of an egg is called the yolk. True or false?” I’ll bet that most readers would catch the falsehoods here. Allowing ourselves indirection can also lure us into making mistakes we’re not even aware of.

 

The Conversation article concludes:

Detecting and correcting false information is difficult work and requires fighting against the ways our brains like to process information. Critical thinking alone won’t save us. Our psychological quirks put us at risk of falling for misinformation, disinformation and propaganda.

 

This applies to the psychology of writers as well as readers, I have no doubt. Caveat scriptor. 

 

Image Credit: Pixabay Image 1351629 by quinntheislander, used under a CC0 Creative Commons License


Just the Facts, Folks

Posted by Barbara Wallraff, Apr 25, 2018

 

“Avoid Exaggeration…. When you’re writing for readers who don’t know you well, it’s important to show that you’re reliable and not overly dramatic. Readers have a sense of what the world is like. Exaggerations tend to affect them much the way insults do: they begin to mistrust the writer.”

 

In some contexts now I can feel a bit silly about having written that advice in Mike Palmquist’s and my new textbook. Academic writing is not one of those contexts. Fields that involve statistics, for instance, have strict rules about what does or doesn’t constitute exaggeration, and researchers are trained to respect them reflexively. They write things like “X intervention led to a 2.5 percent increase in Y, but that result is not statistically significant.” That is, they don’t just present the information they’ve gathered; they also acknowledge its significance. When it doesn’t mean much, they make sure to point it out.


In less quantifiable disciplines, a writer might want to argue, say, that Lyndon Johnson was “the greatest civil-rights advocate of all time,” or she or he might quote an authority as having said that. But academics rarely state such things in passing as though they were settled fact. They know that doing so would undermine their credibility.


Outside academia, obviously the rules against exaggeration fly out the window. All the same, I feel strongly that here too avoiding exaggeration pays off. The speech that the Parkland shootings survivor Emma Gonzalez gave on February 17 at a gun control rally in Fort Lauderdale is a case in point. The speech was widely described as emotional and powerful. And yet it covered a lot of information, including:

  • The House of Representatives did not observe a moment of silence for the Parkland victims, as is generally done after mass shootings. 
  • “In Florida, to buy a gun you do not need a permit, you do not need a gun license, and once you buy it you do not need to register it. You do not need a permit to carry a concealed rifle or shotgun. You can buy as many guns as you want at one time.”
  • Australia had one mass shooting in 1999, introduced gun safety laws, and hasn’t had one since.
  • “Japan has never had a mass shooting. Canada has had three and the UK had one and they both introduced gun control.”
  • President Trump’s electoral campaign received $30 million from the NRA.
  • If you divide that dollar amount “by the number of gunshot victims in the United States in the one and one-half months in 2018 alone, that comes out to being $5,800.” (That division is spelled out just after this list.)
  • “President Trump repealed an Obama-era regulation that would have made it easier to block the sale of firearms to people with certain mental illnesses.”
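
A quick back-of-the-envelope check of that figure, using only the two numbers in the speech (the victim count below is simply the implied quotient, not a number Gonzalez cited directly):

$$
\frac{\$30{,}000{,}000}{\$5{,}800\ \text{per victim}} \approx 5{,}172\ \text{gunshot victims in that month and a half}
$$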

 

Gonzalez presented other things, too, as facts—but those are most of the major ones. Given how prevalent false assertions are, I rooted around online to see if I could find out whether any of them were exaggerations. (Some of the results of my research can be found in the links above.)


There are certainly websites and transcripts and videos that claim Gonzalez was exaggerating or worse. For instance, NRA board member Ted Nugent had this to say on a nationally syndicated radio program after Gonzalez spoke at the March for Our Lives rally in Washington D.C., on March 24: “These poor children, I’m afraid to say this and it hurts me to say this, but the evidence is irrefutable, they have no soul.” And after the host showed him clips from the speech, he responded, “The dumbing down of America is manifested in the culture deprivation of our academia that have taught these kids the lies, media that have prodded and encouraged and provided these kids lies.”


All of that’s obviously opinion, not fact. (“The evidence is irrefutable” that Gonzalez and other young activists have no soul?!) Numerous other partisan discreditings of her facts have themselves been discredited—and around it goes. But what I wanted to find out is whether websites that are respected across the political spectrum, such as centrist news and fact-checking sites, “called BS,” to borrow Gonzalez’s memorable phrase, on specific information she had shared.


Politifact reports that the information about gun purchases and permits in Florida “tracks with” material from the NRA itself, but calls the claim “You do not need a permit to carry a concealed rifle or shotgun” “probably the weakest line of the speech.” It explains, “There is no permit available to carry a concealed rifle or shotgun, because concealed carry of those weapons is not allowed.” Snopes.com, another fact-checking site, rates the assertion that Trump repealed the Obama-era regulation only “Mostly true,” inasmuch as it’s a simplification. And that’s about as much truth-stretching in the speech as numerous credible sources seem to have been able to find. Gonzalez’s speech was emotional and powerful because it was factual.


There’s no denying that it’s possible to exaggerate widely and often and still succeed in public life. But if ever there was a teachable moment to show your students how much words can matter—specifically words from young people like them, and specifically accurate words—Emma Gonzalez’s Fort Lauderdale speech would be it.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com.

 

Image Credit: "How to Spot Fake News" by IFLA on Wikimedia Commons, used under Creative Commons Attribution 4.0 International license

 

Your students should use Merriam-Webster.com, for sure. It’s the best dictionary for everyday purposes, and I’ll explain why after I give you a bit of background and do some complaining.

 

Once upon a time, I might have recommended the New Oxford American Dictionary. But the print edition hasn’t been updated since 2010, and its free online sibling is a bit of a project to find. Click here, and where you see “Dictionary” in white-on-black letters, scroll down to “Dictionary (US).”

 

I definitely would have recommended the American Heritage Dictionary. It’s still good. But once upon a time, it had ample resources and wasn’t shy about offering informed, expert advice. I was thrilled when, years ago, the AHD invited me to become a member of its “usage panel.” But last month, after half a century’s existence, the usage panel was disbanded. And as of this spring, the AHD’s editorial staff will consist of one part-time lexicographer. The AHD ain’t what it used to be.

 

The American Heritage Dictionary was launched in 1969, with the goal of restoring lexicographical standards that many felt had been recklessly abandoned when Merriam-Webster published Webster’s Third Unabridged, in 1961. Web3 did away with usage labels like “colloquial,” “incorrect,” and “humorous.” It lopped 250,000 entries out of its predecessor to make room for 100,000 new words (“unabridged,” eh?). And it was “descriptive,” promising only to inform people about how words are used. If folks said “irregardless,” then “irregardless” was a word worthy of inclusion. (Okay, Web3 does label it “nonstand.”)

 

Its predecessor, Webster’s Second, published in 1934, is widely acknowledged to have been magisterial in its day. It was proudly “prescriptive,” advising people how to use words. According to Web2, “irregardless” is “Erron. or Humorous, U. S.”

 

The outrage that, in some quarters, arose when Web2 abdicated in favor of Web3 led to the creation of the AHD. It assembled an impressive hundred-member usage panel (Isaac Asimov! Alistair Cooke! Langston Hughes! Barbara Tuchman!). Their collated opinions about the likes of between you and I were presented as notes beneath the relevant headwords (99 percent of the panel disapproved of between you and I—and I sure wish I knew who the rebel was). As for “irregardless,” it doesn’t look as if the first edition’s lexicographers even bothered to ask for the panel’s opinions. The note beneath the word reads, “Usage: Irregardless, a double negative, is never acceptable except when the intent is clearly humorous.”

 

But the AHD, along with the other major dictionaries, has by now slid some distance down the slippery slope whose bottom Web3 was so eager to reach. Heck, even the incomparable but special-purpose OED now tells you that “literally” can mean “figuratively.”

 

To be sure, descriptivism has qualities that many find appealing. First, it presents itself as egalitarian, answering “Why should we privilege the locutions of dead white men?” with “We shouldn’t, and we don’t. We tell you how a range of past and present English speakers use words and encourage you to make choices for yourself.” I mistrust that claim, because no doubt all sorts of biases lurk in the underlying data. What’s more, if dictionaries really wanted to help writers and speakers of various English dialects use them more effectively, shouldn’t there be specific dictionaries for many more subsets of English than there are? All the dictionaries I’m discussing purport to cover all dialects.

 

Second, descriptivism is a definable goal that for-profit companies, which publish most dictionaries, can comfortably aim for. Data about word usage is in limitless supply, and it’s cheap and superficially unambiguous, while discernment about usage can be hard to find, costly to engage, and easy to doubt and argue with.

 

Finally, descriptivism feels scientific, like linguistics. And up to a point, it is scientific. Today’s lexicographers have at their command astonishing “corpora”—huge electronic bodies of language drawn from a wide range of spoken and printed sources—to tease out new information about how the language is used. Big data!

 

Would you, however, consider it a good idea to fold your school’s English composition program into its linguistics department? The two have different purposes and use different methods. The purpose of dictionaries was, until the 1960s, neatly aligned with the purposes of English composition courses: to teach people how to use our common language correctly, clearly, and effectively. Web3 and its successors, including M-W.com, have washed their hands of that responsibility. I’ve even heard lexicographers make fun of the idea that that’s their job. Meanwhile, the rest of us turn to dictionaries in hopes they’ll teach us to use language better. Sorry, that’s not what they do anymore.

 

So why is M-W.com the best dictionary for everyday purposes? Because it’s free, readily available, and easy to use. It has usage notes that give sound guidance. It’s online (as the others are too), so you always see the latest versions of entries. Merriam-Webster itself admits that its .com dictionary is more up to date than the latest edition of its Collegiate. And the Collegiate—and therefore, I have to assume, M-W.com—is in wide use. Stylebooks including The Chicago Manual, and periodicals ranging from The New Yorker to MIT Technology Review, have it as their house dictionary.

 

In descriptivism—as in other things I like better, such as evolution—success breeds success. If the nation’s most widely used dictionary says that the verb “face-palm” is written with a hyphen and “humblebrag” is not, that in itself is bound to skew the words’ spelling toward those choices—until the people who mostly use them decide dictionaries are irrelevant. I’ll write some other time about how to go around dictionaries direct to the sources and be your own lexicographer.

 

Credit: Pixabay Image 1798 by PublicDomainPictures, used under a CC0 Creative Commons License

 

For 30-some years I’ve worked as a line editor of newspaper, magazine, and journal articles and professional reports. Much of what I edit is also reviewed by a professional fact checker, or else I check it as I go along. So, I get to see what’s incorrect in final drafts as well as what’s ungrammatical or infelicitous. The authors of virtually everything I work on have bachelor’s degrees, master’s degrees, and often even Ph.D.’s, yet everyone goofs up in their writing sometimes. Allow me to share some common kinds of goofs. When I see them, it lowers my opinion of the writer. Hopefully we can motivate students by reminding them that these goofs reflect poorly on them in the eyes of prospective employers and other readers.

 

  • Professional fact checkers, in particular, know that writers are not always careful with factual details—that’s why checkers have jobs. Working alongside them, I’ve occasionally found startling differences between the manuscript the writer turned in and the galleys once the checkers have done their work. The memorable “oldest human settlement in North America,” for example, might become a ho-hum “long-inhabited place.” Often, a writer’s most compelling assertions are what need to be toned down. It is much easier to overstate one’s case than to present a nuanced truth that will lead readers to a similar conclusion. But if the truth is nuanced, so must be the writing about it.

 

  • Carelessness is also endemic in footnotes and endnotes, by the way. I’ve hardly ever fact-checked a piece with notes in which they all were accurate and consistently formatted. Notes in journal articles and reports that have multiple authors tend to be especially haphazard. Collaborators should agree in advance about what format to use, or assign one person at the editing stage to make the formats consistent, or both.

 

  • Inexperienced writers are prone to overcapitalizing. For instance, according to a graduate student’s self-description, he had worked at “a Startup that uses Artificial Intelligence.” The temptation to overcapitalize is especially strong with phrases that are often turned into acronyms—as “artificial intelligence” is with “AI.” But unless the thing is an official entity or officially designated kind of entity, there’s no reason to cap it. The NIH is the National Institutes of Health, a federal agency; a PCMH is a patient-centered medical home. The former phrase is a proper noun, while the latter is just jargon, or a so-called term of art. The reasons not to cap such things are that doing so ascribes undue importance, and in aggregate it can look forbidding and start to make English look like German.

 

  • My texting app is pretty good at distinguishing between “it’s” and “its”—but I can say that with confidence only because I know the difference. The more texts one writes, the more likely it becomes that one will have trouble deciding which form is correct in other writing. The English language didn’t make it any easier on us when it mandated that “the book’s cover” should have an apostrophe but “its cover” should not. All I can say is that good writers have been able to reliably distinguish between “its” and “it’s” for centuries. Its (just kidding!) not complicated.

 

  • Young professionals sometimes are unclear on when to hyphenate. They may hyphenate after “-ly” adverbs—as in “a strongly-voiced objection” and “a fully-fledged alternative”—even though the rule against doing that would seem to be straightforward. Those phrases don’t misread without hyphens, so the hyphens serve no purpose. Young people also tend to multiply hyphens in phrases, writing things like “return-on-investment” and “an arrival on-time.” “On-time arrival” uses a hyphen so that “on-time” will be read as a unit. The need for that hyphen tempts people to write “arrival on-time”—and from there, I guess, “return-on-investment” isn’t much of a stretch. However, hyphens are much less often needed in modifiers that come after what they modify than in modifiers that come before. A “well-thought-out” argument is “well thought out.” Dictionaries, unfortunately, don’t reinforce this point. Merriam-Webster online says that “well-thought-out” is an adjective, defines it, and leaves it at that. Type “well thought out” into its search bar, and the site will hyphenate it for you and take you to that entry. As is usually the case, technological tools cannot take the place of good classroom instruction. If you don’t teach your students the difference, they may never find it out.

 

Some of these shortcomings have to do with fine points of writing, which won’t make or break, say, a résumé or a professional report the way sloppy thinking or shoddy research will. All the same, the problems I’ve described can for the most part be solved quickly, simply, and definitively. Getting things like these wrong signals a lack of mastery loud and clear—so it can’t hurt to teach your students to get them right.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com.

 

Barbara Wallraff is a professional writer and editor. She spent 25 years at the Atlantic Monthly, where she was the language columnist and an editor. The author of three books on language and style—the national bestseller Word Court, Your Own Words, and Word Fugitives—Wallraff has lectured at the Columbia School of Journalism, the Council of Science Editors, Microsoft, the International Education of Students organization, and the Radcliffe Publishing Program. Her writing about English usage has appeared in national publications including the American Scholar, the Wilson Quarterly, the Harvard Business Review blog, the Wall Street Journal, and the New York Times Magazine. She is coauthor of In Conversation: A Writer's Guidebook, which will be published in December 2017.

 

Credit: Pixabay Image 1870721 by 3844328, used under a CC0 Creative Commons License

 

The principle is: Refer to people the way they want to be referred to.


Admittedly, special situations come up that test the principle—as when Prince changed his stage name to a symbol that none of us have on our keyboards and that is said to be unpronounceable. But Prince’s fans went along with it, mostly calling him “The Artist Formerly Known as Prince.” Seven years or so later, when he changed his name back to Prince, they went along with that too.


Then again, some special situations lurch over the line between showing respect and acquiescing in puffery. Maybe a senior employee likes to be introduced by his title, Chief Knowledge Officer So-and-So, or a group wants its name written in all capitals and never mind that the name isn’t an acronym. Marketers and publicists and so on may not fully understand the implications of terminology they ask for. We have to think for ourselves.


And yet the general principle holds: Refer to people the way they want to be referred to. It applies when writing or speaking to or about individuals in academic, workplace, and social settings. It applies to pronouns (“a transgender man … he”). It applies to groups (“people with autism,” “Rohingya”). In many cases, we can’t truly know what most members of groups, or what particular atypical individuals, prefer. But since the principle we’re discussing is fairly commonly understood, we can rely on sources such as reputable media online to have done that part of our homework for us, and we just need to see how they refer to the people we’re writing about.


The principle also applies to you—even if you’re not particular about how your students refer to you. It may be useful to remember that students are negotiating the transition from treating adults with deference, as they were probably brought up to do, to treating them mainly as peers. In my opinion, you’ll be doing students a favor if you make some preference or other about how to refer to you clear as early as possible in your interactions with them. Various choices can be valid in a student-instructor relationship (Dr. Baker, Professor Baker, Ms. Baker, Victoria, Vickie), so why make them guess which you’d like? More important, this situation is about respect, and it offers you a nonconfrontational opportunity to indicate what you find respectful to the right degree.


In addition to the question of how to address someone, there’s the question of how to refer to that person in writing. When students cite sources in papers, the style they’re following (MLA, APA, Chicago, CSE) tells them the forms to use. Most styles call for citing last names only, and never mind the advanced and honorary degrees, the lofty titles that many of the sources they’re citing worked so hard to earn. For instance, in academic text and bibliographies alike, the cosmologist, when he’s a source, is simply “Hawking.”


But what if the assignment is to write a brief biography of Stephen Hawking? On first reference he’s probably “Stephen Hawking.” After that, though, is he “Hawking” or “Dr. Hawking” or “Stephen”? Is his first wife “Jane” or “Mrs. Hawking”? Is his second wife “Elaine” or “Ms. Mason”? These options are more or less appropriate in different contexts, for different audiences; they set different tones. Similarly, calling Hillary Clinton “Mrs. Clinton” sets a different tone from calling her “Secretary Clinton.”


To be sure, these are trivial choices. But they also signal to readers, possibly several times per page, such things as how familiar the writer is with the genre in which she is writing. If students can keep half an eye on how their sources refer to people, even as they keep their other eye and a half on what the sources are saying about them, it can only work to their advantage.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com.

 


 

Credit: Pixabay Image 2849602 by surdumihail, used under a CC0 Creative Commons License

 

A friend of mine, Jim, is a successful SAT/ACT tutor who’s been in the business for some time. In a typical year, he tells me, several of his tutees get an 800 on the SAT’s reading and writing section. Now that your fall composition courses are in full swing, you may find useful—and possibly surprising—his perspective on things that affect many students’ ability to read critically and write persuasively.


“They have trouble with irony, if it’s good irony,” Jim told me. I was puzzled. My impression is that students adore irony and other rhetorical devices with which people make their points indirectly. “That’s why many of them don’t like Jane Austen,” he said. “They don’t realize she’s funny.”


Indeed, young people’s idea of irony can be heavy-handed, and when they employ irony themselves, they tend to lack control over it. Maybe they don’t even necessarily recognize subtle irony? By now we’re all used to the idea that tone is easy to misinterpret in emails and texts—in writing, that is. I wonder whether the modern practice of adding emoji to everyday communications is undermining readers’ ability to recognize, and writers’ ability to convey, irony and similar matters of tone in plain words.


But “the biggest thing” he notices is that students “infer too much,” Jim told me. “This is the big problem of our time! They don’t see what’s there; they see what they expect is there. They bring their own perspective from previous things they’ve read and seen, movies and such. They think the writer is saying something expected.”


That observation, of course, applies directly to students’ ability to read critically, but it has implications for their ability to write well too.


“It’s hard to write something that tells readers exactly what you mean without saying the obvious. And saying the obvious makes readers think you don’t understand where they’re coming from,” Jim said. “The trick is to know what to leave out. A good writer is not only telling you things but also giving you clues to what they’re implying.” That is, saying the obvious is counterproductive. Not only does it validate the viewpoint of readers who expect to be reading the expected, but it risks boring them.


“Teach your students to trust their readers,” Jim said. “Writers always have to wonder what the reaction of the reader is going to be.” Naturally, you, the instructor, are the reader your students really need to please. But often, let’s say when you’ve assigned a persuasive essay, you may want them to write as if they are addressing an audience that is interested in the topic and knows the basic facts about it but doesn’t necessarily see it their way.


“For the student, it’s a role-play,” Jim said. “They should do what actors are sometimes taught to do: play to one person—in this case, probably a person who’s not the instructor.”


As writers, no doubt most of us are inclined to imagine we understand our readers, hypothetical or real, better than we really do. (This is where not inferring too much sneaks back in.) Recognizing this tendency—and not projecting ourselves onto them—is a first step toward knowing our audience better and therefore being better able to persuade them, inform them, hold their interest, or whatever our intended purpose is.


“The reader a student is writing for should be someone they know,” Jim continued. “Someone who’s a little different from them, but with things in common.” Of course, all of us have many things in common—mainly the fundamental things. So in a way, the deeper the subject, the easier it may be to find and speak to that common ground.


“With truly great writers,” Jim said, “you can read something of theirs from 500 B.C. and say to yourself, How do they know me?”


But if your student writers can begin to elicit that reaction from someone—like you or their ideal reader—who lives in their own time and place, surely you can count that as an accomplishment that you and they can be proud of.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com.

 


 

Credit: Pixabay Image 1185626 by janeb13, used under a CC0 Creative Commons License

Do you remember being 19? I do. The last thing I wanted to be was correct—or (in most circumstances) appropriate. I wanted to be … independent! Creative! Envied! I did want to write well, and I understood that doing so involved correct grammar and the use of appropriate language. I just didn’t like to think of it that way.

 

Lately I’ve been wondering what a better way to think of it might be. What are we really asking students to do when we tell them to write correctly and appropriately? We’re asking them to conform to standards of language and conventions of genre and discipline. Unfortunately, conformity is another thing that young people may not be interested in working hard to achieve. “Why are we supposed to do X?” they’ll want to know.

 

These are much the same issues as come up when someone is learning a sport. The coach will say, “Hold the bat/club/racket like this,” and the learner will be more inclined to remember and follow through if the coach offers a reason, such as “It will give you a stronger swing.” Saying only “This is the correct way to do it” is not only less helpful but also less persuasive.

 

With sports, people learn many of the rules and the reasons for them by, over time, watching, asking questions, and playing. With writing, we learn by reading, asking questions, and writing. Some reasons are obvious; just seeing the rule broken shows what its purpose is. The reason not to write run-on sentences, for instance, becomes clear if you have to struggle through a few run-ons that someone else wrote. Or consider repeats. In my work as an editor, I mark phrases like “The supervisor is responsible for supervising …” or “Currently, the currents in the Atlantic Ocean …” and suggest changes. Authors do ask me why I’ve suggested certain changes, but none has ever asked me about these. Evidently they realize what’s wrong without asking.

 

Other reasons can be hard to figure out. For instance, why do we look down on the passive voice in most fields and genres but consider it standard when writing in scientific disciplines? If the active voice is stronger and more forceful in journalism or a memoir, why isn’t it more forceful when describing a scientific experiment? It’s because, in science, who performed the experiment is generally beside the point. A sentence like “Jamie Gonzales placed the mice into the maze” puts the focus on the wrong thing; “The mice were placed into the maze” tells readers what they need to know, and who did the placing doesn’t really matter. Ideally, experiments will be replicable—anyone might follow the same procedure and achieve the same results. To that extent, the passive-voice convention makes sense.

 

Not even expert writers always know the reasons underlying some of the writing standards and conventions they observe. Often, to begin to understand a convention, we need to critically read examples of its being observed or not, and we need to consider how the convention evolved over time. It would be an interesting assignment to have students read passages in various genres, possibly all on the same subject—say, an abstract from a scientific journal, a newspaper article about the same study, and an editorial discussing its implications—and discuss the differences they perceive. How do the purpose and audience of each genre influence the conventions used within?

 

When the reason for a writing standard or convention is opaque, we may have to ask students to take it on trust that there is a reason. That trust will be easier for them to come by if we explain as many reasons as we can for other standards and conventions, rather than simply telling them what’s correct.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@mac.com.

 


Even able writers who try their best to “be clear” may fail miserably. A couple of months ago, I was reminded of how subtle clarity can be—and how greatly it can matter.

 

In March the U.S. Court of Appeals for the First Circuit handed down a controversial decision—one I heard about the same way you probably did, in coverage by the New York Times, the Washington Post, CNN, NPR, and other news outlets. The court reversed a lower court decision in a Maine labor law case to rule in favor of the plaintiffs, dairy-truck drivers, on the grounds that the absence of a comma in a state law made it ambiguous.

 

The law says that the usual regulations mandating extra pay for overtime do not apply to the following categories of work: “The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution” of perishable foods. Because no comma appears in that series after “shipment”—because the series lacks a serial, or Oxford, comma—about 75 drivers are entitled to four-plus years of back overtime, the court decreed. That overtime is worth millions.

 

As the Times explained the Court of Appeals’ reasoning, “The phrase could mean that ‘packing for shipment or distribution’ was exempted, but not distribution itself.” So, arguably, people who distribute perishable food do not belong to an exempt category, and they do therefore deserve overtime pay.

 

My first thought when I read this was, No, sorry—nothing against truck drivers, but the law does not say that. If what was exempt from overtime was packing for either of two purposes (shipping or distribution), the phrase should have ended, “… marketing, storing OR packing for shipment or distribution.” The missing “or” is the clincher, not the serial comma, which would unimportantly appear, or not, after “storing.” As written, the single “or” can’t, correctly, be part of the phrase “shipment or distribution” because it’s busy tying together the series as a whole. The two distinct work categories of packing and distribution are being declared exempt.

 

But then I stepped back from the grammar and thought, The higher court found the law ambiguous! If the justices couldn’t agree on what the law means, what more do we need to know? And I noticed that the sentence does contain a bit more evidence pointing toward ambiguity: canning, processing, … and packing are all -ing forms of verbs, specifically gerunds, whereas shipment and distribution are not. The form of these two nouns subtly encourages us to think of them as different from the gerunds in the series. Might they be a pair of objects of the preposition for?

 

Further, let’s acknowledge that not every law on the books is well written or even up to code grammatically. So never mind that the manual for drafting laws in Maine, according to the Times, advises against using serial commas. (The manual is said to give “trailers, semitrailers, and pole trailers” as a don’t-do-it example, and “trailers, semitrailers and pole trailers” as one to emulate.) It’s crazy that the meaning of this section of the law in question depends so heavily on one optional comma.

 

To me, this episode demonstrates why we should all routinely use serial commas. “The canning, processing, packing for shipment, or distribution” of perishable foods—that’s clear, no?

 

But it also contains a broader lesson about looking out for readers. I doubt that any of us can write anything worth reading if we’re constantly considering possible misreadings and ambiguities in sentences as we draft them. So we need to take that step after drafting and read our work over with sharp, skeptical eyes. I think of it as pretending I’ve never before seen what I just wrote. When I have time to put the work aside overnight, or at least while I go out for a walk, I do it. It makes the little trick of imagination I’m playing easier. Alternatively—or usually also—I ask someone to weigh in who really has never before seen what I wrote and who understands that I’m not just seeking praise.

 

While looking out for our readers, we must also give them credit as critical thinkers; we don’t need to tell them things over and over. Trying to see through their eyes what we’ve written doesn’t have to lead to dull repetition. Rather, the idea is to meet readers more than halfway.

 

Looking out for readers may not be worth millions to them—let alone to us—but still it’s worth a lot.

 

Do you have questions about language or grammar, or are there topics you would like me to address? If so, please email me at bwallraff@me.com.

 
