Bedford Bits

Jack Solomon

Things Fall Apart

Posted by Jack Solomon, Mar 29, 2018

 

While there appears to be some significant doubt over whether Cambridge Analytica really had much effect on the outcome of the 2016 presidential election (Evan Halper at the L.A. Times makes a good case that it didn't), the overall story of how millions of Facebook profiles were mined for partisan purposes remains profoundly significant at a time when digital technology seems to be on the verge of undermining the democratic process itself. As such, the Facebook/Cambridge Analytica controversy is a worthy topic for a class that makes use of popular culture in teaching writing and critical thinking.

 

If you happen to be using the 9th edition of Signs of Life in the U.S.A., you could well begin with John Herrman's "Inside Facebook's (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political Media Machine." In this extensive survey of the many ways in which Facebook has fostered an ecosystem of political activists who invade your news feed with ideologically targeted content, Herrman shows how the marketing of online behavior has been transformed into a "(Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political Media Machine." That our Internet activity is being tracked and our data mined is no secret anymore, and many people don't seem to mind—so long as it only results in specially curated advertising pitches and coupon offers. But what Herrman describes goes well beyond product merchandising into information manipulation: the building of highly politicized news silos in which the news you get is the news that someone has calculated you want to get, and nothing else, as more and more Americans move away from such traditional news sources as newspapers and television to Facebook, Twitter, and a myriad of other social media.

 

Brooke Gladstone's "Influencing Machines: The Echo Chambers of the Internet" (also in the 9th edition of Signs of Life) succinctly explains the effect of this shift. With no pretense of presenting a balanced palette of news and information, the new media are exacerbating and deepening the social divisions in America, creating ideological echo chambers that effectively constitute alternate realities for those who inhabit them. The result is a kind of political and cultural echolalia.

 

It's little wonder, then, that the contending parties in America cannot find a way to communicate effectively with each other. Already divided by a history of cultural conflict and contradiction (chapter 7 of Signs of Life explores this division in depth), Americans have less and less in common with those whose lives lie on the other side of the great divide.

 

There is something profoundly ironic about all this. For many years it has been assumed that the effect of modern mass media has been to chip away at America's regional differences, flattening them out into a kind of unaccented (literally and figuratively) sameness: a mass culture watching the same TV shows, eating the same food, and talking in the same way. But now something is changing. Rather than tending towards a common culture, America, sliced and diced by digital algorithms, is dividing into mutually hostile camps.

 

William Butler Yeats said it best long ago at a time when his own country was divided in two: "Things fall apart," he lamented, "the centre cannot hold." Now there's something to hashtag.

 

 

Image Source: "Facebook security chief rants about misguided “algorithm” backlash" by Marco Verch on Flickr 10/08/17 via Creative Commons 2.0 license.

Jack Solomon

And the Winner Is . . .

Posted by Jack Solomon, Mar 15, 2018

 

As I consider the cultural significance of this year's Academy Awards ceremony, my attention has not been captured by the Best Picture winner—which strikes me as a weird amalgam of Waterworld, Beauty and the Beast (TV version), and Avatar, with a dash of Roswell thrown in for good measure—but by something quite external to the event. Yes, I'm referring to the clamor over the 20% television ratings drop that has been lighting up the airwaves.

 

Fortune blames the drop-off on "the rapidly-changing viewing habits of TV audiences, more and more of whom are choosing to stream their favorite content online (including on social media) rather than watching live on TV," as do Vulture and NPR, more or less. They're probably right, at least in part. Other explanations cite the lack of any real blockbusters among the Best Picture nominees this year (Fortune), as well as the Jimmy Kimmel two-peat as Master of Ceremonies (Fortune). But the really big story involves what might be regarded as the transformation of the Nielsen ratings into a kind of Gallup Poll.

 

Consider in this regard the Fox News angle on the story: "Oscars ratings are down, and ABC's lack of control over the Academy may be to blame." Or Breitbart's exultation over the low numbers. And, of course, the President's morning-after tweet. In each case (and many others), the fallout from the fall-off is attributed to voter—I mean viewer—disgust with the "elitist" and "liberal" tendencies of the Academy, which is now getting its comeuppance.

 

Is it? I don't know: a thorough analysis of the numbers seems to be in order, and I would expect that the ABC brass at the very least will be conducting one in an attempt to preserve their ad revenues. In my own view, whatever caused the ratings drop is certainly overdetermined, with multiple forces combining to reduce television viewership not only of the Academy Awards and the Super Bowl but of traditional televised media as a whole. Certainly Fortune, Vulture and NPR are correct about the effect of the digital age on American viewing habits, but, given the leading role that Hollywood has played in the resistance to the Trump presidency, a deeper exploration of the possibility of a growing resistance to the resistance as evidenced in television viewing preferences could shed some light on emerging trends within the culture wars in this country.

 

Of course, the Fox News (et al.) take on the matter could prove to be fake news in the end, but even should that happen, the fact that the ratings drop could be so easily exploited for political purposes is itself significant. There are a number of takeaways from this. The first can be found in a Washington Post blog entitled "Trump is supercharging the celebrification of politics." The Post blog surveys an intensification of a cultural process that has been the core premise of nine editions of Signs of Life in the U.S.A.: namely, that the traditional division between "high" culture and "low" (or workaday and recreational) in America is being replaced by a single "entertainment culture" that permeates our society from end to end. The transformation has been going on for a long time, but Trump has intensified it.

 

But as the hoohah over the Academy Awards television viewership decline demonstrates, this entertainment culture is not a common culture: Americans are lining up on two sides of a popular cultural divide that matches an ideological one, with Fox News audiences lined up against MSNBC's, and innumerable other viewership dichotomies (Duck Dynasty, say, vs. Mad Men) indicating just how wide the culture gap has grown. So now we're counting audience numbers for such once broad-appeal spectacles as the Super Bowl and the Academy Awards to see which side is "winning." This is a new normal indeed, and it is indicative of a country that is tearing itself apart.

 

But then again, the same Post blog that I've cited above reports that the most read Washington Post story for the day on which the blog appeared concerned "the season finale of The Bachelor"—a TV event that really puts the soap into soap opera. So maybe there actually is something of world-historic importance for Americans to rally 'round after all.

 

Image Source: "Academy Award Winner" by Davidlohr Bueso on Flickr 09/06/09 via Creative Commons 2.0 license

I had not planned on writing on this topic as my Bits Blog posting deadline approached. But when a headline in the L.A. Times on February 21st blared that "Conspiracy theories about Florida school shooting survivors have gone mainstream"—and this on a day when America's school children rose up to say "enough is enough" about gun violence—I felt that I ought to say something. What to say, however, is difficult to decide. As I wrote after the Route 91 Harvest music festival massacre in Las Vegas, I am not confident (to put it mildly) that anything meaningful is going to be done—the L.A. Times has nailed it with a "Handy clip-and-save editorial for America's next gun massacre" and I don't have any solutions that the students now marching for their lives aren't already proposing more effectively than I can. But the whole mess has—thanks to something I've read in the Washington Post—enabled me to crystallize a solution to a critical thinking conundrum that I've been pondering, and that's what this blog will be about.

 

That conundrum is how to teach our students how to distinguish between reliable and unreliable information on the Internet. It seems like such an easy thing to do: just stick to the facts and you'll be fine. But when the purveyors of conspiracy theories have grown as sophisticated as they have in mimicking the compilation of "factual" evidence and then posting it all over the Internet in such a way as to confuse people into thinking that there is a sufficiency of cross-referenced sources to make their fairy tales believable, it becomes more of a challenge to teach students what's rot and what's not. And as I've also written in this blog, that challenge isn't made any easier by academic attacks on objective factuality on behalf of poststructural theories of the linguistic and/or social construction of reality. So, as I say, the matter isn't as simple as it looks.

 

Here's where that Washington Post article comes in. For in Paul Waldman's opinion piece, "Why the Parkland students have made pro-gun conservatives so mad," he identifies what can be used as a simple litmus test for cutting through the clutter in an alt-fact world: keep an eye out for ad hominem arguments in political argumentation.

 

Here's how he puts it:

The American right is officially terrified of the students of Marjory Stoneman Douglas High School. Those students, who rapidly turned themselves into activists and organizers after 17 of their fellow students and teachers were murdered at their school, have become the most visible face of this new phase of the gun debate, and conservatives are absolutely livid about it. As a consequence, they’re desperately arguing not just that the students are wrong in their suggestions for how gun policy should be changed, but also that they shouldn’t be speaking at all and ought to be ignored.

 

There are two critical reasons the right is having this reaction, one more obvious than the other. The plainer reason is that as people who were personally touched by gun violence and as young people — old enough to be informed and articulate but still children — the students make extremely sympathetic advocates, garnering attention and a respectful hearing for their views. The less obvious reason is that because of that status, the students take away the most critical tool conservatives use to win political arguments: the personal vilification of those who disagree with them.

 

It is the use of "personal vilification of those who disagree" that reliably marks out an evidence-starved argument. Thus, when Richard Muller—once a favorite of the climate change denial crowd—reviewed his data and announced in 2012 that he had changed his mind and concluded that climate change is both real and anthropogenic, his erstwhile cheerleaders simply began to call him names. And you probably don't even want to know about the personal attacks they have been making on Michael Mann.

 

But given the high level of personal vilification that takes place on the Net (the political left can be found doing this too), our students have probably been somewhat desensitized to it, and may even take it for granted that this is the way that legitimate argumentation takes place. This is why it is especially important that we teach them about the ad hominem fallacy, not simply as part of a list of logical and rhetorical fallacies to memorize but as a stand-alone topic addressing what is probably the most common rhetorical fallacy to be found on the Internet, and in political life more generally, these days.

 

Now, we can't stop simply with warning our students against ad hominem arguments (we should teach them not to make them either), but we can use that warning as a point of departure: if someone's claims are swathed in personal attacks and accusations, it is likely that there is nothing of substance behind the argument. After all, an ad hominem attack is a kind of changing of the subject, a distraction from the attacker's lack of any relevant evidence.

 

I know this won't change the world, and it is of no use against the sort of people who are now vilifying American school children who have had enough, but at least it's a place to begin for writing and critical thinking instruction.

 

Yes, it's that time of year again: time for Super Bowl Semiotics, advertising division. And as I contemplate this year's rather uninspiring, and uninspired, lineup, I find myself realizing that the ads were more significant for what they didn't say (or do) than for what they did—like Sherlock Holmes' dog that didn't bark in the night. Here's why.

 

To start with, one dog that didn't bark this time around was a real dog: that is, after a couple of high-profile puppy-themed ads in the recent past (Budweiser's "Puppy Love" ad from Super Bowl 48 was a hit, while GoDaddy's parody the following year was a disaster—you can find complete analyses of both in the 9th edition of Signs of Life), Madison Avenue decided to let this sleeping dog lie for once, along with the ever-popular cute animal theme overall. I expect to see it come back next year (or soon thereafter), however: cute animals are good salespeople in America.

 

Of course, there was a fair share of comedy in the lineup (yuks sell stuff too), and the consensus appears to be that the comic ads from Tide took the prize for Best Ads in a Sponsoring Role. The Tide ads, of course, borrowed a page from the Energizer company, whose Energizer Bunny ads—first aired in 1989—employ a sophisticated advertising strategy that is essentially self-reflexive, parodying existing campaigns for other products, and, in so doing, appealing to an audience that has been so supersaturated with advertising gimmicks that it has become skeptical of advertising in general.

 

But the big story of Super Bowl 52 was the relative lack of politically themed ads. Given the way that social politics—from #OscarsSoWhite to #MeToo—have been playing such a prominent role in America's popular cultural main events recently, this may appear to be a surprising omission, but not when we consider how the NFL has been witness to an entire season of political protests that have tied it up in the sort of controversies it is not well equipped to handle. And given the ruckus that an immigration-themed Super Bowl ad made last year, one can see why politics was not on the agenda.

 

Not taking the hint, however, the ad folks at Dodge thought that they could enter the political fray in a way that would make everyone happy . . . and fell flat on their faces with their Martin Luther King, Jr. spot. Dr. King, as at least one critic of the ad has put it, wasn't talking about trucks. In fact, as some careful readers of the actual MLK speech that Dodge appropriated have noted, King was warning his audience precisely against the power of advertising. Um, maybe a little learning is a dangerous thing.

 

In my view, the ad folks at Dodge tripped up in yet another way during the night, though I don't think that anyone else has noticed this. I refer here to the Vikings-take-Minneapolis Ram truck spot, which took a group of actual Icelanders—dressed up as medieval Viking raiders—from Iceland to Minneapolis in a thoroughly juiced-up journey, all set to Queen's "We Will Rock You." Now, some Minnesota Vikings fans have taken the ad as some sort of dig at the football team, but I think the real story parallels what I've been writing here about the Thor movies. All those ferocious blondes, cruisin' for a bruisin' . . . . I don't want to press the matter, but I don't think that this is really a good time for so aggressive a display of what can only be called raw "white power."

 

Perhaps the biggest story of all, however, is that no ad really made that much of an impact. Oh, there are (as always) lists of favorites to be found all over the Net, but nothing really broke through the ad clutter in any big way. At five million dollars for thirty seconds of exposure (the cost seems to go up by a tidy million every year), that's something of an anti-climax, but perhaps that's as it should be. After all, there is still a football game somewhere behind all this, and, as games go, it was quite a good game.

 

 

Credit: “2018 Super Bowl LII Minnesota Banner – Minneapolis” by Tony Webster on Flickr 1/27/18 via Creative Commons 2.0 license.

Since the publication of the first edition of Signs of Life in the U.S.A. in 1994, semiotics has become a popular instrument for promoting critical thinking skills in composition classrooms. With such a broad variety of semiotic methodologies to choose from, however, I find it useful from time to time to clarify the precise semiotic approach that is presented and modeled in Signs of Life: hence the title and topic of this blog.

 

To begin with, the methodology of Signs of Life reflects a synthesis of some of the most effective elements to be found within the broad history of semiotic theory. To describe that synthesis, I need to briefly sketch out just what history I am referring to. It begins, then, with Roman Jakobson.

 

Arguably the most commonly known approach to technical semiotics, Jakobson's ADDRESSER – MESSAGE – ADDRESSEE schema has constituted a foundation for generations of semioticians. A fundamentally formalistic approach to communications theory as a whole, Jakobson's model was modified by Stuart Hall, who introduced a political dimension into the equation with his notion of "dominant," "negotiated," and "oppositional" readings of cultural texts (like television programs)—readings that either completely accept, partially accept, or completely challenge the intended message of the addresser. In essence, both Jakobson's and Hall's views are involved in the Signs of Life synthesis.

 

Before getting to a more precise description of that synthesis, however, I need to describe the role of three other major pioneers of semiotic thinking. The first of these figures is Ferdinand de Saussure, whose description of the constitutional role of difference within semiological systems underlies the fundamental principle in Signs of Life that the "essential approach to interpreting signs of popular culture is to situate signs within systems of related semiotic phenomena with which they can be associated and differentiated" (13; n.b.: the principle of association is not explicit in Saussure, but is implicit in his notion of the conceptual "signified").

 

The second pioneer is Roland Barthes, whose notion of semiotic mythologies underpins the ideological component of cultural semiotic analysis that Signs of Life explores and teaches.

 

The third essential figure in the synthesis is C.S. Peirce, whose sense of the historicity of signs, along with his philosophical realism, has provided me with an antidote to the tendency towards ahistorical formalism that the tradition of Saussure has fostered. And it was also Peirce who introduced the principle of abduction (i.e., the search for the most likely interpretation in the course of a semiotic analysis) that is critical to the methodology that is described and modeled in Signs of Life.

 

I will now introduce into the mix two new terms which, to the best of my knowledge, are my own, and are to be found in the 9th edition of Signs of Life. These are "micro-semiotics" and "macro-semiotics." The first of these terms describes what we do when we set out to decode any given popular cultural phenomenon—like an advertisement or a television program. In this we more or less follow Jakobson, analyzing the addresser's message as it was intended to be decoded. The macro-semiotic dimension, on the other hand, builds on the micro-semiotic reading to take it into the realm of cultural semiotics, where Hall, Saussure, Barthes, and Peirce all come into play, with Hall and Barthes leading the way to oppositional (and even subversive) re-codings of cultural texts, while Saussure and Peirce give us the tools for doing so, as briefly described above in this blog.

 

Now, if you are unfamiliar with Signs of Life in the U.S.A., all this may sound rather too complicated for a first-year writing textbook, and I can attest to the fact that when its first edition was in development, the folks at what was then simply called Bedford Books were plenty nervous about the whole thing. But while there are a few technical points directly introduced in the book in the interest of explaining as clearly as possible exactly how a semiotic interpretation is performed, the text is not inaccessible—as the existence of nine editions, to date, demonstrates. The point, for the purpose of this blog, is that the semiotic method, as synthesized in Signs of Life, has a solid and diverse pedigree, which is something that you could always explain to any student who may wonder where all this stuff came from.

Jack Solomon

War Everlasting

Posted by Jack Solomon, Jan 18, 2018


 

In "The Myth of Superman," the late Umberto Eco's pioneering essay on the semiotics of superheroes, a useful distinction is drawn between the heroes of myth and those of the traditional novel. What Eco points out is the way that mythic heroes are never "used up" by their experiences in the way that novelistic heroes are. The narrator, say, of Great Expectations is a different man at the end of his story than he was at the beginning (this, of course is Dickens' point), and if a sequel were to be written, the Pip of that novel would have to show the effects of time and experience that we see in the original tale. Superman, on the other hand (and the mythic heroes like Hercules that he resembles) is the same person from adventure to adventure, not taking up where he left off but simply reappearing in new story lines that can be multiplied indefinitely.

 

As I contemplate the appearance of yet another installment in the endless Star Wars franchise (along with the equally endless stream of superhero sagas that dominate the American cinematic box office), however, I can detect a certain difference that calls for a readjustment of Eco's still-useful distinction. And since differences are the key to semiotic understanding, this one is worth investigating.

 

All we have to do to see this difference is to consider the casting of Mark Hamill and the late Carrie Fisher in Star Wars: The Last Jedi. Of course, part of the reason for this was simply marketing: nostalgia is a highly effective ticket seller. But when we associate this movie with other action-adventure films whose heroes can be seen to be aging in ways that they have not before (the Batman and James Bond franchises are especially salient in this regard), another, much more profound significance emerges. This is the fact that while the characters in today's most popular designed-to-be-sequelized movies are coming to resemble the characters of conventional novels (as Eco describes them), the situations they find themselves in remain more or less the same. Quite simply, they are forever at war.

 

To see the significance of this, consider the plot trajectory of the traditional war story. Such stories, even if it takes a while for them to come to a conclusion, do eventually end. From the Homeric tradition that gives us the ten years of the Trojan War (with another ten years tacked on for Odysseus to get home) to The Lord of the Rings, the great wars of the storytelling tradition have a teleology: a beginning, a middle, and an end, as Aristotle would put it. But when we look at the Star Wars saga (especially now that Lucas has sold the franchise to Disney), or the Justice League tales, or (for that matter) The Walking Dead, we find provisional, but never final, victories. Someone (or something), somewhere, will be forever threatening the world of the hero, and the end is never in sight. It is violent conflict itself that is never "used up."

 

There are a number of ways of interpreting this phenomenon. One must begin with the commercial motivation behind it: killing off the war would be tantamount to killing the golden goose of fan demand, and no one holding onto a valuable movie franchise is going to want to do that.

 

But while this explanation is certainly a cogent one, it raises another question: namely, why are movie fans satisfied with tales of never-ending war? In the past, it was the promise of a final victory that would carry audiences through the awful violence that served as the means to the happy ending that would redeem all the suffering that preceded it. The popularity of today's never-ending war stories indicates that the mass audience no longer requires that. The violence appears to be self-justifying.

 

Perhaps this receptiveness to tales of never-ending war simply reflects a sophisticated recognition on the part of current audiences that wars, in reality, never really do end. World War I—the "war to end all wars"—led to World War II, which led to the Korean War, and then to Vietnam. And America has been effectively at war in Afghanistan since 2001, with no end in sight. And, of course, the "war on terror" is as open-ended as any Justice League enterprise. So maybe Hollywood's visions of endless wars are simply responding to a certain historical reality.

 

I would find it difficult to argue against such an interpretation. But somehow I don't think that it goes deep enough. I say this because, after all, the purpose of popular entertainment is to be entertaining, and entertainment—especially when it comes to the genres of fantasy and action-adventure storytelling—often serves as a distraction from the dismal realities of everyday life. And so, just as during the Great Depression movie-goers flocked to glamorous and romantic films that were far removed from the poverty and deprivation of that difficult era, one might expect war movies today that offered visions of final victory—a fantasy end to war in an era of endless conflict.

 

So the successful box office formula of endless war suggests to me that audiences are entertained, not repelled, by sagas of wars without end. Interchangeable visions of heroes (I use the word in a gender neutral sense) running across desert landscapes and down starship corridors with explosions bursting behind them, simply promise more such scenes in the next installment as violence is packaged as excitement for its own sake: war as video game.

 

Which may help explain why we tolerate (and basically ignore) such endless wars as that which we are still fighting in Afghanistan.

 

Credit: Pixabay Image 2214290 by tunechick83, used under a CC0 Creative Commons License

Everyone has a secret vice, I suppose, and mine is reading online newspapers like Inside Higher Ed and The Chronicle of Higher Education—as in multiple times every day. I admit that there is something compulsive about the matter, something that goes beyond the unquestionable usefulness of such reading for someone who is both a university professor and a cultural semiotician, something, I'm afraid, that is akin to the all-too-human attraction to things like train wrecks. This might surprise anyone who does not read these news sources: after all, wouldn't one expect there to be nothing but a kind of staid blandness to higher education reporting? Tedium, not harum-scarum, would seem to be the order of the day on such sites.

 

But no, in these days when signs of the culture wars are to be found everywhere in American society, even the higher-ed news beat is not immune to the kind of squabbling and trolling that defaces so much of the Internet. The situation has gotten so bad that the editors of The Chronicle of Higher Education have discontinued the comments section for most of its news stories, while Inside Higher Ed has polled its readers as to whether it should do the same. So far, IHE has decided to continue with posting reader comments (though it just shut down the comments section responding to an article on a recent controversy at Texas State University), and although I think it would be better for the overall blood pressure of American academe to just scrap the comments section altogether, on balance I hope that that doesn't happen. Here's why.

 

Because for the purposes of cultural semiotics, the comments sections on the Internet, no matter where you find them, offer invaluable insights into what is really going on in this country. Unlike formal surveys or polls—which, though they claim scientific precision, can never get around the fact that people, quite simply, often lie to pollsters and other inquisitors—online comments, commonly posted in anonymity, reveal what their authors really think. It isn't pretty, and it can make your blood boil, but it can get you a lot closer to the truth than, say, all those surveys that virtually put Hillary Clinton in the White House until the votes were actually counted.

 

Among the many things that the comments on IHE can tell us is that the days when we could assume that what we do on our university campuses stays on our university campuses are over. Thanks to the Internet, the whole world is watching, and, what is more, sharing what it sees. This matters a great deal, because even though the sorts of things that make headline news represent only a very small fraction of the daily life of the aggregated Universitas Americus, these things are magnified exponentially by the way that social media work. Every time a university student, or professor, says something that causes a commotion due to an inadequate definition of the speaker's terms, that statement will not only be misconstrued, it will become the representative face of American academia as a whole—which goes a long way towards explaining the declining levels of trust in higher education today that are now being widely reported. This may not be fair, but all you have to do is read the comments sections when these sorts of stories break, and it will be painfully clear that this is what happens when words that mean one thing in the context of the discourse of cultural studies mean quite something else in ordinary usage.

 

Linguistically speaking, what is going on is similar to the days of deconstructive paleonymy: that is, when Derrida and de Man (et al.) took common words like "writing" and "allegory" and employed them with significantly different, and newly coined, meanings. This caused a lot of confusion (as, for example, when Derrida asserted in Of Grammatology that, historically speaking, "writing" is prior to "speech"), but the confusion was confined to the world of literary theorists and critics, causing nary a stir in the world at large. But it is quite a different matter when words that are already loaded with socially explosive potential in their ordinary sense are injected into the World Wide Web in their paleonymic one.

Another part of the problem lies in the nature of the social network itself. From Facebook posts that their writers assume are private (when they aren't), to Twitter blasts (which are character-limited and thus rife with linguistic imprecision), the medium is indeed the message. Assuming an audience of like-minded readers, posters to social media often employ a kind of in-group shorthand, which can be woefully misunderstood when read by anyone who isn't in the silo. So when the silo walls are as porous as the Internet can make them, the need for carefully worded and explained communications becomes all the more necessary. This could lead to lecture-like, rather boring online communication, but I think that this would be a case of boredom perpetrated in a good cause. The culture wars are messy enough as they are: those of us in cultural studies can help by being as linguistically precise, and transparent, as we can.

So Thor is back, hammering his way to another blockbusting run at the box office. But this time, it's almost as if the producers of Thor: Ragnarok read an analysis I posted to this blog on November 11, 2013, when Thor: The Dark World appeared, because some interesting things have happened to the franchise this time around that seem to be in reaction to what I argued back then. So let's have a look first at what I said in 2013, before turning to the present. Here's what I said then:

 

Well, the dude with the big hammer just pulled off the biggest box office debut for quite some time, and such a commercial success calls for some semiotic attention.

 

There is an obvious system within which to situate Thor: The Dark World and thus begin our analysis. This, of course, is the realm of the cinematic superhero, a genre that has absolutely dominated Hollywood filmmaking for quite some time now. Whether featuring such traditional superheroes as Batman, Spider-Man, and Superman, or such emergent heavies as Iron Man and even (gulp!) Kick-Ass, the superhero movie is a widely recognized signifier of Hollywood’s timid focus on tried-and-true formulae that offer a high probability of box office success due to their pre-existing audiences of avid adolescent males. Add to this the increasingly observed cultural phenomenon that adulthood is the new childhood (or thirty is the new fourteen), and you have a pretty clear notion of at least a prominent part of the cultural significance of Thor’s recent coup.

 

But I want to look at a somewhat different angle on this particular superhero’s current dominance that I haven’t seen explored elsewhere. This is the fact that, unlike all other superheroes, Thor comes from an actual religion (I recognize that this bothered Captain America’s Christian sensibilities in The Avengers, but a god is a god). And while the exploitation of their ancestors’ pagan beliefs is hardly likely to disturb any modern Scandinavians, this cartoonish revision of an extinct cultural mythology is still just a little peculiar. I mean, why Thor and not, say, Apollo, or even Dionysus?

 

I think the explanation is two-fold here, and culturally significant in both parts. The first is that the Nordic gods were, after all, part of a pantheon of warriors, complete with a kind of locker/war room (Valhalla) and a persistent enemy (the Jotuns, et al.) whose goal was indeed to destroy the world. [That the enemies of the Nordic gods were destined to win a climactic battle against Thor and company (the Ragnarok, or Wagnerian Götterdämmerung) is an interesting feature of the mythology that may or may not occur in a future installment of the movie franchise.] But the point is that Norse mythology offers a ready-made superhero saga to a market hungering for clear-cut conflicts between absolute bad guys whose goal is to destroy the world and well-muscled good guys who oppose them: a simple heroes vs. villains tale.

You don’t find this in Greek mythology, which is always quite complicated and rather more profound in its probing of the complexities and contradictions of human life and character.

 

But I suspect that there is something more at work here. I mean, Wagner, the Third Reich’s signature composer, didn’t choose Norse mythology as the framework for his most famous opera by accident. And the fact is that you just don’t get any more Aryan than blonde Thor is (isn’t it interesting that the troublesome Loki, though part of the Norse pantheon too, somehow doesn’t have blonde hair? Note also in this regard how the evil Wormtongue in Jackson’s The Lord of the Rings also seems to be the only non-blonde among the blonde Rohirrim). The Greeks, for their part, weren’t blondes. So is the current popularity of this particular Norse god a reflection of a coded nostalgia for a whiter world? In this era of increasing racial insecurity as America’s demographic identity shifts, I can’t help but think so.

 

OK, so that was then; what about now? Let's just say that the "white nationalist" march at Charlottesville has clearly brought out into the open what was still lurking on the margins in 2013, and I would hazard a guess that a good number of the khaki-clad crew with their tiki torches and lightning bolt banners were (and are) Thor fans. So I'll stand by my 2013 interpretation. And as for the most recent installment in the Thor saga, well, I can almost see the producers of Thor: Ragnarok having the following pre-production conversation:

 

Producer 1: The semioticians are on to us.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: I've got it: let's give Thor a haircut this time, and, you know, brown out those blonde tones!

 

Producer 1: Good, but not good enough.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: Tessa Thompson is available to play Valkyrie.

 

Producer 1: Good, but not good enough.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: Idris Elba is available too.

 

Producer 1: Good, but not good enough.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: You do know that Taika Waititi is a Jewish Maori, don't you, and that he's available too?

 

Producer 1: I see a concept here.

 

Producer 2: Oh goodie, campy superheroes!

 

Producer 3: And surely no one will object to Jeff Goldblum playing one of the evil Elders of the Universe, because surely no one remembers the anti-Semitic forgery "Protocols of the Elders of Zion" that Hitler made such use of.

 

Producer 1: We didn't hear that.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: We’ll paint a blue stripe on Jeff's chin. No one will make the connection.

 

Producer 1: It’s a wrap!

 

I rest my case.

In my last blog (Signs of Life in the U.S.A.: A Portrait of the Project as a Young Book) I indicated that I might tell the story of the various book covers that have been used for Signs of Life in the U.S.A. over the years, and, given the importance of visual imagery to cultural semiotics, I think that offering an insider view of how book covers get created might be useful to instructors of popular culture. So here goes.

 

Anyone who has followed the cover history of Signs of Life knows that Sonia and I have always eschewed the use of celebrity images—a common cover strategy that suggests that popular culture is all about entertainment icons. Since one of the main theses of Signs of Life is that popular culture is a matter of everyday life, of the ordinary along with the extraordinary, we wanted to find a cover image for our first edition that would semiotically convey this message even before its readers opened the book to see what was inside. At the same time, Sonia and I liked the practice of using established works of art for book covers, and figured that there would be a wealth of Pop Art choices to choose from.

 

Well, there certainly was a lot of Pop Art to consider, but we were rather dismayed to find that just about all of it was—at least to our tastes—off-putting (“repulsive” would be a better word for the often garish, erotic, and/or just plain ugly works we found), and we didn't want such stuff on the cover of our book. But then we found a perfect image from a well-known Pop Art painter named Tom Wesselmann, whose Still Life #31—featuring an image of a kitchen table with some apples, pears, a TV set, a view of an open countryside outside a window, and a portrait of George Washington—seemed just right for our purposes. So discovered, so done. We had our first cover.

 

Thus, things were easy when it came to the second edition: we simply looked for more Wesselmann, and this time we found Still Life #28, a painting that is quite similar to Still Life #31, though the color scheme is different, and Abraham Lincoln takes the place of George Washington. There’s even a cat on the cover. Cover number 2 was in the bag.

 

Between the first and second editions of Signs of Life, however, Sonia and I also published the first edition of California Dreams and Realities, for which we used one of David Hockney’s Pearblossom Highway paintings (#2). This ruled out using something from Hockney for the third edition of Signs of Life (we wanted Hockney again for the second edition of California Dreams), so when it came time to create the new cover we suggested another Wesselmann. Our editor disagreed: it was time for something new—which made sense because we did not want to give the impression that the third edition was the same as the first two. Each edition is much revised. So this time the art staff at Bedford designed a cover that featured a montage of images that included a white limousine, a yellow taxi, a cow, a highway, images from the southwestern desert, an electric guitar (a Parker Fly, by the way), the San Francisco skyline, the Capitol Dome in Washington D.C., the Statue of Liberty, two skyscrapers standing together, a giant football, a giant hamburger, a Las Vegas casino sign, and a blue-sky background with billowing white clouds. A bit too cluttered for my taste, but good enough, though it was upsetting to realize, after the September 11 attacks, that those two skyscrapers were the World Trade Center.

 

By the time the fourth edition came around, Bedford had chosen a motif that would be repeated, in variations, for the next five editions: linear arrangements of individual images displayed in a single Rubik's-Cube-like block (edition #4), in rows with brightly colored dots interspersed (edition #5), in rows without dots (edition #6), in an artwork by Liz West featuring a brightly colored square filled with squares (edition #7), and in rows of tiny images of the artist's (Simon Evans) personal possessions (edition #8). Everyday life in boxes, so to speak.

 

Which takes us to the ninth edition. When Sonia and I were shown the cover art for the first time, we could see that the Bedford art department had abandoned the images-in-rows motif to go, as it were, back to the future with an image reminiscent not of the first two covers but of a less cluttered revival of the third. It’s nice to see Lincoln back, along with a Route 66 sign that echoes Hockney’s Route 138 highway marker in the Pearblossom series. And there is a lot of blue sky to add a measure of natural serenity to the scene. I'm quite fond of natural serenity.

 

So, you see, a lot of thought goes into cover design (and I haven't even mentioned the two proposed covers that Sonia and I flat out rejected).  For while, as the old saying has it, you can't judge a book by its cover, you can use the cover of Signs of Life as a teaching tool, something to hold up in class and ask students to interpret, image by image, the way one would interpret a package. Because, in the end, a book cover is a kind of package, something that is at once functional (it holds the book together and protects its pages) and informational (it presents a sense of what is inside), while striving (at least in our case) to be as aesthetically pleasing as possible. It wraps the whole project up, and is something I will miss if hard-copy books should ever disappear in a wave of e-texts.

 

[Cover images: the 8th edition and the new 9th edition of Signs of Life in the U.S.A.]

The arrival of the authors' copies of the ninth edition of Signs of Life in the U.S.A. prompts me to reflect here on the history of this—at least for Sonia Maasik and myself—life-changing project. So I will do something a little different this week, and return to the original purpose of the web-log, which was to write something along the lines of a traditional journal or diary entry rather than an interpretive essay—a remembrance of things past in this case.

To begin with, Signs of Life did not begin its life as a textbook. Its origins lie in a book I wrote in the mid-1980s: The Signs of Our Time: Semiotics: The Hidden Messages of Environments, Objects, and Cultural Images (1988). That book was a product of pure contingency, even serendipity. I was seated at my departmental Displaywriter (an early word processor that was about the size of a piano and used eight-inch, truly floppy disks) completing my final draft of Discourse and Reference in the Nuclear Age (1988)—a technical critique of poststructural semiotics that proposed a new paradigm whose theoretical parameters underlie the applied semiotic lessons to be found in Signs of Life—when my department chair drifted by and casually asked me if I would like to talk to a local publisher whom he had met recently at a party and who was looking for someone to write a non-academic book on semiotics for a non-academic audience. As a young professor, I was ready to jump at any book-publishing opportunity, and, having found myself doing a lot of spontaneous interpretations of the popular culture of the 1980s (especially of stuffed toys like Paddington Bear and the celebrity Bear series—anyone remember Lauren Bearcall?), I was ready with a book proposal in no time. I soon had a contract, an advance (with which I purchased an early Macintosh computer that didn't even have a hard drive—it still works), and a tight deadline to meet (that's how things work in the trade book world). And that's also how Discourse and Reference and The Signs of Our Time came to be published in the same year.

 

A few years later, Sonia discovered that composition instructors were using The Signs of Our Time as a classroom text, and I found that chapters from the book were being reprinted in composition readers (the first to do so was Rereading America 2/e). So Sonia had a brainstorm: having worked with Bedford Books on other projects, she suggested that we propose a new composition textbook to Bedford based upon The Signs of Our Time. Looking back, it looks like a pretty obvious thing to have done, but this was the early 1990s, and America was hotly embroiled in the academic version of the "culture wars"; not only was the academic study of popular culture still controversial, but no one had attempted to bring semiotics into a composition classroom before. Still, Chuck Christensen—the founder of Bedford Books—who was always on the lookout for something both daring and new, was interested. He also wanted to know if I could provide a one-page description of what semiotics was all about. So ordered, so done, and we had a contract for a composition reader that would combine a full writing instruction apparatus with an array of readings, alongside unusually long chapter introductions that would both explain and demonstrate the semiotic method as applied to American popular culture.

 

That part of the matter was unusually smooth. But there were bumps in the road on the way to completion. For instance, there was our editor's initial response to our first chapter submissions. Let's just say that he was not enamored of certain elements in my expository style. But thanks to a long long-distance phone call we managed to clear that up to our mutual satisfaction. And the good news was that Bedford really wanted our book. The bad news was that they wanted it published by January 1994—a good deal less than a year away, and we were starting practically from scratch. It was published in January 1994 (just in time for the big Northridge earthquake that knocked my campus to the ground). I still don't know how Sonia and I did it (the fact that we said "yes" to Chuck's invitation to do another book—it became California Dreams and Realities—in that same January, giving us six months to do it this time, simply boggles my mind to this day, but, as I say, we were a lot younger then).

 

Well, all that was a quarter of a century ago. In that time we have improved upon every prior edition of Signs of Life, not only listening to the many adopters of the text who have reviewed it over the years during the development stage of each new edition, but also adding changes based upon our own experiences using it in our own classes. Of these changes, the most important to me are the ongoing refinements of my description of the semiotic method—the unpacking of the often-intuitive mental activity that takes place when one interprets popular cultural phenomena. There is an increasingly meta-cognitive aspect to these descriptions, which break down into their component parts the precise details of a semiotic analysis—details that effectively overlap with any act of critical thinking. And, of course, every new edition responds to popular cultural events and trends with updated readings, updated chapter introductions that introduce fresh models of semiotic analysis, and the introduction of new chapter topics altogether. And in the case of the 9th edition, we have added plenty of material for instructors who may want to use the 2016 presidential election as a course theme or topic. But perhaps the most important refinements for those who adopt the text are those that Sonia brings to each new edition: the expansion and clarification of the writing apparatus in the text that guides students in the writing of their semiotic analyses.

 

As I draw to an end here, I realize that I could write an entire blog just on the history of the covers for Signs of Life. Maybe I will in my next blog entry.

Jack Solomon

Coping Without Catharsis

Posted by Jack Solomon, Oct 12, 2017

It's beginning to feel like every time I sit down to write this bi-weekly blog of mine, America has just endured another calamity of such mind-numbing atrociousness that I can't simply ignore it, even as I know that there is nothing I can say that can possibly make anyone—students and colleagues alike—feel any better about it. And the massacre at the Route 91 Harvest music festival in Las Vegas has placed me in that position once again.

 

So I'm going to go ahead and address the matter analytically, but there are some things I will not do. First, I will not waste my time, or yours, demanding that America finally do something to control the spread of weapons of mass destruction to everyone who wants them, because I know perfectly well that America is not going to do anything of the kind. Second, I'm not going to try to explain why nothing is going to happen, because it would be entirely futile to do so. Suffice it to say that we all know the script: the political rituals that follow upon every one of these atrocities, and the way that those rituals invariably play out as they do. Third, I'm not going to blame "the media" for the carnage; that, too, is a common, though by no means illegitimate, part of the post-massacre script, as this essay in Inside Higher Ed demonstrates once again. And finally, I'm not going to blame the high level of violence in popular culture for the high level of violence in everyday life—though that, too, is a not-unworthy subject for careful, data-driven analysis. Rather, I am going to look at the difference between the typical (and conventional) narrative to be found in violent entertainment, and the formless anomie to be found in the seemingly endless string of massacres in schools, movie theaters, night clubs, music festivals, and heaven knows what other sites, that plague our days and nights today.

 

Consider, then, the typical narrative of violent entertainment. Reduced to its most basic structure, it involves a victim (or victims), a villain (or villains), and a savior (or saviors). The story—whether told in the generic form of horror, or murder mystery, or thriller, or war epic, or superhero saga, or sword-and-sorcery fantasy, or whatever—tells the tale of how the villain is, in some way or another, opposed by the savior, and, usually, stopped (even when the story is open-ended, which is not infrequent in contemporary entertainment, there is usually some heroic figure, or figures, to identify with, who at least provides a model of sanity amidst the mayhem). This is what stories conventionally do: they give shape to the horrors of existence and give them a kind of meaning that Aristotle called "catharsis." When the detective catches the killer, the vampire slayer drives the stake through the monster's heart, the evil empire is defeated, the wicked witch is dissolved or the evil sorcerer vaporized, the bad king is dethroned (or de-headed: Macbeth is part of this system as well), and so on and so forth, the audience overcomes its pity and terror, and, to put it as plainly as possible, feels better.

 

But this is exactly what does not happen when someone, who has been living among us—and who, having shown no signs of madness or murderousness, has plotted his massacre completely under the radar of law enforcement—suddenly cuts loose. More often than not, now, he also kills himself. And we are left with nothing but the carnage: there is no wily detective, no heroic hobbit, no boy wizard, no man/woman in spandex, no warrior, no secret agent, no martial arts expert, nor any kind of savior at all: just the sorry spectacle of missed opportunities on the part of those we rely on to protect us—from the police to the politicians—and an almost total lack of understanding of why the carnage occurred at all. I realize that the heroic acts of victims and first-responders on the ground in such cases can help mitigate the horror, but it is all too after-the-fact for any real comfort when we know that it is all going to happen again. This is the reality of real-life horror, and there is no redemptive narrative in sight.

One of the most common objections from students whose instructors use popular culture as a basis for teaching writing and critical thinking skills in their classes is that it (pop culture) "is only entertainment," and that any attempt to think critically about it is "reading something into it" that isn't there. Well, I think that the results of the latest round of Emmy Awards should finally put an end to any such complaints, because the sweeping triumphs of The Handmaid's Tale and Saturday Night Live have made it quite clear that the entertainment industry is now a direct participant in American politics.

 

This is a point that has been stated explicitly in every edition of Signs of Life in the U.S.A. (including, of course, the 9th edition, due out in a couple of weeks), in which students are taught that the traditional line between entertainment and everyday life has been so diminished that it could be said that we live in an "entertainment culture," in which all of our activities, including the political process, are required to be entertaining as well.  The blurring of this line does not simply refer to entertainers who have become successful politicians (like Ronald Reagan, Al Franken, and, um, Donald Trump), but to the way that television shows like Saturday Night Live and The Daily Show have become major players in American electoral politics.

 

Lest the recent results at the Emmys give the idea that the politicization of entertainment is a one-way street, navigated solely by entertainments and entertainers on the left, the same thing is going on on the right as well, and this is something that cultural analysts often miss, simply because those entertainers do not tend to be part of the taste culture of cultural analysts. Of course, it isn't only cultural analysts who have neglected the place of what I'll call the "ent-right" in American politics: by relying virtually exclusively on the support of entertainers like Beyoncé and Lena Dunham—not to mention the crew at SNL and Jon Stewart—Hillary Clinton completely miscalculated the power of those entertainers who appeal to the voters who voted for Donald Trump. The results of this miscalculation are hardly insignificant.

 

To give you a better idea of just how American entertainment is now dividing along political lines, I'll provide a link to a New York Times feature article that includes fifty maps of the United States showing geographically which television programs are viewed in which regions of the country. Referred to as a "cultural divide" in the article, what is revealed is equally a political divide. So striking are the differences in television viewership that it would behoove future presidential election pollsters to ask people not who they are going to vote for (a question that the 2016 election appears to demonstrate is one that people do not always answer honestly) but which television programs they watch (or what kind of music they listen to, etc. Who knows what the outcome of the 2016 election would have been if Hillary Clinton had had a prominent country music icon on her side?).

In short, popular cultural semiotics isn't merely something for the classroom (though it can begin there); it is essential to an understanding of what is happening in this country and of what is likely to happen.  And one has to look at everything, not only one's own favorite performers.  Because the purpose of analyzing entertainment is not to be entertained: it is to grasp the power of entertainment.

Jack Solomon

They're Ba-ack!

Posted by Jack Solomon Expert Sep 14, 2017

Creepy clowns are back, and Hollywood is counting on them to deliver big box office after what appears to have been a slow summer for the movie industry—at least according to the L.A. Times. I've visited this territory before in this blog, but between the recent release of It, the cinematic version of the Stephen King novel of the same name, and all the recent hoopla over Insane Clown Posse and their "Juggalo" followers, I thought the topic merited a second look.

If you've never heard of Insane Clown Posse, and think that Juggalos must be some sort of children's breakfast cereal, you're forgiven. This is one of those many corners of popular culture that young folks always seem to be in on but that tends to fly under the radar for the rest of us. Not that Insane Clown Posse is anything new: they're a rap act that has been around since 1989, specializing in a genre called "horrorcore"—think Marilyn Manson meets Twisty the Clown. And Juggalos are horrorcore fans who follow performers like Insane Clown Posse around and hold mass-participation events of their own—think Gothicized Deadheads in creepy clown suits at a Trekkie convention.

So what is it with It, and all this clown stuff?  What is the significance of this fad that appears to be edging into a trend?  Well, to begin with, it's less than sixty shopping days till Halloween, so that's part of the explanation—according to the First Law of Popular Culture (which I have just invented): viz., A fad that has made money will continue to be milked for more money until it is obliterated by a new fad that makes it look hopelessly outdated while retaining its essential appeal.  Applied to the present instance, we might say that just as zombies flocked in where vampires began to fear to tread a few years ago, creepy clown stock appears to be rising now that zombies are beginning to look rather old hat.  But is there anything more to it all?

In attempting to widen the semiotic system in which we can situate the creepy clown phenomenon in order to interpret it, I've found myself considering the peculiar similarities between the Juggalos of today and the Skinheads of yore.  Interestingly, both have working-class origins, along with highly stylized fashion codes and preferences for certain kinds of music (of course, this is true for just about any popular cultural youth movement).  More significantly, both have divided into what might be called malignant and benign camps.  That is to say, one set of Juggalos is at least accused of having the characteristics of a street gang, while the other appears to be as harmless as run-of-the-mill cosplayers.  Similarly, while the classic Skinhead liked to toy around with neo-Nazi and other fascist displays, an offshoot of the movement—sometimes referred to as "anti-racist" Skinheads—has adopted the fashion-and-music tastes (more or less) of fascistical Skinheads while embracing an anti-fascist ideology. 

All this gets me thinking, because if we expand the system we can find two other features that the creepy clown phenomenon—along with its Juggalo cohorts—shares with the Skinheads: an obsession with costumed role playing mixed with a fascination with violence (even if only in play), whether in the form of horror (Juggalos) or of hob-nailed mayhem (Skinheads). In this respect (costume drama-cum-cruelty), we may as well include Game of Thrones in the system, for here too we find elaborate costuming wound round a mind-numbing level of violence. It's as if Harry Potter grew up to become a warlord.

Well, so what? If popular culture appears to be filled with elaborate expressions of violent cosplay, it's just play-pretend, isn't it, a distraction from the horrors, or boredom, of everyday life—what Freud called "civilization and its discontents"? And Stephen King is hardly alone in making a fortune off the perennial appeal of Grand Guignol.

But then I start thinking about the violence-obsessed costume drama that took place on the campus of the University of Virginia, where khaki-clad, polo-shirted crowds of young men marched, torches in hand, in a studied recreation of Hitler's brown-shirt demonstrations. Was this some sort of political cosplay, a "let's play at Nazis" display for those in the crowd who weren't "official" members of the Klan and the American Nazi Party? I really don't know. I'm not sure that anyone knows just how many genuine Nazis there are in the country, as compared with the play actors who get a kick out of trolling their classmates. But playing at horror has a way of familiarizing it, of moving it from the fringe to the center, and I can only hope that we haven't reached the point where the line between play-pretend and deadly earnest has become so blurred that the true horrors descend upon us.

Last spring I left off in this blog with an exploration of what I called "The Uses of Objectivity." That essay probed the inadvertent relationship between poststructural theory and the current climate of "alternative facts" and "post-truth" claims. Since then I've run across an essay in The Chronicle of Higher Education that could have been written in response to mine, and though it actually wasn't, I'd like to continue the discussion a bit here.

The Chronicle essay I'm referring to here is Andrew J. Perrin's "Stop Blaming Postmodernism for Post-Truth Politics." That's an easy request to honor: certainly the supporters of such alt-fact politicians as Donald Trump can hardly be expected to have been influenced by—much less to have read—the texts of contemporary postmodern theory. So by all means let's take postmodernism off the hook in this regard. The question is not how postmodernism has affected what is often referred to as the "populist" politics of Trumpism; the question is how educators can best contest, in the classroom, the contentions of the post-truth world. My position is that educators who wish to do so would do well not to deconstruct, in a postmodern fashion, the fundamental grounds for things like scientific consensus. Perrin, for his part, feels that we need more postmodernism in the post-truth era, because it exposes the ways in which "all claims, beliefs, and symbols are tied up with the structures of power and representation that give rise to them."

Now, the originator of this postmodern approach to power/knowledge was, of course, Michel Foucault.  It is central to his entire notion of "discourse," which itself descended from his essentially poststructural (poststructuralism is an academic species of the larger cultural genus postmodernism) adaptation of the structuralist position that reality (and the knowledge thereof) is constructed by systems of signs.  That is to say, the signified, in the structuralist view, is not something detected outside the sign system: it is constituted by the sign system.  From here it is not a very large step to the poststructural position that whoever controls the sign system controls what counts as "reality," as "truth" itself. 

There is certainly no shortage of historical instances in which this vision of power/knowledge has indeed been played out.  The Third Reich, for example, rejected relativity theory as "Jewish physics," and that was that as far as Germany was concerned.  George Orwell, for his part, gave dramatic expression to this sort of thing in 1984: 2+2=5 if Big Brother says so.

Thus, it comes down to a simple question. What is the more effective response to the post-truth claim, for example, that climate science is a hoax: the position that all scientific claims are expressions of power/knowledge, or the position that concrete empirical evidence gets us closer to the truth of climate change than do the claims of power? This is not a rhetorical question, because I do not suppose that everyone will agree with my own answer to it, which happens to be as simple as the question itself: I prefer to oppose power/knowledge with objectively measurable data. For me, reality is not subject to a referendum.

Interestingly, the late Edward Said—who helped put Foucault on the American literary-critical map in his book Beginnings—came to identify another problem with postmodern power theory when he criticized Foucault for effectively denying the element of human responsibility in power relations by treating power as a nebulous "formation" that is expressed socially and historically rather than wielded by empowered individuals (a poststructural view of power that parallels the structuralist position on the relationship between langue and parole). Such a view could provide support for the many voters who sat out the 2016 presidential election in the belief that both major parties expressed the same neoliberal, capitalist power formations. I think the aftermath of that election makes it pretty plain that individuals do wield power, and in different ways, no matter what the larger power/knowledge formation of the moment may be.

And just as interestingly, as I was putting the finishing touches on this blog, an essay by Mark Lilla appeared in The Chronicle of Higher Education saying substantially the same thing: if students accept "the mystical idea that anonymous forces of power shape everything in life," they "will be perfectly justified in withdrawing from democratic politics and casting an ironic eye on it." Now, two humanities professors in agreement doth not a movement make, but it's heartening to see that my thoughts are shared by someone else.

Jack Solomon

The Uses of Objectivity

Posted by Jack Solomon Expert Jun 15, 2017

I take my title, and topic, for my last blog before the summer break from two pieces appearing in today's (as I write this) online news. One, John Warner's essay "The Pitfalls of 'Objectivity,'" appears in Inside Higher Ed, and the other is a news feature in The Washington Post on the prison sentencing of a Sandy Hook hoax proponent who sent death threats to the parents of one of the children murdered at the Connecticut elementary school. I'll begin with John Warner's essay.

Warner is a blogger for Inside Higher Ed whose blog, "Just Visiting," describes his experiences as an adjunct writing instructor. As a voice for the much-beleaguered, and ever-growing, class of adjunct writing professors in this country, Warner is very popular: his columns consistently garner far and away the most commentary (almost always positive) of any blog on the news site, often from grateful instructors who are justifiably glad to see someone expressing their point of view in a prominent place for once. Heck, Warner gets more comments on each post than I have gotten in all the years I have been writing this blog, so it's hard to argue with success.

But in this era when "fake news" and "alternative facts" have come to so dominate the political landscape, I feel obliged to respond to Warner's thesis, which is that "One of the worst disservices the students I work with have experienced prior to coming to college is being led to believe that their writing – academic or otherwise – should strive for 'objectivity.'" Warner's point—which, as a central tenet of cultural studies generally, and of the New Historicism in particular, is not a new one—is that "there is no such thing as purely objective research." This position cannot be refuted: writing and research not only contain subjectivity, they begin in it. Even scientific investigation starts with a hypothesis, a conjecture, a subjective cast into an ocean of epistemic uncertainty. And if one really wants to press the point, there has never been a successful refutation of the fundamental Kantian position that knowledge is forever trapped in the mind, that we know only phenomena, not noumena.

So the question is not whether subjectivity is an inevitable part of writing, thinking, and arguing. Rather, the question is whether we really want to throw out the objective baby with the bathwater, which is what I think happens when Warner argues that "Strong writing comes from a strong set of beliefs, beliefs rooted in personal values. Those underlying values tend to be relatively immutable." And that takes us to the Sandy Hook hoax community.

To put it succinctly, the Sandy Hook hoaxers believe that the massacre at Sandy Hook Elementary School was a "false flag" that either never took place at all or was perpetrated by the Obama administration (there are various claims in this regard), staged in order to justify the seizure of Americans' guns. The hoaxers have written at length, and with great passion, about this, producing all sorts of "facts" (in the way of all conspiracy theorists). One could say that their texts come "from a strong set of beliefs . . . rooted in personal values . . . that tend to be relatively immutable." And there's the problem.

Now, Warner is hardly promoting conspiracy theorizing or an unshakable attachment to immutable beliefs. For him, "An effective writer is confident in communicating their beliefs, while simultaneously being open to having those beliefs challenged and then changed as they realize their existing beliefs may be in conflict with their values." But the problem is that without objective facts, a contest of beliefs is only that, a contest, with no basis for settling the debate. You don't like the facts? Shout "fake news!" and produce your own "alternative facts." I'm sure you see where this is heading.

As with the legacy of poststructuralist thinking that I have often written about in this blog, Warner's apparently generous and liberal approach to writing leads to unintended results. By undermining our students' acceptance of the existence of objective facts—and of the objectivity required to pursue them—we end up reinforcing a political environment in which hostile camps hole up in echo chambers of shared belief and simply shout at each other. And while I know that we, as writing instructors, can't end that—any more than we can come up with a final refutation of Kantian and poststructuralist subjectivism—if we really want to do our bit to resist the current climate of "fake news" claims, we should be encouraging our students to see the dialectic of subjectivity and objectivity, the complex ways in which the two can complement each other. It isn't easy, and there is no ready formula for doing so, but simply denigrating objectivity to our students is not going to help us, or them.