Bedford Bits
This post was originally published on December 20, 2012.

 

One of my students in a popular cultural semiotics seminar recently wrote her term project on the reality television “Real Housewives of . . .” phenomenon. Not being a fan of such shows myself, it took her paper to prompt me to think seriously about the whole thing. And I came to see that such shows instantiate a far more profound cultural signifier than I had heretofore realized. The following analysis represents my thinking on the matter, not my student’s.

 

As is always the case, my semiotic analysis centers on a crucial difference. The difference in question here is not simply that between the actual lives of ordinary housewives and their reality TV versions, but also that between their current television representations and those of the past. That is, not only do most actual housewives lack the wealth, glamour, and business opportunities of the “Real Housewives” of Beverly Hills, New Jersey, or wherever, but so did their television counterparts of the past. The classic TV housewife, enshrined within the history of the family sitcom, was an asexual middle-class woman who was totally focused on her children: Think June Lockhart, Jane Wyatt, and Barbara Billingsley.

 

That the glammed-up, runway-model housewives of today’s “reality” shows reflect a widespread cultural return to the conservative gender-coded precept that a woman’s value lies in her erotic appeal almost goes without saying. While a few less-than-glamorous women are cast in these programs, as if to head off criticisms of this kind, they are the exceptions that prove the rule—and even they tend to be dolled up on the program Web sites.

But this is an easy observation to make. More profound, however, is the fact that the reality TV housewife has become an object of desire for her largely female audience. Rather than being seen as a hapless drudge of patriarchy, the reality TV housewife is a vicarious role model, even when she doesn’t found her own business enterprise and simply stays at home. What caused this change in perception?

 

To answer this question, I considered the frequently reported economic fact that household incomes for the vast majority of Americans have been essentially stagnant, when adjusted for inflation, over the last four decades. Now, add to this the exponential inflation in the costs of such basic necessities as housing and transportation and you get the modern two-income family: not necessarily because both partners in a marriage want to work, but because in order to maintain a middle-class household two incomes are now more or less essential. Certainly the efforts of the women’s movement have contributed to the enormous growth of women’s participation in the workforce, but the new image of the reality TV housewife suggests that something else is at work here as well.

 

That is, with the housewife being presented as a fortunate woman who doesn’t have to work, it seems that American women are nostalgic for the “good old days” when they didn’t have to work just to maintain a middle-class home. The fantasy now is to be a housewife, not to escape the role. That’s quite a change.

 

Just how much of an effect this stagnation of incomes has had on American consciousness in general is probably one of the most important social questions of our time. Can it help explain the hostile polarization of our political landscape, our dwindling sympathy for others in an increasingly libertarian environment, the growing resentment toward middle-class workers (especially unionized workers) with decent jobs and benefits? I think so. And this will be a topic for future blogs of mine.

Jack Solomon

Building a Religion

Posted by Jack Solomon, Jun 7, 2018

As I head into the summer recess for my Bits blogs, I find myself contemplating the cultural significance of the rise and apparent fall of Theranos, the troubled biotech startup that was once heralded as a disruptive force that would revolutionize the blood testing industry, and, not so incidentally, produce a new generation of high-tech entrepreneurs to rank with Steve Jobs and Bill Gates. On the face of it, of course, this would not appear to be a topic for popular cultural analysis, but bear with me for a moment, for when it comes to the new technologies, everything relates in some way or another to the manifold currents of everyday life that popular culture expresses.

 

What has drawn my attention to Elizabeth Holmes and the Theranos saga is the publication of a book by the Wall Street Journal writer who first blew the whistle on the company in 2015: John Carreyrou's Bad Blood: Secrets and Lies in a Silicon Valley Startup. A brief synopsis of that book appeared in Wired just as it was being released, and it was a single sentence in that synopsis that really got me thinking. It appears in Carreyrou's narrative at the point when things at Theranos were beginning to unravel and various high-ranking employees were abandoning ship. In the wake of such resignations, Elizabeth Holmes allegedly summoned every remaining employee to an all-hands-on-deck meeting to demand loyalty from them. But she didn't call it loyalty: according to Carreyrou, "Holmes told the gathered employees that she was building a religion. If there were any among them who didn’t believe, they should leave."

 

Building a religion: Holmes was telling a truth that was deeper than she realized. For when we situate the story of Theranos in the larger system of post-industrial America, we can see that our entire culture has been building a religion around what Fredric Jameson has called America's postmodern mode of production. On the face of it, the object of worship in this system is technology itself, which is viewed as a kind of all-purpose savior that will solve all of our problems if we are just patient enough. Steven Pinker's new book, Enlightenment Now, makes this point explicitly, but it is implicit every time some new tech startup promises to "fix" higher education, clean up all the trash in the ocean, and use architecture to save the natural environment (see, for example, Wade Graham's "Are We Greening Our Cities, or Just Greenwashing Them?", which provides both a survey and a critique of the eco-city movement: you can find it in the 9th edition of Signs of Life in the USA). The religion of technology also produces its own demi-gods, like Elon Musk, who can announce yet another delay (or change of plans) in his money-losing product line and still see his Tesla stock rise due to the unwavering adoration of his flock.

 

Oddly enough, as I was writing the first draft of this blog I came across an essay in The Chronicle of Higher Education that examines a related angle on this phenomenon. There, in a take-down of the "design thinking" movement (an ecstatic amalgamation of a Stanford University product design program and the Esalen Institute that promises to transform higher education into a factory for producing entrepreneurially inclined "change agents"), Lee Vinsel compares the whole thing, overtly, to a religious cult, acidly remarking that the movement "has many of the features of classic cult indoctrination, including intense emotional highs, a special lingo barely recognizable to outsiders, and a nigh-salvific sense of election" —concluding that "In the end, design thinking is not about design. It’s not about the liberal arts. It’s not about innovation in any meaningful sense. It’s certainly not about 'social innovation' if that means significant social change. It’s about commercialization. It’s about making education a superficial form of business training."

 

Thus, I think that Vinsel would agree with my contention that behind the religion of technology is something larger, older, and more universal. This is, quite simply, the religion of money worship. Minting instant billionaires and driving an ever-deeper wedge between a technology-fostered one percent and everyone else, the post-industrial economy dazzles most through the glitter of gold, which overcomes every other moral value, from Facebook's willingness to allow its platform to be exploited for the purposes of overt political manipulation to Theranos's performing a million blood tests with a technology so flawed that the tests have had to be invalidated, at who knows what cost to the patients (one should say, victims) involved.

 

And what does America do in response? It makes movies, like Aaron Sorkin's The Social Network and an adaptation of John Carreyrou's own Bad Blood (a film said to star Jennifer Lawrence, due out in 2019), thus turning social anomie into entertainment and promising even more offerings on the altars of extreme affluence.

 

Image Credit: Pixabay Image 1761832 by kropekk_pl, used under a CC0 Creative Commons License

One of my all-time favorite readings from past editions of Signs of Life in the USA is Andy Medhurst's "Batman, Deviance, and Camp." In that analysis of how the original muscle-man clone of Superman morphed into "Fred MacMurray from My Three Sons" in the wake of Fredric Wertham's notorious accusation in 1955 that Batman and Robin were like "a wish dream of two homosexuals living together," only to be transformed into the Camped Crusader of the 1966 TV series Batman, and then revised once more into the Dark Knight of the 1980s and beyond, Medhurst reveals how cartoon superheroes change with the times, reflecting and mediating the cross currents of cultural history. So as I ponder the rampant success of the second Deadpool film in this emergent franchise, I find myself wondering what this new entrant into the superhero sweepstakes may signify. Surely this is a topic for semiotic exploration.

 

What particularly strikes me here is the difference between the gloomy and humorless Batman of the Miller/Burton/Nolan (et al.) era, and the non-stop wisecracking of Deadpool. It isn't that Deadpool doesn't have a dark backstory of his own, as grim as anything to be found in Bruce Wayne's CV. And, surely, the Deadpool ecosystem is even more violent than the Batworld. No, it's a matter of tone, of attitude, rather than content.

 

Now, if Deadpool were the only currently popular superhero who cracked wise all the time, there really wouldn't be very much to go on here, semiotically speaking. But Deadpool isn't the only wiseacre among the men in spandex: various Avengers (especially Thor), along with the latest incarnation of Spider-Man, have also taken to joking around in the midst of the most murderous mayhem. If the Dark Knight soared to superstar status on the wings of melancholy, a lot of rising contenders for the super-crown appear to be taking their cue from Comedy Central. Something's going on here. The question is, what?

 

I'm thrown back on what might be called "deductive abduction" here: that is, moving from a general condition to a particular situation as the most likely explanation. The general condition lies in the way that wise-cracking humor has been used in numerous instances in which a movie whose traditional audience would be restricted to children and adolescents (think Shrek) has broken through to generational cross-over status by employing lots of self-reflexive, topically allusive, and winking dialogue to send a message to post-adolescent viewers that no one involved in the film is really taking all this fantasy stuff seriously, and so it's safe, even hip, for grown-up viewers to watch it (of course, this is also part of the formula behind the phenomenal success of The Simpsons). Stop for a moment to think about the profound silliness of the Avengers movies: who (over a certain age) could take this stuff seriously? Well, the wisecracks—which are generally aimed at those who happen to be over a certain age—are there to provide reassurance that it isn't supposed to be taken seriously. Just sit back, be cool, and enjoy.

 

So, given the R-rating of the Deadpool movies, I would deduce that the almost excessive (if not actually excessive) self-reflexive, topically allusive, and winking dialogue to be found in them works to reassure an over-seventeen audience that the whole thing is just a big joke. No one is taking any of this seriously, and so it is perfectly safe to be spotted at the local cineplex watching it. Hey, there's even a postmodern inflection to Deadpool's fourth-wall dissolving monologues: what could be more hip?

 

Since most cultural phenomena are quite over-determined in their significance, I do not mean to preclude any other possible interpretations of the super wiseass phenomenon, but the interpretation I've posted here is one I feel confident of. At any rate, the topic could make for a very lively class discussion and an interesting essay assignment.

 

Image Credit: Pixabay Image 2688068 by pabloengels, used under a CC0 Creative Commons License.

The lead-in to the L.A. Times article on the Tony Award nominations really caught my attention. Here it is:

 

"'SpongeBob SquarePants,' 'Mean Girls' and 'Harry Potter and the Cursed Child.'

'Angels in America,' 'Carousel' and 'My Fair Lady.'"

 

That's exactly as it appeared, and the title of the piece—"Tony nominations for 'Harry Potter,' 'SpongeBob' and 'Mean Girls' put Hollywood center stage"—made it clear that the author was well aware of the list's significance: television and the movies appear to be taking over the Broadway stage, one of the last bastions of American live theater.

 

Now, before you get the idea that I am going to lament this development as some sort of cultural loss or desecration, let me assure you that I have no such intention. To begin with, Broadway has always occupied a somewhat liminal position in the traditional high cultural/mass cultural divide, and the stage has always been a setting for popular entertainment—albeit one that is not mediated by electronic technology. And while the article, for its part, does note that "Harry Potter and the Cursed Child" represents "just one example of the kind of pop-culture franchise that can reduce producers' financial risk," it does not do so in anger. Indeed, it even quotes a writer associated with this year's nominees' sole "art-house" production ("The Band's Visit”), who rather generously observes that "Commercial theater, and musical theater, is a really risky venture. It's very expensive. It's possible to have a great success, but it's really unlikely"; adding that "I don't blame anyone for trying to hedge against that risk by adapting a really well known property, and it's not always cynical."

 

There are two quick and easy popular-semiotic takeaways, then, from this year's Tonys. The first is that the last barriers between mass media entertainment and the more culturally prestigious (if less lucrative) traditional stage are coming down once and for all. The second is that they are coming down not on behalf of some sort of producer-led deconstruction of a vanishing high cultural/low cultural divide, but simply because very few Broadway producers are willing to take any financial risks these days and prefer to go with tried-and-true productions. And this doesn't simply mean translating blockbuster movies and TV shows to the stage: after all, revivals of "Angels in America" and "The Iceman Cometh" are also among the nominees this year.

 

But the real significance of the Tonys for me appears when we broaden the system in which to analyze the nominations to include what is happening in the movies and television as well. Here too we find revivals, reboots, sequels, prequels . . . in short, one studio or network franchise after another centered on a successful brand that never seems to run out of steam: "Avengers: Infinity War" (note the endlessness implied); "Star Wars Forever"; "Roseanne" II and "Murphy Brown" redux; and so on and so forth. What this reveals is not only a similar spirit of creative bet hedging by going with tried-and-true entertainment commodities, but also a narrowing of opportunities for creators themselves. For the message in this particular bottle is that in America success is the gift that keeps on giving. A few people (like George Lucas and J.K. Rowling) are going to rise from obscurity and hit it so big with their creative efforts that they will use up all the oxygen in the room. It isn't that there will be less creativity (with luminaries like Lucas and Rowling shining bright for innumerable self-publishing dreamers who hope to be the next meteors in the popular cultural skies, there will never be any danger of that); the problem is that there will be fewer opportunities to make such creative breakthroughs, or earn any sort of living while trying, when the stage (literally and figuratively) is filled with old brands that won't move aside for new entrants.

 

And so, finally, we come to a larger system within which to understand what is going on with the Tony Awards: America itself, where a handful of winners are vacuuming up all of the opportunity and leaving almost nothing for everyone else (George Packer eloquently describes the situation in "Celebrating Inequality," an essay you can find in the 9th edition of Signs of Life in the USA). The rewards of the American dream are bigger than they have ever been; but not only are there fewer seats at the banquet of success, the pickings are getting leaner and leaner for those who haven't been invited.

 

Credit: Pixabay Image 123398 by smaus, used under a CC0 Creative Commons License

 

Though there have been some very high-profile participants in the "movement" (can you spell "Elon Musk"?), I am not aware that the #deletefacebook movement is making much of a real dent in Facebook's membership ranks, and I do not expect that it ever will. For in spite of a seemingly continuous stream of scandalous revelations of Facebook's role in the dissemination of fake news and the undermining of the American electoral system—not to mention the way that Facebook, along with other digital titans such as Google, data-mines our every move on the Internet—all signs indicate that, when it comes to America’s use of social media, the only way is up. Even the recantations of such former social media "cheerleaders" as Vivek Wadhwa (who have decided that maybe all this technological "progress" is only leading to human "regression" after all) are highly unlikely to change anyone's behavior.

 

The easiest explanation for this devotion to social media, no matter what, is that Internet usage is addictive. Indeed, a study conducted at the University of Maryland by the International Center for Media and the Public Agenda, in which 200 students were given an assignment to give up their digital devices for 24 hours and then write about their feelings during that bleak stretch, revealed just that, with many students reporting effects that were tantamount to symptoms of drug withdrawal (a full description of this study can be found in chapter 5 of the 9th edition of Signs of Life in the USA). To revise Marx a little, we might say that social media are the opiate of the masses.

 

Given the fact that our students are likely to have lived with the Internet all of their lives, it could be difficult, bordering on impossible, for them to analyze in any objective fashion just how powerful, and ultimately enthralling, social media are. It’s all too easy to take the matter for granted. But with digital technology looming as the most significant cultural intervention of our times, passive acceptance is not the most useful attitude to adopt. At the same time, hectoring students about it isn’t the most productive way to raise awareness either. All those “Google is making America stupid” screeds don’t help at all. So I want to suggest a different approach to preparing the way for a deep understanding of the seductive pull of social media: I'll call it a "phenomenology of Facebook."

 

Here's what I have in mind. Just as in that phenomenologically influenced mode of literary criticism called "Reader Response," wherein readers are called upon to carefully document and describe their moment-by-moment experience in reading a text, you could ask your students to document and describe their moment-by-moment experience when they use social media. Rather than describing how they feel when they aren't online (which is what the University of Maryland study asked students to do), your students would describe, in journal entries, their precise emotions, expectations, anticipations, disappointments, triumphs, surprises, hopes, fears (and so on and so forth) when they are. Bringing their journals to class, they could share (using their discretion about what to share and what not to) what they discovered, and then organize together the commonalities of their experience. The exercise is likely to be quite eye opening.

 

It is important that you make it clear that such a phenomenology is not intended to be judgmental: it is not a matter of “good” or “bad”; it is simply a matter of “what.” What is the actual experience of social media usage? What is it like? What’s going on? Only after clearly answering such phenomenological questions can ethical questions be effectively posed.

 

Not so incidentally, you can join in the exercise yourself; I have, and you may be surprised at what you learn.

 

 

Credit: Pixabay Image 292994 by LoboStudioHamburg, used under a CC0 Creative Commons License

In 1971, Norman Lear and Bud Yorkin reconfigured a popular British sitcom featuring a bigoted working-class patriarch (Till Death Us Do Part) to create America's All in the Family. A massive hit, All in the Family not only topped the Nielsens for five years running but also went a long way towards mediating the racial, generational, and sexual conflicts that continued to smolder in the wake of the cultural revolution. A new kind of sitcom, All in the Family (along with other such ground-breaking TV comedies as The Mary Tyler Moore Show) provided a highly accessible platform for Americans to come to terms with the social upheavals of the sixties, thus contributing to that general reduction of tension that we can now see as characteristic of the seventies. The decade that came in with Kent State went out with Happy Days.

 

So the recent reboot of Roseanne in a new era of American social conflict is highly significant. Explicitly reconstituting Roseanne Barr's original character as an Archie Bunkeresque matriarch, the revived sitcom raises a number of cultural semiotic issues, not the least of which is whether the new Roseanne will help mediate America's current cultural and political divisions or exacerbate them.

 

In short, we have here a perfect topic for your classroom.

 

To analyze Roseanne as a cultural sign, one must begin (as always in a semiotic analysis) by building a system of associated signs—as I have begun in this blog by associating Roseanne with All in the Family and The Mary Tyler Moore Show. There are, of course, many other associations that could be made here within the system of American television (Saturday Night Live, Family Guy, and The Simpsons loom very large here), but I'll limit myself for now to the association with All in the Family because of the way that, right off the bat, it reveals an important difference—and semiotic significance is always to be found in a combination of associations and differences—that points to an answer to our immediate semiotic question.

 

This difference emerges from the well-known fact that Norman Lear was quite liberal in his politics and intended his show to be a force for progressive television, while Roseanne Barr is an outspoken conservative—a situation that has already produced a good deal of controversy. Consider C. Nicole Mason's Washington Post piece "‘Roseanne’ was about a white family, but it was for all working people. Not anymore," a personal essay that laments the Trumpist overtones of Roseanne Barr's new character. On the flip side of the equation, the new Roseanne has been an immediate smash hit in "Trump Country," scoring almost unheard-of Nielsen numbers in this era of niche TV. Pulling in millions of older white viewers who prefer the traditional "box" to digital streaming services, the show is already reflecting the kind of generational and racial political divisions that burst into prominence in the 2016 presidential election. As Helena Andrews-Dyer puts it in the Washington Post, "The ‘Roseanne’ reboot can’t escape politics — even in an episode that’s not about politics."

 

Thus, while it may be too soon to tell for certain, I think that the new Roseanne will prove to be quite different from All in the Family in its social effect. Rather than helping to pull a divided nation together, Roseanne, the signs suggest, is going to deepen the divide. I say this not to imply that television has some sort of absolute responsibility to mediate social conflict, nor to suggest that Roseanne's appeal to older white viewers is in itself a bad thing (indeed, the relative lack of such programming goes a long way towards explaining the immediate success of the show). My point is simply semiotic. America, at least when viewed through the lens of popular culture, appears to be even more deeply divided than it was in 1971. Things have not stayed the same. Roseanne isn't Archie Bunker, Trump isn't Nixon, and everyone isn't laughing.

Jack Solomon

Things Fall Apart

Posted by Jack Solomon, Mar 29, 2018

 

While there appears to be some significant doubt over whether Cambridge Analytica really had much effect on the outcome of the 2016 presidential election (Evan Halper at the L.A. Times makes a good case that it didn't), the overall story of the way that millions of Facebook profiles were mined for partisan purposes is still of profound significance at a time when digital technology seems to be on the verge of undermining the democratic process itself. As such, the Facebook/Cambridge Analytica controversy is a worthy topic for a class that makes use of popular culture in teaching writing and critical thinking.

 

If you happen to be using the 9th edition of Signs of Life in the U.S.A., you could well begin with John Herrman's "Inside Facebook's (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political Media Machine." In this extensive survey of the many ways in which Facebook has fostered an ecosystem of political activists who invade your news feed with ideologically targeted content, Herrman shows how the marketing of online behavior has been transformed into a "(Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political Media Machine." That our Internet activity is being tracked and our data mined is no secret anymore, and many people don't seem to mind—so long as it only results in specially curated advertising pitches and coupon offers. But what Herrman describes goes well beyond product merchandising into information manipulation: the building of highly politicized news silos where the news you get is the news that someone has calculated you want to get, and nothing else, as more and more Americans transition away from such traditional news sources as newspapers and television to Facebook, Twitter, and a myriad of other social media.

 

Brooke Gladstone's "Influencing Machines: The Echo Chambers of the Internet" (also in the 9th edition of Signs of Life) succinctly explains the effect of this shift. With no pretense of presenting a balanced palette of news and information, the new media are exacerbating and deepening the social divisions in America, creating ideological echo chambers that effectively constitute alternate realities for those who inhabit them. The result is a kind of political and cultural echolalia.

 

It's little wonder, then, that the contending parties in America cannot find a way to communicate effectively with each other. Already divided by a history of cultural conflict and contradiction (chapter 7 of Signs of Life explores this division in depth), Americans have less and less in common with those whose lives lie on the other side of the great divide.

 

There is something profoundly ironic about all this. For many years it has been assumed that the effect of modern mass media has been to chip away at America's regional differences, flattening them out into a kind of unaccented (literally and figuratively) sameness: a mass culture watching the same TV shows, eating the same food, and talking in the same way. But now something is changing. Rather than tending towards a common culture, America, sliced and diced by digital algorithms, is dividing into mutually hostile camps.

 

William Butler Yeats said it best long ago at a time when his own country was divided in two: "Things fall apart," he lamented, "the centre cannot hold." Now there's something to hashtag.

 

 

Image Source: "Facebook security chief rants about misguided “algorithm” backlash" by  Marco Verch on Flickr 10/08/17 via Creative Commons 2.0 license.

Jack Solomon

And the Winner Is . . .

Posted by Jack Solomon, Mar 15, 2018

 

As I consider the cultural significance of this year's Academy Awards ceremony, my attention has not been captured by the Best Picture winner—which strikes me as a weird amalgam of Waterworld, Beauty and the Beast (TV version), and Avatar, with a dash of Roswell thrown in for good measure—but by something quite external to the event. Yes, I'm referring to the clamor over the 20% television ratings drop that has been lighting up the airwaves.

 

Fortune blames the drop-off on "the rapidly-changing viewing habits of TV audiences, more and more of whom are choosing to stream their favorite content online (including on social media) rather than watching live on TV," as do Vulture and NPR, more or less. They're probably right, at least in part. Other explanations cite the lack of any real blockbusters among the Best Picture nominees this year (Fortune), as well as Jimmy Kimmel's two-peat as Master of Ceremonies (Fortune). But the really big story involves what might be regarded as the transformation of the Nielsen ratings into a kind of Gallup Poll.

 

Consider in this regard the Fox News angle on the story: "Oscars ratings are down, and ABC's lack of control over the Academy may be to blame." Or Breitbart's exultation over the low numbers. And, of course, the President's morning-after tweet. In each case (and many others), the fallout from the fall-off is attributed to voter—I mean viewer—disgust with the "elitist" and "liberal" tendencies of the Academy, which is now getting its comeuppance.

 

Is it? I don't know: a thorough analysis of the numbers seems to be in order, and I would expect that the ABC brass at the very least will be conducting one in an attempt to preserve their ad revenues. In my own view, whatever caused the ratings drop is certainly overdetermined, with multiple forces combining to reduce television viewership not only of the Academy Awards and the Super Bowl but of traditional televised media as a whole. Certainly Fortune, Vulture and NPR are correct about the effect of the digital age on American viewing habits, but, given the leading role that Hollywood has played in the resistance to the Trump presidency, a deeper exploration of the possibility of a growing resistance to the resistance as evidenced in television viewing preferences could shed some light on emerging trends within the culture wars in this country.

 

Of course, the Fox News (et al.) take on the matter could prove to be fake news in the end, but even should that happen, the fact that the ratings drop could be so easily exploited for political purposes is itself significant. There are a number of takeaways from this. The first can be found in a Washington Post blog entitled "Trump is supercharging the celebrification of politics." The Post blog surveys an intensification of a cultural process that has been the core premise of nine editions of Signs of Life in the U.S.A.: namely, that the traditional division between "high" culture and "low" (or workaday and recreational) in America is being replaced by a single "entertainment culture" that permeates our society from end to end. The transformation has been going on for a long time, but Trump has intensified it.

 

But as the hoohah over the decline in Academy Awards television viewership demonstrates, this entertainment culture is not a common culture: Americans are lining up on two sides of a popular cultural divide that matches an ideological one, with Fox News audiences set against MSNBC's, and innumerable other viewership dichotomies (Duck Dynasty, say, vs. Mad Men) indicating just how wide the culture gap has grown. So now we're counting audience numbers for such once broad-appeal spectacles as the Super Bowl and the Academy Awards to see which side is "winning." This is a new normal indeed, and it is indicative of a country that is tearing itself apart.

 

But then again, the same Post blog that I've cited above reports that the most read Washington Post story for the day in which the blog appeared concerned "the season finale of The Bachelor”—a TV event that really puts the soap into soap opera. So maybe there actually is something of world-historic importance for Americans to rally 'round after all.

 

Image Source: "Academy Award Winner" by  Davidlohr Bueso on Flickr 09/06/09 via Creative Commons 2.0 license

I had not planned on writing on this topic as my Bits Blog posting deadline approached. But when a headline in the L.A. Times on February 21st blared that "Conspiracy theories about Florida school shooting survivors have gone mainstream"—and this on a day when America's schoolchildren rose up to say "enough is enough" about gun violence—I felt that I ought to say something. What to say, however, is difficult to decide. As I wrote after the Route 91 Harvest music festival massacre in Las Vegas, I am not confident (to put it mildly) that anything meaningful is going to be done (the L.A. Times has nailed it with a "Handy clip-and-save editorial for America's next gun massacre"), and I don't have any solutions that the students now marching for their lives aren't already proposing more effectively than I can. But the whole mess has—thanks to something I've read in the Washington Post—enabled me to crystallize a solution to a critical thinking conundrum that I've been pondering, and that's what this blog will be about.

 

That conundrum is how to teach our students how to distinguish between reliable and unreliable information on the Internet. It seems like such an easy thing to do: just stick to the facts and you'll be fine. But when the purveyors of conspiracy theories have grown as sophisticated as they have in mimicking the compilation of "factual" evidence and then posting it all over the Internet in such a way as to confuse people into thinking that there is a sufficiency of cross-referenced sources to make their fairy tales believable, it becomes more of a challenge to teach students what's rot and what's not. And as I've also written in this blog, that challenge isn't made any easier by academic attacks on objective factuality on behalf of poststructural theories of the linguistic and/or social construction of reality. So, as I say, the matter isn't as simple as it looks.

 

Here's where that Washington Post article comes in. For in Paul Waldman's opinion piece, "Why the Parkland students have made pro-gun conservatives so mad," he identifies what can be used as a simple litmus test for cutting through the clutter in an alt-fact world: keep an eye out for ad hominem arguments in political argumentation.

 

Here's how he puts it:

The American right is officially terrified of the students of Marjory Stoneman Douglas High School. Those students, who rapidly turned themselves into activists and organizers after 17 of their fellow students and teachers were murdered at their school, have become the most visible face of this new phase of  the gun debate, and conservatives are absolutely livid about it. As a consequence, they’re desperately arguing not just that the students are wrong in their suggestions for how gun policy should be changed, but also that they shouldn’t be speaking at all and ought to be ignored.

 

There are two critical reasons the right is having this reaction, one more obvious than the other. The plainer reason is that as people who were personally touched by gun violence and as young people — old enough to be informed and articulate but still children — the students make extremely sympathetic advocates, garnering attention and a respectful hearing for their views. The less obvious reason is that because of that status, the students take away the most critical tool conservatives use to win political arguments: the personal vilification of those who disagree with them.

 

It is the use of "personal vilification of those who disagree" that reliably marks out an evidence-starved argument. Thus, when Richard Muller—once a favorite of the climate change denial crowd—reviewed his data and announced in 2012 that he had changed his mind and concluded that climate change is both real and anthropogenic, his erstwhile cheerleaders simply began to call him names. And you probably don't even want to know about the personal attacks they have been making on Michael Mann.

 

But given the high level of personal vilification that takes place on the Net (the political left can be found doing this too), our students have probably been somewhat desensitized to it, and may even take it for granted that this is the way that legitimate argumentation takes place. This is why it is especially important that we teach them about the ad hominem fallacy, not simply as part of a list of logical and rhetorical fallacies to memorize but as a stand-alone topic addressing what is probably the most common rhetorical fallacy to be found on the Internet, and in political life more generally, these days.

 

Now, we can't stop simply with warning our students against ad hominem arguments (we should teach them not to make them either), but we can treat the warning as a point of departure: if someone's claims are swathed in personal attacks and accusations, it is likely that there is nothing of substance behind the argument. After all, an ad hominem attack is a kind of changing of the subject, a distraction from the attacker's lack of any relevant evidence.

 

I know this won't change the world, and it is of no use against the sort of people who are now vilifying American school children who have had enough, but at least it's a place to begin for writing and critical thinking instruction.

 

Yes, it's that time of year again: time for Super Bowl Semiotics, advertising division. And as I contemplate this year's rather uninspiring, and uninspired, lineup, I find myself realizing that the ads were more significant for what they didn't say (or do) than for what they did—like Sherlock Holmes' dog that didn't bark in the night. Here's why.

 

To start with, one dog that didn't bark this time around was a real dog: that is, after a couple of high-profile puppy-themed ads in the recent past (Budweiser's "Puppy Love" ad from Super Bowl 48 was a hit, while GoDaddy's parody the following year was a disaster—you can find complete analyses of both in the 9th edition of Signs of Life), Madison Avenue decided to let this sleeping dog lie for once, along with the ever-popular cute animal theme overall. I expect to see it come back next year (or soon thereafter) however: cute animals are good salespeople in America.

 

Of course, there was a fair share of comedy in the lineup (yuks sell stuff too), and the consensus appears to be that the comic ads from Tide took the prize for Best Ads in a Sponsoring Role. The Tide ads, of course, borrowed a page from the Energizer company, whose Energizer Bunny ads—first aired in 1989—employ a sophisticated advertising strategy that is essentially self-reflexive, parodying existing campaigns for other products, and, in so doing, appealing to an audience that has been so super saturated with advertising gimmicks that it has become skeptical of advertising in general.

 

But the big story of Super Bowl 52 was the relative lack of politically themed ads. Given the way that social politics—from #oscarssowhite to #metoo—have been playing such a prominent role in America's popular cultural main events recently, this may appear to be a surprising omission, but not when we consider how the NFL has been witness to an entire season of political protests that have tied it up in the sort of controversies it is not well equipped to handle. And given the ruckus that an immigration-themed Super Bowl ad made last year, one can see why politics was not on the agenda.

 

Not taking the hint, however, the ad folks at Dodge thought that they could enter the political fray in a way that would make everyone happy . . . and fell flat on their faces with their Martin Luther King, Jr. spot. Dr. King, as at least one critic of the ad has put it, wasn't talking about trucks. In fact, as some careful readers of the actual MLK speech that Dodge appropriated have noted, King was warning his audience precisely against the power of advertising. Um, maybe a little learning is a dangerous thing.

 

In my view, the ad folks at Dodge tripped up in yet another way during the night, though I don't think that anyone else has noticed this. I refer here to the Vikings-take-Minneapolis Ram truck spot, which took a group of actual Icelanders—dressed up as medieval Viking raiders—from Iceland to Minneapolis in a thoroughly juiced-up journey, all set to Queen's "We Will Rock You." Now, some Minnesota Vikings fans have taken the ad as some sort of dig at the football team, but I think the real story parallels what I've been writing here about the Thor movies. All those ferocious blondes, cruisin' for a bruisin' . . . . I don't want to press the matter, but I don't think that this is really a good time to so aggressively display what can only be called a demonstration of raw "white power."

 

Perhaps the biggest story of all, however, is that no ad really made that much of an impact. Oh, there are (as always) lists of favorites to be found all over the Net, but nothing really broke through the ad clutter in any big way. At five million dollars for thirty seconds of exposure (the cost seems to go up by a tidy million every year), that's something of an anti-climax, but perhaps that's as it should be. After all, there is still a football game somewhere behind all this, and, as games go, it was quite a good game.

 

 

Credit: “2018 Super Bowl LII Minnesota Banner – Minneapolis” by Tony Webster on Flickr 1/27/18 via Creative Commons 2.0 license.

Since the publication of the first edition of Signs of Life in the U.S.A. in 1994, semiotics has become a popular instrument in promoting critical thinking skills in composition classrooms. With such a broad variety of semiotic methodologies to choose from, however, I find it useful from time to time to clarify the precise semiotic approach that is presented and modeled in Signs of Life: hence, the title and topic of this blog.

 

To begin with, the methodology of Signs of Life reflects a synthesis of some of the most effective elements to be found within the broad history of semiotic theory. To describe that synthesis, I need to briefly sketch out just what history I am referring to. It begins, then, with Roman Jakobson.

 

Arguably the most commonly known approach to technical semiotics, Jakobson's ADDRESSER – MESSAGE – ADDRESSEE schema has constituted a foundation for generations of semioticians. A fundamentally formalistic approach to communications theory as a whole, Jakobson's model was modified by Stuart Hall, who introduced a political dimension into the equation with his notion of "dominant," "negotiated," and "oppositional" readings of cultural texts (like television programs)—readings that either completely accept, partially accept, or completely challenge the intended message of the addresser. In essence, both Jakobson's and Hall's views are involved in the Signs of Life synthesis.

 

Before getting to a more precise description of that synthesis, however, I need to describe the role of three other major pioneers of semiotic thinking. The first of these figures is Ferdinand de Saussure, whose description of the constitutional role of difference within semiological systems underlies the fundamental principle in Signs of Life that the "essential approach to interpreting signs of popular culture is to situate signs within systems of related semiotic phenomena with which they can be associated and differentiated" (13; n.b.: the principle of association is not explicit in Saussure, but is implicit in his notion of the conceptual "signified").

 

The second pioneer is Roland Barthes, whose notion of semiotic mythologies underpins the ideological component of cultural semiotic analysis that Signs of Life explores and teaches.

 

The third essential figure in the synthesis is C.S. Peirce, whose sense of the historicity of signs, along with his philosophical realism, has provided me with an antidote to the tendency towards ahistorical formalism that the tradition of Saussure has fostered. And it was also Peirce who introduced the principle of abduction (i.e., the search for the most likely interpretation in the course of a semiotic analysis) that is critical to the methodology that is described and modeled in Signs of Life.

 

I will now introduce into the mix two new terms which, to the best of my knowledge, are my own, and are to be found in the 9th edition of Signs of Life. These are "micro-semiotics" and "macro-semiotics." The first of these terms describes what we do when we set out to decode any given popular cultural phenomenon—like an advertisement or a television program. In this we more or less follow Jakobson, analyzing the addresser's message as it was intended to be decoded. The macro-semiotic dimension, on the other hand, builds on the micro-semiotic reading to take it into the realm of cultural semiotics, where Hall, Saussure, Barthes, and Peirce all come into play, with Hall and Barthes leading the way to oppositional (and even subversive) re-codings of cultural texts, while Saussure and Peirce give us the tools for doing so, as briefly described above in this blog.

 

Now, if you are unfamiliar with Signs of Life in the U.S.A., all this may sound rather too complicated for a first-year writing textbook, and I can attest to the fact that when its first edition was in development, the folks at what was then simply called Bedford Books were plenty nervous about the whole thing. But while there are a few technical points directly introduced in the book in the interest of explaining as clearly as possible exactly how a semiotic interpretation is performed, the text is not inaccessible—as the existence of nine editions, to date, demonstrates. The point, for the purpose of this blog, is that the semiotic method, as synthesized in Signs of Life, has a solid and diverse pedigree, which is something that you could always explain to any student who may wonder where all this stuff came from.

Jack Solomon

War Everlasting

Posted by Jack Solomon, Jan 18, 2018


 

In "The Myth of Superman," the late Umberto Eco's pioneering essay on the semiotics of superheroes, a useful distinction is drawn between the heroes of myth and those of the traditional novel. What Eco points out is the way that mythic heroes are never "used up" by their experiences in the way that novelistic heroes are. The narrator, say, of Great Expectations is a different man at the end of his story than he was at the beginning (this, of course is Dickens' point), and if a sequel were to be written, the Pip of that novel would have to show the effects of time and experience that we see in the original tale. Superman, on the other hand (and the mythic heroes like Hercules that he resembles) is the same person from adventure to adventure, not taking up where he left off but simply reappearing in new story lines that can be multiplied indefinitely.

 

As I contemplate the appearance of yet another installment in the endless Star Wars franchise (along with the equally endless stream of superhero sagas that dominate the American cinematic box office), however, I can detect a certain difference that calls for a readjustment of Eco's still-useful distinction. And since differences are the key to semiotic understanding, this one is worth investigating.

 

All we have to do to see this difference is to consider the casting of Mark Hamill and the late Carrie Fisher in Star Wars: The Last Jedi. Of course, part of the reason for this was simply marketing: nostalgia is a highly effective ticket seller. But when we associate this movie with other action-adventure films whose heroes can be seen to be aging in ways that they have not done so before (the Batman and James Bond franchises are especially salient in this regard), another, much more profound significance emerges. This is the fact that while the characters in today's most popular designed-to-be-sequelized movies are coming to resemble the characters of conventional novels (as Eco describes them), the situations they find themselves in remain more or less the same. Quite simply, they are forever at war.

 

To see the significance of this, consider the plot trajectory of the traditional war story. Such stories, even if it takes a while for them to come to a conclusion, do eventually end. From the Homeric tradition that gives us the ten years of the Trojan War (with another ten years tacked on for Odysseus to get home) to The Lord of the Rings, the great wars of the storytelling tradition have a teleology: a beginning, a middle, and an end, as Aristotle would put it. But when we look at the Star Wars saga (especially now that Lucas has sold the franchise to Disney), or the Justice League tales, or (for that matter) The Walking Dead, we can find provisional, but never final, victories. Someone (or something) somewhere will be forever threatening the world of the hero, and the end is never in sight. It is violent conflict itself that is never "used up."

 

There are a number of ways of interpreting this phenomenon. One must begin with the commercial motivation behind it: killing off the war would be tantamount to killing the golden geese of fan demand, and no one holding onto a valuable movie franchise is going to want to do that.

 

But while this explanation is certainly a cogent one, it raises another question: namely, why are movie fans satisfied with tales of never-ending war? In the past, it was the promise of a final victory that would carry audiences through the awful violence that served as the means to the happy ending that would redeem all the suffering that preceded it. The popularity of today's never-ending war stories indicates that the mass audience no longer requires that. The violence appears to be self-justifying.

 

Perhaps this receptiveness to tales of never-ending war simply reflects a sophisticated recognition on the part of current audiences that wars, in reality, never really do end. World War I—the "war to end all wars"—led to World War II, which led to the Korean War, and then to Vietnam. And America has been effectively at war in Afghanistan since 2001, with no end in sight. And, of course, the "war on terror" is as open-ended as any Justice League enterprise. So maybe Hollywood's visions of endless wars are simply responding to a certain historical reality.

 

I would find it difficult to argue against such an interpretation. But somehow I don't think that it goes deep enough. I say this because, after all, the purpose of popular entertainment is to be entertaining, and entertainment—especially when it comes to the genres of fantasy and action-adventure storytelling—often serves as a distraction from the dismal realities of everyday life. And so, just as during the Great Depression movie-goers flocked to glamorous and romantic films that were far removed from the poverty and deprivation of that difficult era, one might expect the war movies of today to offer visions of final victory—a fantasy end to war in an era of endless conflict.

 

So the successful box office formula of endless war suggests to me that audiences are entertained, not repelled, by sagas of wars without end. Interchangeable visions of heroes (I use the word in a gender-neutral sense) running across desert landscapes and down starship corridors with explosions bursting behind them simply promise more such scenes in the next installment, as violence is packaged as excitement for its own sake: war as video game.

 

Which may help explain why we tolerate (and basically ignore) such endless wars as the one we are still fighting in Afghanistan.

 

Credit: Pixabay Image 2214290 by tunechick83, used under a CC0 Creative Commons License

Everyone has a secret vice, I suppose, and mine is reading online newspapers like Inside Higher Ed and The Chronicle of Higher Education—as in multiple times every day. I admit that there is something compulsive about the matter, something that goes beyond the unquestionable usefulness of such reading for someone who is both a university professor and a cultural semiotician, something, I'm afraid, that is akin to the all-too-human attraction to things like train wrecks. This might surprise anyone who does not read these news sources: after all, wouldn't one expect there to be nothing but a kind of staid blandness to higher education reporting? Tedium, not harum-scarum, would seem to be the order of the day on such sites.

 

But no, in these days when signs of the culture wars are to be found everywhere in American society, even the higher-ed news beat is not immune to the kind of squabbling and trolling that defaces so much of the Internet. The situation has gotten so bad that The Chronicle of Higher Education has discontinued the comments section for most of its news stories, while Inside Higher Ed has polled its readers as to whether it should do the same. So far, IHE has decided to continue with posting reader comments (though it just shut down the comments section responding to an article on a recent controversy at Texas State University), and although I think it would be better for the overall blood pressure of American academe to just scrap the comments section altogether, on balance I hope that that doesn't happen. Here's why.

 

Because for the purposes of cultural semiotics, the comments sections on the Internet, no matter where you find them, offer invaluable insights into what is really going on in this country. Unlike formal surveys or polls—which, though they claim scientific precision, can never get around the fact that people, quite simply, often lie to pollsters and other inquisitors—online comments, commonly posted in anonymity, reveal what their authors really think. It isn't pretty, and it can make your blood boil, but it can get you a lot closer to the truth than, say, all those surveys that virtually put Hillary Clinton in the White House until the votes were actually counted.

 

Among the many things that the comments on IHE can tell us is that the days when we could assume that what we do on our university campuses stays on our university campuses are over. Thanks to the Internet, the whole world is watching, and, what is more, sharing what it sees. This matters a great deal, because even though the sorts of things that make headline news represent only a very small fraction of the daily life of the aggregated Universitas Americus, these things are magnified exponentially by the way that social media work. Every time a university student, or professor, says something that causes a commotion due to an inadequate definition of the speaker's terms, that statement will not only be misconstrued, it will become the representative face of American academia as a whole—which goes a long way towards explaining the declining levels of trust in higher education today that are now being widely reported. This may not be fair, but all you have to do is read the comments sections when these sorts of stories break, and it will be painfully clear that this is what happens when words that mean one thing in the context of the discourse of cultural studies mean quite something else in ordinary usage.


Linguistically speaking, what is going on is similar to the days of deconstructive paleonymy: that is, when Derrida and de Man (et al.) took common words like "writing" and "allegory" and employed them with significantly different, and newly coined, meanings. This caused a lot of confusion (as, for example, when Derrida asserted in Of Grammatology that, historically speaking, "writing" is prior to "speech"), but the confusion was confined to the world of literary theorists and critics, causing nary a stir in the world at large. But it is quite a different matter when words that are already loaded with socially explosive potential in their ordinary sense are injected into the World Wide Web in their paleonymic one.

Another part of the problem lies in the nature of the social network itself. From Facebook posts that their writers assume are private (when they aren't), to Twitter blasts (which are character-limited and thus rife with linguistic imprecision), the medium is indeed the message. Assuming an audience of like-minded readers, posters to social media often employ a kind of in-group shorthand, which can be woefully misunderstood when read by anyone who isn't in the silo. So when the silo walls are as porous as the Internet can make them, carefully worded and fully explained communication becomes all the more necessary. This could lead to lecture-like, rather boring online communication, but I think that would be a case of boredom perpetrated in a good cause. The culture wars are messy enough as they are: those of us in cultural studies can help by being as linguistically precise, and as transparent, as we can.

So Thor is back, hammering his way to another blockbusting run at the box office. But it's almost as if the producers of Thor: Ragnarok read an analysis I posted to this blog on November 11, 2013, when Thor: The Dark World appeared, because some interesting things have happened to the franchise this time around that seem to be a reaction to what I argued back then. So let's have a look first at what I said in 2013, before turning to the present:


Well, the dude with the big hammer just pulled off the biggest box office debut for quite some time, and such a commercial success calls for some semiotic attention.


There is an obvious system within which to situate Thor: The Dark World and thus begin our analysis. This, of course, is the realm of the cinematic superhero, a genre that has absolutely dominated Hollywood filmmaking for quite some time now. Whether featuring such traditional superheroes as Batman, Spider-Man, and Superman, or such emergent heavies as Iron Man and even (gulp!) Kick-Ass, the superhero movie is a widely recognized signifier of Hollywood’s timid focus on tried-and-true formulae that offer a high probability of box office success due to their pre-existing audiences of avid adolescent males. Add to this the increasingly observed cultural phenomenon that adulthood is the new childhood (or thirty is the new fourteen), and you have a pretty clear notion of at least a prominent part of the cultural significance of Thor’s recent coup.


But I want to look at a somewhat different angle on this particular superhero’s current dominance that I haven’t seen explored elsewhere. This is the fact that, unlike all other superheroes, Thor comes from an actual religion (I recognize that this bothered Captain America’s Christian sensibilities in The Avengers, but a god is a god). And while the exploitation of their ancestors’ pagan beliefs is hardly likely to disturb any modern Scandinavians, this cartoonish revision of an extinct cultural mythology is still just a little peculiar. I mean, why Thor and not, say, Apollo, or even Dionysus?


I think the explanation is twofold here, and culturally significant in both parts. The first is that the Nordic gods were, after all, part of a pantheon of warriors, complete with a kind of locker/war room (Valhalla) and a persistent enemy (the Jotuns, et al.) whose goal was indeed to destroy the world. [That the enemies of the Nordic gods were destined to win a climactic battle over Thor and company (the Ragnarok, or Wagnerian Götterdämmerung) is an interesting feature of the mythology that may or may not figure in a future installment of the movie franchise.] But the point is that Norse mythology offers a ready-made superhero saga to a market hungering for clear-cut conflicts between absolute bad guys whose goal is to destroy the world and well-muscled good guys who oppose them: a simple heroes vs. villains tale.

You don’t find this in Greek mythology, which is always quite complicated and rather more profound in its probing of the complexities and contradictions of human life and character.


But I suspect that there is something more at work here. I mean, Wagner, the Third Reich’s signature composer, didn’t choose Norse mythology as the framework for his most famous opera by accident. And the fact is that you just don’t get any more Aryan than blonde Thor (isn’t it interesting that the troublesome Loki, though part of the Norse pantheon too, somehow doesn’t have blonde hair? Note also in this regard how the evil Wormtongue in Jackson’s The Lord of the Rings seems to be the only non-blonde among the blonde Rohirrim). The Greeks, for their part, weren’t blondes. So is the current popularity of this particular Norse god a reflection of a coded nostalgia for a whiter world? In this era of increasing racial insecurity, as America’s demographic identity shifts, I can’t help but think so.


OK, so that was then; what about now? Let's just say that the "white nationalist" march at Charlottesville has clearly brought out into the open what was still lurking on the margins in 2013, and I would hazard a guess that a good number of the khaki-clad crew with their tiki torches and lightning-bolt banners were (and are) Thor fans. So I'll stand by my 2013 interpretation. And as for the most recent installment in the Thor saga, well, I can almost see the producers of Thor: Ragnarok having the following pre-production conversation:


Producer 1: The semioticians are on to us.

Producer 2: Oh woe, alas, and alack!

Producer 3: I've got it: let's give Thor a haircut this time, and, you know, brown out those blonde tones!

Producer 1: Good, but not good enough.

Producer 2: Oh woe, alas, and alack!

Producer 3: Tessa Thompson is available to play Valkyrie.

Producer 1: Good, but not good enough.

Producer 2: Oh woe, alas, and alack!

Producer 3: Idris Elba is available too.

Producer 1: Good, but not good enough.

Producer 2: Oh woe, alas, and alack!

Producer 3: You do know that Taika Waititi is a Jewish Maori, don't you, and that he's available too?

Producer 1: I see a concept here.

Producer 2: Oh goodie, campy superheroes!

Producer 3: And surely no one will object to Jeff Goldblum playing one of the evil Elders of the Universe, because surely no one remembers the anti-Semitic forgery "Protocols of the Elders of Zion" that Hitler made such use of.

Producer 1: We didn't hear that.

Producer 2: Oh woe, alas, and alack!

Producer 3: We'll paint a blue stripe on Jeff's chin. No one will make the connection.

Producer 1: It's a wrap!


I rest my case.

In my last blog (Signs of Life in the U.S.A.: A Portrait of the Project as a Young Book) I indicated that I might tell the story of the various book covers that have been used for Signs of Life in the U.S.A. over the years, and, given the importance of visual imagery to cultural semiotics, I think that offering an insider view of how book covers get created might be useful to instructors of popular culture. So here goes.


Anyone who has followed the cover history of Signs of Life knows that Sonia and I have always eschewed the use of celebrity images—a common cover strategy that suggests that popular culture is all about entertainment icons. Since one of the main theses of Signs of Life is that popular culture is a matter of everyday life, of the ordinary along with the extraordinary, we wanted to find a cover image for our first edition that would semiotically convey this message even before its readers opened the book to see what was inside. At the same time, Sonia and I liked the practice of using established works of art for book covers, and figured that there would be a wealth of Pop Art to choose from.


Well, there certainly was a lot of Pop Art to consider, but we were rather dismayed to find that just about all of it was—at least to our tastes—off-putting (“repulsive” would be a better word for the often garish, erotic, and/or just plain ugly works we found), and we didn’t want such stuff on the cover of our book. But then we found a perfect image by the well-known Pop Art painter Tom Wesselmann, whose Still Life #31—featuring a kitchen table with some apples, pears, a TV set, a view of open countryside outside a window, and a portrait of George Washington—seemed just right for our purposes. So discovered, so done. We had our first cover.


Thus, things were easy when it came to the second edition: we simply looked for more Wesselmann, and this time we found Still Life #28, a painting that is quite similar to Still Life #31, though the color scheme is different, and Abraham Lincoln takes the place of George Washington. There’s even a cat on the cover. Cover number 2 was in the bag.


Between the first and second editions of Signs of Life, however, Sonia and I also published the first edition of California Dreams and Realities, for which we used one of David Hockney’s Pearblossom Highway paintings (#2). This ruled out using something from Hockney for the third edition of Signs of Life (we wanted Hockney again for the second edition of California Dreams), so when it came time to create the new cover we suggested another Wesselmann. Our editor disagreed: it was time for something new—which made sense because we did not want to give the impression that the third edition was the same as the first two. Each edition is much revised. So this time the art staff at Bedford designed a cover that featured a montage of images that included a white limousine, a yellow taxi, a cow, a highway, images from the southwestern desert, an electric guitar (a Parker Fly, by the way), the San Francisco skyline, the Capitol Dome in Washington D.C., the Statue of Liberty, two skyscrapers standing together, a giant football, a giant hamburger, a Las Vegas casino sign, and a blue-sky background with billowing white clouds. A bit too cluttered for my taste, but good enough, though it was upsetting to realize, after the September 11 attacks, that those two skyscrapers were the World Trade Center.


By the time the fourth edition came around, Bedford had chosen a motif that would be repeated, in variations, for the next five editions: linear arrangements of individual images displayed in a single Rubik’s-cube-like block (edition #4), in rows with brightly colored dots interspersed (edition #5), in rows without dots (edition #6), in an artwork by Liz West featuring a brightly colored square filled with squares (edition #7), and in rows of tiny images of the artist Simon Evans’s personal possessions (edition #8). Everyday life in boxes, so to speak.


Which takes us to the ninth edition. When Sonia and I were shown the cover art for the first time, we could see that the Bedford art department had abandoned the images-in-rows motif to go, as it were, back to the future with an image reminiscent not of the first two covers but of the third, in a less cluttered revival. It’s nice to see Lincoln back, along with a Route 66 sign that echoes Hockney’s Route 138 highway marker in the Pearblossom series. And there is a lot of blue sky to add a measure of natural serenity to the scene. I'm quite fond of natural serenity.


So, you see, a lot of thought goes into cover design (and I haven't even mentioned the two proposed covers that Sonia and I flat out rejected).  For while, as the old saying has it, you can't judge a book by its cover, you can use the cover of Signs of Life as a teaching tool, something to hold up in class and ask students to interpret, image by image, the way one would interpret a package. Because, in the end, a book cover is a kind of package, something that is at once functional (it holds the book together and protects its pages) and informational (it presents a sense of what is inside), while striving (at least in our case) to be as aesthetically pleasing as possible. It wraps the whole project up, and is something I will miss if hard-copy books should ever disappear in a wave of e-texts.


Cover images: the 8th edition and the new 9th edition