Jack Solomon

War Everlasting

Posted by Jack Solomon, Jan 18, 2018


 

In "The Myth of Superman," the late Umberto Eco's pioneering essay on the semiotics of superheroes, a useful distinction is drawn between the heroes of myth and those of the traditional novel. What Eco points out is the way that mythic heroes are never "used up" by their experiences in the way that novelistic heroes are. The narrator, say, of Great Expectations is a different man at the end of his story than he was at the beginning (this, of course is Dickens' point), and if a sequel were to be written, the Pip of that novel would have to show the effects of time and experience that we see in the original tale. Superman, on the other hand (and the mythic heroes like Hercules that he resembles) is the same person from adventure to adventure, not taking up where he left off but simply reappearing in new story lines that can be multiplied indefinitely.

 

As I contemplate the appearance of yet another installment in the endless Star Wars franchise (along with the equally endless stream of superhero sagas that dominate the American cinematic box office), however, I can detect a certain difference that calls for a readjustment of Eco's still-useful distinction. And since differences are the key to semiotic understanding, this one is worth investigating.

 

All we have to do to see this difference is to consider the casting of Mark Hamill and the late Carrie Fisher in Star Wars: The Last Jedi. Of course, part of the reason for this was simply marketing: nostalgia is a highly effective ticket seller. But when we associate this movie with other action-adventure films whose heroes can be seen to be aging in ways that they have not before (the Batman and James Bond franchises are especially salient in this regard), another, much more profound significance emerges. This is the fact that while the characters in today's most popular designed-to-be-sequelized movies are coming to resemble the characters of conventional novels (as Eco describes them), the situations they find themselves in remain more or less the same. Quite simply, they are forever at war.

 

To see the significance of this, consider the plot trajectory of the traditional war story. Such stories, even if it takes a while for them to come to a conclusion, do eventually end. From the Homeric tradition that gives us the ten years of the Trojan War (with another ten years tacked on for Odysseus to get home) to The Lord of the Rings, the great wars of the story-telling tradition have a teleology: a beginning, a middle, and an end, as Aristotle would put it. But when we look at the Star Wars saga (especially now that Lucas has sold the franchise to Disney), or the Justice League tales, or (for that matter) The Walking Dead, we can find provisional, but never final, victories. Someone (or something), somewhere, will be forever threatening the world of the hero, and the end is never in sight. It is violent conflict itself that is never "used up."

 

There are a number of ways of interpreting this phenomenon. One must begin with the commercial motivation behind it: killing off the war would be tantamount to killing the golden geese of fan demand, and no one holding onto a valuable movie franchise is going to want to do that.

 

But while this explanation is certainly a cogent one, it raises another question: namely, why are movie fans satisfied with tales of never-ending war? In the past, it was the promise of a final victory that would carry audiences through the awful violence that served as the means to the happy ending that would redeem all the suffering that preceded it. The popularity of today's never-ending war stories indicates that the mass audience no longer requires that. The violence appears to be self-justifying.

 

Perhaps this receptiveness to tales of never-ending war simply reflects a sophisticated recognition on the part of current audiences that wars, in reality, never really do end. World War I—the "war to end all wars"—led to World War II, which led to the Korean War, and then to Vietnam. And America has been effectively at war in Afghanistan since 2001, with no end in sight. And, of course, the "war on terror" is as open-ended as any Justice League enterprise. So maybe Hollywood's visions of endless wars are simply responding to a certain historical reality.

 

I would find it difficult to argue against such an interpretation. But somehow I don't think that it goes deep enough. I say this because, after all, the purpose of popular entertainment is to be entertaining, and entertainment—especially when it comes to the genres of fantasy and action-adventure story-telling—often serves as a distraction from the dismal realities of everyday life. And so, just as during the Great Depression movie-goers flocked to glamorous and romantic films that were far removed from the poverty and deprivation of that difficult era, one might expect war movies today to offer visions of final victory—a fantasy end to war in an era of endless conflict.

 

So the successful box office formula of endless war suggests to me that audiences are entertained, not repelled, by sagas of wars without end. Interchangeable visions of heroes (I use the word in a gender-neutral sense) running across desert landscapes and down starship corridors with explosions bursting behind them simply promise more such scenes in the next installment, as violence is packaged as excitement for its own sake: war as video game.

 

Which may help explain why we tolerate (and basically ignore) such endless wars as the one we are still fighting in Afghanistan.

 

Credit: Pixabay Image 2214290 by tunechick83, used under a CC0 Creative Commons License

Everyone has a secret vice, I suppose, and mine is reading online newspapers like Inside Higher Ed and The Chronicle of Higher Education—as in multiple times every day. I admit that there is something compulsive about the matter, something that goes beyond the unquestionable usefulness of such reading for someone who is both a university professor and a cultural semiotician, something, I'm afraid, that is akin to the all-too-human attraction to things like train wrecks. This might surprise anyone who does not read these news sources: after all, wouldn't one expect there to be nothing but a kind of staid blandness to higher education reporting? Tedium, not harum-scarum, would seem to be the order of the day on such sites.

 

But no, in these days when signs of the culture wars are to be found everywhere in American society, even the higher-ed news beat is not immune to the kind of squabbling and trolling that defaces so much of the Internet. The situation has gotten so bad that the editors of The Chronicle of Higher Education have discontinued the comments section for most of their news stories, while Inside Higher Ed has polled its readers as to whether it should do the same. So far, IHE has decided to continue posting reader comments (though it just shut down the comments section responding to an article on a recent controversy at Texas State University), and although I think it would be better for the overall blood pressure of American academe to just scrap the comments section altogether, on balance I hope that that doesn't happen. Here's why.

 

Because for the purposes of cultural semiotics, the comments sections on the Internet, no matter where you find them, offer invaluable insights into what is really going on in this country. Unlike formal surveys or polls—which, though they claim scientific precision, can never get around the fact that people, quite simply, often lie to pollsters and other inquisitors—online comments, commonly posted in anonymity, reveal what their authors really think. It isn't pretty, and it can make your blood boil, but it can get you a lot closer to the truth than, say, all those surveys that virtually put Hillary Clinton in the White House until the votes were actually counted.

 

Among the many things that the comments on IHE can tell us is that the days when we could assume that what we do on our university campuses stays on our university campuses are over. Thanks to the Internet, the whole world is watching, and, what is more, sharing what it sees. This matters a great deal, because even though the sorts of things that make headline news represent only a very small fraction of the daily life of the aggregated Universitas Americus, these things are magnified exponentially by the way that social media work. Every time a university student, or professor, says something that causes a commotion due to an inadequate definition of the speaker's terms, that statement will not only be misconstrued, it will become the representative face of American academia as a whole—which goes a long way towards explaining the declining levels of trust in higher education today that are now being widely reported. This may not be fair, but all you have to do is read the comments sections when these sorts of stories break, and it will be painfully clear that this is what happens when words that mean one thing in the context of the discourse of cultural studies mean quite something else in ordinary usage.

 

Linguistically speaking, what is going on is similar to the days of deconstructive paleonymy: that is, when Derrida and de Man (et al.) took common words like "writing" and "allegory" and employed them with significantly different, and newly coined, meanings. This caused a lot of confusion (as, for example, when Derrida asserted in Of Grammatology that, historically speaking, "writing" is prior to "speech"), but the confusion was confined to the world of literary theorists and critics, causing nary a stir in the world at large. But it is quite a different matter when words that are already loaded with socially explosive potential in their ordinary sense are injected into the World Wide Web in their paleonymic one.

Another part of the problem lies in the nature of the social network itself. From Facebook posts that their writers assume are private (when they aren't), to Twitter blasts (which are character-limited and thus rife with linguistic imprecision), the medium is indeed the message. Assuming an audience of like-minded readers, posters to social media often employ a kind of in-group shorthand, which can be woefully misunderstood when read by anyone who isn't in the silo. So when the silo walls are as porous as the Internet can make them, the need for carefully worded and explained communications becomes all the more necessary. This could lead to lecture-like, rather boring online communication, but I think that this would be a case of boredom perpetrated in a good cause. The culture wars are messy enough as they are: those of us in cultural studies can help by being as linguistically precise, and transparent, as we can.

So Thor is back, hammering his way to another blockbusting run at the box office. But this time, it's almost as if the producers of Thor: Ragnarok read an analysis I posted to this blog on November 11, 2013, when Thor: The Dark World appeared, because some interesting things have happened to the franchise this time around that seem to be in reaction to what I argued back then. So let's have a look first at what I said in 2013, before turning to the present:

 

Well, the dude with the big hammer just pulled off the biggest box office debut for quite some time, and such a commercial success calls for some semiotic attention.

 

There is an obvious system within which to situate Thor: The Dark World and thus begin our analysis. This, of course, is the realm of the cinematic superhero, a genre that has absolutely dominated Hollywood filmmaking for quite some time now. Whether featuring such traditional superheroes as Batman, Spider-Man, and Superman, or such emergent heavies as Iron Man and even (gulp!) Kick-Ass, the superhero movie is a widely recognized signifier of Hollywood’s timid focus on tried-and-true formulae that offer a high probability of box office success due to their pre-existing audiences of avid adolescent males. Add to this the increasingly observed cultural phenomenon that adulthood is the new childhood (or thirty is the new fourteen), and you have a pretty clear notion of at least a prominent part of the cultural significance of Thor’s recent coup.

 

But I want to look at a somewhat different angle on this particular superhero’s current dominance that I haven’t seen explored elsewhere. This is the fact that, unlike all other superheroes, Thor comes from an actual religion (I recognize that this bothered Captain America’s Christian sensibilities in The Avengers, but a god is a god). And while the exploitation of their ancestors’ pagan beliefs is hardly likely to disturb any modern Scandinavians, this cartoonish revision of an extinct cultural mythology is still just a little peculiar. I mean, why Thor and not, say, Apollo, or even Dionysus?

 

I think the explanation is two-fold here, and culturally significant in both parts. The first is that the Nordic gods were, after all, part of a pantheon of warriors, complete with a kind of locker/war room (Valhalla) and a persistent enemy (the Jotuns, et al.) whose goal was indeed to destroy the world. [That the enemies of the Nordic gods were destined to win a climactic battle over Thor and company (the Ragnarok, or Wagnerian Gotterdammerung) is an interesting feature of the mythology that may or may not play out in a future installment of the movie franchise.] But the point is that Norse mythology offers a ready-made superhero saga to a market hungering for clear-cut conflicts between absolute bad guys whose goal is to destroy the world and well-muscled good guys who oppose them: a simple heroes vs. villains tale.

You don’t find this in Greek mythology, which is always quite complicated and rather more profound in its probing of the complexities and contradictions of human life and character.

 

But I suspect that there is something more at work here. I mean, Wagner, the Third Reich’s signature composer, didn’t choose Norse mythology as the framework for his most famous opera by accident. And the fact is that you just don’t get any more Aryan than blonde Thor is (isn’t it interesting that the troublesome Loki, though part of the Norse pantheon too, somehow doesn’t have blonde hair? Note also in this regard how the evil Wormtongue in Jackson’s The Lord of the Rings also seems to be the only non-blonde among the blonde Rohirrim). The Greeks, for their part, weren’t blondes. So is the current popularity of this particular Norse god a reflection of a coded nostalgia for a whiter world? In this era of increasing racial insecurity as America’s demographic identity shifts, I can’t help but think so.

 

OK, so that was then; what about now? Let's just say that the "white nationalist" march at Charlottesville has clearly brought out into the open what was still lurking on the margins in 2013, and I would hazard a guess that a good number of the khaki-clad crew with their tiki torches and lightning bolt banners were (and are) Thor fans. So I'll stand by my 2013 interpretation. And as for the most recent installment in the Thor saga, well, I can almost see the producers of Thor: Ragnarok having the following pre-production conversation:

 

Producer 1: The semioticians are on to us.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: I've got it: let's give Thor a haircut this time, and, you know, brown out those blonde tones!

 

Producer 1: Good, but not good enough.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: Tessa Thompson is available to play Valkyrie.

 

Producer 1: Good, but not good enough.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: Idris Elba is available too.

 

Producer 1: Good, but not good enough.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: You do know that Taika Waititi is a Jewish Maori, don't you, and that he's available too?

 

Producer 1: I see a concept here.

 

Producer 2: Oh goodie, campy superheroes!

 

Producer 3: And surely no one will object to Jeff Goldblum playing one of the evil Elders of the Universe, because surely no one remembers the anti-Semitic forgery "Protocols of the Elders of Zion" that Hitler made such use of.

 

Producer 1: We didn't hear that.

 

Producer 2: Oh woe, alas, and alack!

 

Producer 3: We’ll paint a blue stripe on Jeff's chin. No one will make the connection.

 

Producer 1: It’s a wrap!

 

I rest my case.

In my last blog (Signs of Life in the U.S.A.: A Portrait of the Project as a Young Book) I indicated that I might tell the story of the various book covers that have been used for Signs of Life in the U.S.A. over the years, and, given the importance of visual imagery to cultural semiotics, I think that offering an insider view of how book covers get created might be useful to instructors of popular culture. So here goes.

 

Anyone who has followed the cover history of Signs of Life knows that Sonia and I have always eschewed the use of celebrity images—a common cover strategy that suggests that popular culture is all about entertainment icons. Since one of the main theses of Signs of Life is that popular culture is a matter of everyday life, of the ordinary along with the extraordinary, we wanted to find a cover image for our first edition that would semiotically convey this message even before its readers opened the book to see what was inside. At the same time, Sonia and I liked the practice of using established works of art for book covers, and figured that there would be a wealth of Pop Art choices to choose from.

 

Well, there certainly was a lot of Pop Art to consider, but we were rather dismayed to find that just about all of it was—at least to our tastes—off-putting (“repulsive” would be a better word for the often garish, erotic, and/or just plain ugly works we found), and we didn’t want such stuff on the cover of our book. But then we found a perfect image from a well-known Pop Art painter named Tom Wesselmann, whose Still Life #31—featuring an image of a kitchen table with some apples, pears, a TV set, a view of an open countryside outside a window, and a portrait of George Washington—seemed just right for our purposes. So discovered, so done. We had our first cover.

 

Thus, things were easy when it came to the second edition: we simply looked for more Wesselmann, and this time we found Still Life #28, a painting that is quite similar to Still Life #31, though the color scheme is different, and Abraham Lincoln takes the place of George Washington. There’s even a cat on the cover. Cover number 2 was in the bag.

 

Between the first and second editions of Signs of Life, however, Sonia and I also published the first edition of California Dreams and Realities, for which we used one of David Hockney’s Pearblossom Highway paintings (#2). This ruled out using something from Hockney for the third edition of Signs of Life (we wanted Hockney again for the second edition of California Dreams), so when it came time to create the new cover we suggested another Wesselmann. Our editor disagreed: it was time for something new—which made sense because we did not want to give the impression that the third edition was the same as the first two. Each edition is much revised. So this time the art staff at Bedford designed a cover that featured a montage of images that included a white limousine, a yellow taxi, a cow, a highway, images from the southwestern desert, an electric guitar (a Parker Fly, by the way), the San Francisco skyline, the Capitol Dome in Washington D.C., the Statue of Liberty, two skyscrapers standing together, a giant football, a giant hamburger, a Las Vegas casino sign, and a blue-sky background with billowing white clouds. A bit too cluttered for my taste, but good enough, though it was upsetting to realize, after the September 11 attacks, that those two skyscrapers were the World Trade Center.

 

By the time the fourth edition came around, Bedford had chosen a motif that would be repeated, in variations, for the next five editions: linear arrangements of individual images displayed in a single Rubik’s-cube-like block (edition #4), in rows with brightly colored dots interspersed (edition #5), in rows without dots (edition #6), in an artwork by Liz West featuring a brightly colored square filled with squares (edition #7), and in rows of tiny images of the personal possessions of the artist Simon Evans (edition #8). Everyday life in boxes, so to speak.

 

Which takes us to the ninth edition. When Sonia and I were shown the cover art for the first time, we could see that the Bedford art department had abandoned the images-in-rows motif to go, as it were, back to the future with an image reminiscent not of the first two covers but of a less cluttered revival of the third. It’s nice to see Lincoln back, along with a Route 66 sign that echoes Hockney’s Route 138 highway marker in the Pearblossom series. And there is a lot of blue sky to add a measure of natural serenity to the scene. I'm quite fond of natural serenity.

 

So, you see, a lot of thought goes into cover design (and I haven't even mentioned the two proposed covers that Sonia and I flat out rejected).  For while, as the old saying has it, you can't judge a book by its cover, you can use the cover of Signs of Life as a teaching tool, something to hold up in class and ask students to interpret, image by image, the way one would interpret a package. Because, in the end, a book cover is a kind of package, something that is at once functional (it holds the book together and protects its pages) and informational (it presents a sense of what is inside), while striving (at least in our case) to be as aesthetically pleasing as possible. It wraps the whole project up, and is something I will miss if hard-copy books should ever disappear in a wave of e-texts.

 

[Cover images: the 8th edition and the new 9th edition of Signs of Life in the U.S.A.]

The arrival of the authors' copies of the ninth edition of Signs of Life in the U.S.A. prompts me to reflect here on the history of this—at least for Sonia Maasik and myself—life-changing project. So I will do something a little different this week, and return to the original purpose of the web-log, which was to write something along the lines of a traditional journal or diary entry rather than an interpretive essay—a remembrance of things past in this case.

 To begin with, Signs of Life did not begin its life as a textbook. Its origins lie in a book I wrote in the mid-1980s: The Signs of Our Time: Semiotics: The Hidden Messages of Environments, Objects, and Cultural Images (1988). That book was a product of pure contingency, even serendipity. I was seated at my departmental Displaywriter (an early word processor that was about the size of a piano and used eight inch truly floppy disks) completing my final draft of Discourse and Reference in the Nuclear Age (1988)—a technical critique of poststructural semiotics that proposed a new paradigm whose theoretical parameters underlie the applied semiotic lessons to be found in Signs of Life—when my department chair drifted by and casually asked me if I would like to talk to a local publisher whom he had met recently at a party and who was looking for someone to write a  non-academic book on semiotics for a non-academic audience. As a young professor, I was ready to jump at any book-publishing opportunity, and, having found myself doing a lot of spontaneous interpretations of the popular culture of the 1980s (especially of stuffed toys like Paddington Bear and the celebrity Bear series—anyone remember Lauren Bearcall?), I was ready with a book proposal in no time. I soon had a contract, an advance (with which I purchased an early Macintosh computer that didn't even have a hard drive—it still works), and a tight deadline to meet (that's how things work in the trade book world). And that's also how Discourse and Reference and The Signs of Our Time came to be published in the same year.

 

A few years later, Sonia discovered that composition instructors were using The Signs of Our Time as a classroom text, and I found that chapters from the book were being reprinted in composition readers (the first to do so was Rereading America 2/e). So Sonia had a brainstorm: having worked with Bedford Books on other projects, she suggested that we propose a new composition textbook to Bedford based upon The Signs of Our Time. Looking back, it looks like a pretty obvious thing to have done, but this was the early 1990s, and America was hotly embroiled in the academic version of the "culture wars"; not only was the academic study of popular culture still controversial, but no one had attempted to bring semiotics into a composition classroom before. Still, Chuck Christensen—the founder of Bedford Books—who was always on the lookout for something both daring and new, was interested. He also wanted to know if I could provide a one-page description of what semiotics was all about. So ordered, so done, and we had a contract for a composition reader that would combine a full writing instruction apparatus with an array of readings, alongside unusually long chapter introductions that would both explain and demonstrate the semiotic method as applied to American popular culture.

 

That part of the matter was unusually smooth. But there were bumps in the road on the way to completion. For instance, there was our editor's initial response to our first chapter submissions. Let's just say that he was not enamored of certain elements in my expository style. But thanks to a long long-distance phone call we managed to clear that up to our mutual satisfaction. And the good news was that Bedford really wanted our book. The bad news was that they wanted it published by January 1994—a good deal less than a year away and we were starting practically from scratch. It was published in January 1994 (just in time for the big Northridge earthquake that knocked my campus to the ground). I still don't know how Sonia and I did it (the fact that we said "yes" to Chuck's invitation to do another book—it became California Dreams and Realities—in that same January, giving us six months to do it this time, simply boggles my mind to this day, but, as I say, we were a lot younger then).

 

Well, all that was a quarter of a century ago. In that time we have improved upon every prior edition of Signs of Life, not only listening to the many adopters of the text who have reviewed it over the years during the development of each new edition, but also adding changes based upon our own experiences using it in our own classes. Of these changes, the most important to me are the ongoing refinements of my description of the semiotic method—the unpacking of the often-intuitive mental activity that takes place when one interprets popular cultural phenomena. There is an increasingly meta-cognitive aspect to these descriptions, which break down into their component parts the precise details of a semiotic analysis—details that effectively overlap with any act of critical thinking. And, of course, every new edition responds to popular cultural events and trends with updated readings, revised chapter introductions that offer fresh models of semiotic analysis, and entirely new chapter topics. And in the case of the 9th edition, we have added plenty of material for instructors who may want to use the 2016 presidential election as a course theme or topic. But perhaps the most important refinements for those who adopt the text are those that Sonia brings to each new edition: the expansion and clarification of the writing apparatus that guides students in the writing of their semiotic analyses.

 

As I draw to an end here, I realize that I could write an entire blog just on the history of the covers for Signs of Life. Maybe I will in my next blog entry.

Jack Solomon

Coping Without Catharsis

Posted by Jack Solomon, Oct 12, 2017

It's beginning to feel as if every time I sit down to write this bi-weekly blog of mine, America has just endured another calamity of such mind-numbing atrociousness that I can't simply ignore it, even as I know that there is nothing I can say that can possibly make anyone—students and colleagues alike—feel any better about it. And the massacre at the Route 91 Harvest music festival in Las Vegas has placed me in that position once again.

 

So I'm going to go ahead and address the matter analytically, but there are some things I will not do. First, I will not waste my time, or yours, demanding that America finally do something to control the spread of weapons of mass destruction to everyone who wants them, because I know perfectly well that America is not going to do anything of the kind. Second, I'm not going to try to explain why nothing is going to happen because it would be entirely futile to do so. Suffice to say that we all know the script: the political rituals that follow upon every one of these atrocities, and the way that those rituals invariably play out as they do. Third, I'm not going to blame "the media" for the carnage; that, too, is a common, though by no means illegitimate, part of the post-massacre script, as this essay in Inside Higher Education demonstrates once again. And finally, I'm not going to blame the high level of violence in popular culture for the high level of violence in everyday life—though that, too, is a not-unworthy subject for careful, data-driven analysis. Rather, I am going to look at the difference between the typical (and conventional) narrative to be found in violent entertainment, and the formless anomie to be found in the seemingly endless string of massacres in schools, movie theaters, night clubs, music festivals, and heaven knows what other sites, that plague our days and nights today.

 

Consider, then, the typical narrative of violent entertainment. Reduced to its most basic structure, it involves a victim (or victims), a villain (or villains), and a savior (or saviors). The story—whether told in the generic form of horror, or murder mystery, or thriller, or war epic, or superhero saga, or sword-and-sorcery fantasy, or whatever—tells the tale of how the villain is, in some way or another, opposed by the savior, and, usually, stopped (even when the story is open-ended, which is not infrequent in contemporary entertainment, there is usually some heroic figure, or figures, to identify with, who at least provides a model of sanity amidst the mayhem). This is what stories conventionally do: they give shape to the horrors of existence and give them a kind of meaning that Aristotle called "catharsis." When the detective catches the killer, the vampire slayer drives the stake through the monster's heart, the evil empire is defeated, the wicked witch is dissolved or the evil sorcerer vaporized, the bad king is dethroned (or de-headed: Macbeth is part of this system as well), and so on and so forth, the audience overcomes its pity and terror, and, to put it as plainly as possible, feels better.

 

But this is exactly what does not happen when someone, who has been living among us—and who, having shown no signs of madness or murderousness, has plotted his massacre completely under the radar of law enforcement—suddenly cuts loose. More often than not, now, he also kills himself. And we are left with nothing but the carnage: there is no wily detective, no heroic hobbit, no boy wizard, no man/woman in spandex, no warrior, no secret agent, no martial arts expert, nor any kind of savior at all: just the sorry spectacle of missed opportunities on the part of those we rely on to protect us—from the police to the politicians—and an almost total lack of understanding of why the carnage occurred at all. I realize that the heroic acts of victims and first-responders on the ground in such cases can help mitigate the horror, but it is all too after-the-fact for any real comfort when we know that it is all going to happen again. This is the reality of real-life horror, and there is no redemptive narrative in sight.

One of the most common objections from students whose instructors use popular culture as a basis for teaching writing and critical thinking skills in their classes is that it (pop culture) "is only entertainment," and that any attempt to think critically about it is "reading something into it" that isn't there.   Well, I think that the results of the latest round of Emmy Awards should finally put an end to any such complaints, because the sweeping triumphs of The Handmaid's Tale and Saturday Night Live have made it quite clear that the entertainment industry is now a direct participant in American politics.

 

This is a point that has been stated explicitly in every edition of Signs of Life in the U.S.A. (including, of course, the 9th edition, due out in a couple of weeks), in which students are taught that the traditional line between entertainment and everyday life has been so diminished that it could be said that we live in an "entertainment culture," in which all of our activities, including the political process, are required to be entertaining as well.  The blurring of this line does not simply refer to entertainers who have become successful politicians (like Ronald Reagan, Al Franken, and, um, Donald Trump), but to the way that television shows like Saturday Night Live and The Daily Show have become major players in American electoral politics.

 

Lest the recent results at the Emmys give the idea that the politicization of entertainment is a one-way street, navigated solely by entertainments and entertainers on the left, the same thing is going on on the right as well, and this is something that cultural analysts often miss, pretty simply because those entertainers do not tend to be part of the taste culture of cultural analysts.  Of course, it isn't only cultural analysts who have neglected the place of what I'll call the "ent-right" in American politics: by relying virtually exclusively on the support of entertainers like Beyoncé and Lena Dunham—not to mention the crew at SNL and Jon Stewart—Hillary Clinton completely miscalculated the power of those entertainers who appeal to the voters who voted for Donald Trump.  The results of this miscalculation are hardly insignificant.

 

To give you a better idea of just how American entertainment is now parsing on political grounds, I'll provide a link to a New York Times feature article that includes fifty maps of the United States geographically showing which television programs are viewed in which regions of the country.  Referred to as a "cultural divide" in the article, what is revealed is equally a political divide.  So striking are the differences in television viewership that it would behoove future presidential election pollsters to ask people not who they are going to vote for (a question that the 2016 election appears to demonstrate is one that people do not always answer honestly) but which television programs they watch, or what kind of music they listen to, and so on. (Who knows what the outcome of the 2016 election would have been if Hillary Clinton had had a prominent country music icon on her side?)

 

In short, popular cultural semiotics isn't merely something for the classroom (though it can begin there); it is essential to an understanding of what is happening in this country and of what is likely to happen.  And one has to look at everything, not only one's own favorite performers.  Because the purpose of analyzing entertainment is not to be entertained: it is to grasp the power of entertainment.

Jack Solomon

They're Ba-ack!

Posted by Jack Solomon, Sep 14, 2017

Creepy clowns are back, and Hollywood is counting on them to deliver big box office after what appears to have been a slow summer for the movie industry—at least according to the L. A. Times.  I've visited this territory before in this blog, but between the recent release of It, the cinematic version of the Stephen King novel by the same name, and all the recent hoopla over Insane Clown Posse and their "Juggalo" followers, I thought it would merit a second look.

 

If you've never heard of Insane Clown Posse, and think that Juggalos must be some sort of children's breakfast cereal, you're forgiven.  This is one of those many corners of popular culture that, somehow, young folks always seem to be in on, but which tends to be under the radar for the rest of us.  Not that Insane Clown Posse is anything new: they're a rap act that has been around since 1989, specializing in a genre called "horror core"—think Marilyn Manson meets Twisty the Clown.  And Juggalos are horror-core fans that follow performers like Insane Clown Posse around and hold mass participation events of their own—think Gothicised Deadheads in creepy clown suits at a Trekkie convention.

 

So what is it with It, and all this clown stuff?  What is the significance of this fad that appears to be edging into a trend?  Well, to begin with, it's less than sixty shopping days till Halloween, so that's part of the explanation—according to the First Law of Popular Culture (which I have just invented): viz., A fad that has made money will continue to be milked for more money until it is obliterated by a new fad that makes it look hopelessly outdated while retaining its essential appeal.  Applied to the present instance, we might say that just as zombies flocked in where vampires began to fear to tread a few years ago, creepy clown stock appears to be rising now that zombies are beginning to look rather old hat.  But is there anything more to it all?

 

In attempting to widen the semiotic system in which we can situate the creepy clown phenomenon in order to interpret it, I've found myself considering the peculiar similarities between the Juggalos of today and the Skinheads of yore.  Interestingly, both have working-class origins, along with highly stylized fashion codes and preferences for certain kinds of music (of course, this is true for just about any popular cultural youth movement).  More significantly, both have divided into what might be called malignant and benign camps.  That is to say, one set of Juggalos is at least accused of having the characteristics of a street gang, while the other appears to be as harmless as run-of-the-mill cosplayers.  Similarly, while the classic Skinhead liked to toy around with neo-Nazi and other fascist displays, an offshoot of the movement—sometimes referred to as "anti-racist" Skinheads—has adopted the fashion-and-music tastes (more or less) of fascistical Skinheads while embracing an anti-fascist ideology. 

 

All this gets me thinking, because if we expand the system we can find two other popular cultural trends that the creepy clown phenomenon—along with its Juggalo cohorts—shares with the Skinheads: an obsession with costumed role playing mixed with a fascination with violence (even if only in play), whether in the form of horror (Juggalos) or of hob-nailed mayhem (Skinheads).  In this respect (costume drama-cum-cruelty), we may as well include Game of Thrones in the system, for here too we find elaborate costuming wound round a mind-numbing level of violence.  It's as if Harry Potter grew up to become a warlord.

 

Well, so what?  If popular culture appears to be filled with elaborate expressions of violent cosplay, it's just play-pretend, isn't it, a distraction from the horrors, or boredom, of everyday life—what Freud called "civilization and its discontents"? And Stephen King is hardly alone in making a fortune off the perennial appeal of Grand Guignol.

 

But then I start thinking about the violence-obsessed costume drama that took place on the campus of the University of Virginia, where khaki-clad, polo-shirt-sporting crowds of young men marched, torches in hand, in a studied recreation of Hitler's brown-shirt demonstrations.  Was this some sort of political cosplay, a "let's play at Nazis" display for those in the crowd who weren't "official" members of the Klan and the American Nazi Party?  I really don't know.  I'm not sure that anyone knows just how many genuine Nazis there are in the country, as compared with the play actors who are getting a kick out of trolling their classmates.  But playing at horror has a way of familiarizing it, of moving it from the fringe to the center, and I can only hope that we haven't gotten to the point where the line between play-pretend and deadly-earnest has become so blurred that the true horrors may descend upon us.

Last Spring I left off in this blog with an exploration of what I called “The Uses of Objectivity.” That essay probed the inadvertent relationships between poststructural theory and the current climate of “alternative facts” and “post-truth” claims.  Since then I’ve run across an essay in The Chronicle of Higher Education that could have been written in response to mine, and while it actually wasn't, I'd like to continue the discussion a bit here.

 

The Chronicle essay I’m referring to here is Andrew J. Perrin’s “Stop Blaming Postmodernism for Post-Truth Politics.” That's an easy request to honor: certainly the supporters of such alt-fact politicians as Donald Trump can hardly be expected to have been influenced by—much less have read—the texts of contemporary postmodern theory.  So by all means let's take postmodernism off the hook in this regard.  The question is not how postmodernism has affected what is often referred to as the "populist" politics of Trumpism; the question is how educators can best contest, in the classroom, the contentions of the post-truth world.  My position on this question is that educators who wish to do so would do well not to deconstruct, in a postmodern fashion, the fundamental grounds for things like scientific consensus, while Perrin, for his part, feels that we need more postmodernism in the face of the post-truth era because of the way that it exposes the ways in which "all claims, beliefs, and symbols are tied up with the structures of power and representation that give rise to them."

 

Now, the originator of this postmodern approach to power/knowledge was, of course, Michel Foucault.  It is central to his entire notion of "discourse," which itself descended from his essentially poststructural (poststructuralism is an academic species of the larger cultural genus postmodernism) adaptation of the structuralist position that reality (and the knowledge thereof) is constructed by systems of signs.  That is to say, the signified, in the structuralist view, is not something detected outside the sign system: it is constituted by the sign system.  From here it is not a very large step to the poststructural position that whoever controls the sign system controls what counts as "reality," as "truth" itself. 

 

There is certainly no shortage of historical instances in which this vision of power/knowledge has indeed been played out.  The Third Reich, for example, rejected relativity theory as "Jewish physics," and that was that as far as Germany was concerned.  George Orwell, for his part, gave dramatic expression to this sort of thing in 1984: 2+2=5 if Big Brother says so.

 

Thus, it comes down to a simple question.  What is a more effective response to the post-truth claim, for example, that climate science is a hoax: the position that all scientific claims are expressions of power/knowledge, or the position that concrete empirical evidence gets us closer to the truth of climate change than do the claims of power?  This is not a rhetorical question, because I do not suppose that everyone will agree with my own answer to it, which happens to be as simple as the question itself:  I prefer to oppose power/knowledge with objectively measurable data.  For me, reality is not subject to a referendum.

 

Interestingly, the late Edward Said—who helped put Foucault on the American literary-critical map in his book Beginnings—came to identify another problem that arises with respect to postmodern power theory when he criticized Foucault for effectively denying the element of human responsibility in power relations by treating power as a nebulous "formation" that is expressed socially and historically rather than being wielded by empowered individuals (which happens to be a poststructural view on power that parallels the structuralist position on the relationship between langue and parole).  Such a view could provide support for the many voters who did not vote in the 2016 presidential election due to their belief that both major parties expressed the same neoliberal and capitalist power formations.  I think that the aftermath of that election makes it pretty plain that individuals do wield power and in different ways, no matter what the current larger power/knowledge formation may be.

 

And just as interestingly, as I was putting the finishing touches on this blog, an essay by Mark Lilla appeared in the Chronicle of Higher Education saying substantially the same thing: i.e., if students accept "the mystical idea that anonymous forces of power shape everything in life," they "will be perfectly justified in withdrawing from democratic politics and casting an ironic eye on it."  Now, two Humanities professors in agreement doth not a movement make, but it's heartening to see that my thoughts are shared by someone else.

Jack Solomon

The Uses of Objectivity

Posted by Jack Solomon, Jun 15, 2017

I take my title, and topic, for my last blog before the summer break from two pieces appearing in today's (as I write this) online news. One, John Warner's essay "The Pitfalls of 'Objectivity,'" appears in Inside Higher Ed, and the other is a news feature in The Washington Post on the prison sentencing of a Sandy Hook hoax proponent who sent death threats to the parents of one of the children murdered at the Connecticut elementary school. I'll begin with John Warner's essay.

Warner is a blogger for Inside Higher Ed whose blog, "Just Visiting," describes his experiences as an adjunct writing instructor. As a voice for the much-beleaguered, and ever-growing, class of adjunct writing professors in this country, Warner is a very popular Inside Higher Ed blogger, whose columns consistently garner far and away the most commentary (almost always positive) of any blog on the news site, often from grateful instructors who are justifiably glad to see someone expressing their point of view for once in a prominent place. Heck, Warner gets more comments on each blog post than I have gotten in all the years I have been writing this blog, so it's hard to argue with success.

But in this era when "fake news" and "alternative facts" have come to so dominate the political landscape, I feel obliged to respond to Warner's thesis, which is that, "One of the worst disservices the students I work with have experienced prior to coming to college is being led to believe that their writing – academic or otherwise – should strive for 'objectivity.'” Warner's point—which, as a central tenet of cultural studies generally, and the New Historicism in particular, is not a new one—is that "there is no such thing as purely objective research." This position cannot be refuted: writing and research always not only contain, but begin, in subjectivity. Even scientific investigation starts with an hypothesis, a conjecture, a subjective cast into an ocean of epistemic uncertainty. And if one really wants to press the point, there has never been a successful refutation of the fundamental Kantian position that knowledge is forever trapped in the mind, that we know only phenomena, not noumena.

So, the question is not whether or not subjectivity is an inevitable part of writing, thinking, and arguing. Rather, the question is whether we really want to throw out the objective baby with the bathwater, which is what I think happens when Warner argues that, "Strong writing comes from a strong set of beliefs, beliefs rooted in personal values. Those underlying values tend to be relatively immutable." And that takes us to the Sandy Hook hoax community.

 

To put it succinctly, the Sandy Hook hoaxers believe that the massacre at the Sandy Hook School was a "false flag" that either never took place at all or was staged by the Obama administration (there are various claims in this regard) in order to justify the seizure of Americans' guns. The hoaxers have written at length, and with great passion, about this, producing all sorts of "facts" (in the way of all conspiracy theorists). One could say that their texts come "from a strong set of beliefs . . . rooted in personal values . . . that tend to be relatively immutable." And there's the problem.

Now, Warner is hardly promoting conspiracy theorizing, or being tied to immutable beliefs. For him, "An effective writer is confident in communicating their beliefs, while simultaneously being open to having those beliefs challenged and then changed as they realize their existing beliefs may be in conflict with their values." But the problem is that without objective facts, a contest of beliefs is only that, with no basis for settling the debate. You don't like the facts? Shout "fake news!" and produce your own "alternative facts." I'm sure you see where this is heading.

As with the legacy of poststructuralist thinking that I have often written about in this blog, Warner's apparently generous and liberal approach to writing leads to unintended results. By undermining our students' acceptance of the existence of objective facts—and the objectivity to pursue them—we are underpinning a political environment where hostile camps hole up in their echo chambers of shared beliefs and simply shout at each other. And while I know that we, as writing instructors, can't end that—any more than we can come up with a final refutation of Kantian and poststructuralist subjectivism—if we really want to do our bit to resist the current climate of "fake news" claims we should be encouraging our students to see the dialectic of subjectivity and objectivity, the complex ways in which the two can complement each other. It isn't easy, and there can be no easy formula for doing so, but simply denigrating objectivity to our students is not going to help us, or them.

 

 

Jack Solomon

007

Posted by Jack Solomon, Jun 1, 2017

Aught aught seven.  You already know what the topic of this blog is going to be on the basis of this simple combination of numbers: who else but James Bond, spy fiction's most popular secret agent, whose cinematic franchise could make even Batman green with envy.  And you also have probably already guessed the occasion for this blog: Sir Roger Moore, that most prolific of the Bond avatars, has finally gone to that special operations room in the sky.

 

But this blog isn't a eulogy; it's a semiotic analysis—not of the undying Bond himself, but of the way he has been portrayed through the years. 

 

So many actors have played Bond since his appearance in the guise of Sean Connery in 1962 (forever my personal favorite, and only, Bond: but that's not semiotics) that it would take quite an essay to analyze all of them.  But I'm only concerned here with two of them: Roger Moore and Daniel Craig, whose portrayals of the master spy offer a perfect object lesson in the way that a semiotic analysis works.

 

Here's how:  as I cannot note often enough, a semiotic analysis involves the situating of your topic in a system of associations and differences—that is, with those phenomena with which it bears a relationship of both similarity and contrast.  As portrayers of the same fictional character, then, Roger Moore and Daniel Craig belong to such a system, and they have, of course, a lot in common: good looks, suavity, fearlessness, and a certain essential (hard to define) Britishness (which is why, I suppose, David Niven—that most British of Brit actors—was cast, in a spoof of what is already a spoof, as Sir James Bond in 1967).  But there is also a striking, and critical, difference:  Moore played Bond with a creamy smoothness, as well as a sort of Brechtian "don't take any of this too seriously" inflection; Craig, in contrast, gets down and dirty, a bit worn out, a lot more mortal.  Taken by itself, of course, this might only signify the difference between two thespian interpretations of the same character, and thus nothing of much cultural significance at all.  But if we enlarge the system in which James Bond signifies, a larger meaning appears after all.

 

So let's now look at some other entertainment franchises involving superheroes (and James Bond has a lot of superhero DNA in him).  Start with Batman, and Adam West.  In his own way, West was to Batman as Don Adams was to James Bond, and as James Bond was to, well, real British secret agents in the post-World War II era—which is to say, all spoof.  Indeed, West's take on the Caped Crusader was so devastating that it wasn't until 1989 that the character returned to the silver screen in Tim Burton's Batman, which completely rewrote the script to present the Frank Miller-inspired sturm-und-drang Batman that has provided the foundation for all of the Batmen we have seen ever since.

 

Then there's Superman, and the matchup between George Reeves and Henry Cavill.  Here the suit alone tells the story: from Reeves's sky blue costume to Cavill's blue-black armor, something has changed.  The mood is much darker, more violent, and the Man of Steel himself is no longer a simple champion of Truth, Justice, and the American Way.

 

The critical difference between Bonds, Batmen, and Supermen can be interpreted in three ways.  First, of course, the shift reveals the way that our cultural mood has darkened considerably over the years (Deadpool really makes the point), and our cartoon heroes (both literally and figuratively) have taken on the emotional coloration of our times.  Audiences have no interest in chirpy superheroes, nor in petty crimes and restrained violence: it's all Armageddon and Apocalypse Now.  Similarly, disillusioned (not to say, cynical) viewers will no longer accept pristine-pure heroes: the Man of Steel must have Feet of Clay; the Dark Knight must have Dark Nights.  But, perhaps most profoundly, what has also changed is the social status of the superhero (or super spy) himself, from a minor, rather marginal character who isn't intended to be taken very seriously, to a fully-fledged tragic hero who must bear the burden of our doubts and disillusionments on his well-sculpted shoulders.  Move over Hamlet, here comes Batman.

 

The Marxist cultural critic Lucien Goldmann once proposed that a society can be known by its "high" art.  Perhaps this was once true, but no longer.  To know ourselves we have to look at our popular culture.  Daniel Craig has it right:  we're getting a bit worn out; we're beginning to lose; the smooth road has gotten rather rough. 

 

James Bond is us.

 

Jack Solomon

Unintended Consequences

Posted by Jack Solomon, May 18, 2017

It could be argued that the biggest popular cultural phenomenon of our era has been the advent of digital technology and the Internet—a techno-cultural intervention at least as profound as television, in its time, and cinema.  To adapt the old McLuhan phrase from the pre-digital age, here the medium is indeed both message and massage, and there is no limit to the number of analyses of just what that message is.  But there is one angle on the significance of the Net that, while not entirely ignored, could use some deeper exploration, and that is the effect that it has had on the socio-economic and political situation in America today.

Timothy B. Lee's article, "Pokemon Go is Everything that is Wrong with Capitalism" (which will appear in the 9th edition of Signs of Life in the U.S.A.), does a good job of showing how the economics of the digital explosion have redistributed American wealth into a small number of prosperous enclaves—like California's Silicon Valley and Silicon Beach, along with Seattle and Boston-Cambridge—at the expense of much of the rest of the country.  Languishing at the margins of the new economy, such regions (which comprise most of the Midwest and the South) have stagnated—an entirely unintended postindustrial consequence that goes a long way towards explaining the popularity of Donald Trump in regions that were once considered safely Democratic strongholds, like Michigan and Wisconsin.  And so it is especially ironic that Donald Trump himself makes such use of digital social media (especially Twitter, of course) to build and maintain his power base.

But there has been another, related effect that has received rather less attention.  This is the socio-economic effect that the digital era has had on those places where the new economy has taken hold.  I am particularly sensitive to this because I grew up in the San Francisco Bay area and now live in Southern California.  The inflation that has been experienced in such areas—especially with respect to housing—is rendering it increasingly impossible for anyone but high-income people to live there (this isn't whining: I purchased my present home 28 years ago and live in the sort of setting I prefer, but I would hate to be house hunting in my area today).  The result can be seen in the way that traditionally low-income neighborhoods in, say, San Francisco and Venice are being transformed: young software engineers who want to live in the city and bicycle to work move into the last areas where rents are affordable, driving rents up astronomically, so that soon these will no longer be low-income neighborhoods at all.

 

It is important for me to say that none of this was intended, and no individuals should be blamed (though a lot of such people are being blamed anyway).  Young men and women who have worked hard to get their technological training—and simply want to live decent lives in which they can demonstrate their dedication to sustainability by choosing to live where they will not have to rely on their cars to get to work—are not culpable.  But the fact is that, whether we are looking at urban, suburban, or exurban neighborhoods anywhere in the vicinity of the great digital economic hubs, there is no place anymore for anyone but the upper-middle class, or for those who already own homes there or are protected by rent control (an idea whose day is passing, by the way, under the same inflationary pressures).

 

It is also important for me to say that I cannot think of any solution to the problem.  To use that rather dismal verbal shoulder shrug, it is what it is.  If I had children of my own (I don't), I would feel compelled (with great reluctance) to tell them that if they want to live in a reasonably secure and pleasant manner, they are going to have to make plans to pursue high-paying careers—not to be rich but simply to be able to live in the middle class.  And that means, in all probability, STEM-related careers (including medicine), now that the Law (that economic mainstay of my generation of Humanities majors) has ceased to be a reliable escape hatch into the upper-middle class.

 

That isn't the fault of the digital era, but it is a consequence of it, and we mustn't try to conceal that fact.

 

In the early years of the Internet, one of the most commonly heard slogans of the time was, "information wants to be free."  This ringing affirmation of the uninhibited flow of speech, knowledge, and news was one of the grounding values of that heady era when the Net was known as the "electronic frontier," and was regarded as an unfenced "information superhighway."  Those were the days when the web log (better known in its shorthand form as the "blog") was born, and the opportunities for virtually unfettered communication opened up in ways that the world had never experienced before.

 

That was more than twenty years ago now, and while a superficial glance at things would seem to indicate that nothing has really changed, a closer look reveals something else entirely: deep down, the Internet has been fenced, and the superhighway is becoming a toll road.

 

To see how, we can consider the history of the blog itself.  Yes, blogs still exist, but they have often morphed into what were once called "editorials," as online newspapers slap the label onto the writings of pundits and even those of news feature writers.  What you are reading right now is called a "blog," though it is really a semi-formal essay devoted to professional musings and advice rather than some sort of online diary or journal.  Blogs that hew to the original line of being personal and unrestricted communiques to the world still exist, of course, on easy-to-use platforms like WordPress, but most have been abandoned, their last posts dated years ago.

 

Where has everybody gone?  Well, to places like Facebook, of course, or Instagram, or Reddit, or whatever's hot at the moment.  But this is not a mere migration from one lane of the information superhighway to another; it is an exit to a toll booth, beyond which some of us cannot go, not because we cannot afford the cost (the toll is not paid in dollars), but because we are unwilling to make ourselves the commodity that "monetizes" what now should be called the "electronic data mine."

 

Thus, personal blogs that I used to follow, because I was interested in what I learned about their writers, have fallen fallow as those writers moved on to Facebook.  For a long time, some such pages could still be accessed by the likes of me if their authors chose to make them public, but they have now all been privatized by Facebook itself.  When I try to visit even the pages of public organizations, a moving barrier fills my screen, ordering me to open an account.  A free account, of course: all I have to do is sell whatever last shred of privacy I have left in order to sign on.

 

Yes, I know that Google is following me, even if I am not using its search engine: it gets me when I visit a site.  But signing on to Facebook (Google too, of course) involves an even deeper surrender of privacy.  This is demonstrated by the fact that Facebook apparently cannot get enough data on me simply by noting that I have visited one of its subscribers' pages.  And I am not willing to let Facebook have whatever extra information on me it wants.

 

I realize that I may sound here like someone who is demanding something for free.  I don't mean to sound like that: I realize that the Internet, like commercial television, has to be paid for somehow.  But I'd rather watch an advertisement (indeed, the ads are often better than the programs) to pay for my access than hand corporations like Facebook private information that they will sell to anyone who is willing to pay for it.  And I mean anyone, as one of the new readings in the just-completed 9th edition of Signs of Life in the U.S.A. (with a publication date of November 2017) reveals: Ronald J. Deibert's "Black Code: Surveillance, Privacy, and the Dark Side of the Internet."

 

Not that I am missing much, I think.  The thoughtful blogs that folks used to write have vanished into Facebook personal news bulletins—more like tweets and Instagram posts than developed conversations.  It is not unlike what has happened to email, which, I gather, is very uncool these days.  Much better to text—a non-discursive form of shorthand which, paradoxically, one does have to pay for in hard cash.

Jack Solomon

The Pepsi Consternation

Posted by Jack Solomon Expert Apr 20, 2017

Some ads are born controversial, some ads achieve controversy, and some ads have controversy thrust upon them.  But in the case of the infamous Kendall Jenner Pepsi ad, we might say that this one achieved all three at once, and if you are looking for an introductory-level lesson on popular cultural semiotics, you couldn't find a better candidate for analysis than this.

 

There are a number of reasons why the Pepsi/Jenner ad is such a good topic for an introduction to pop cultural semiotics.  First, pretty much everyone knows about it, and though it was yanked shortly after its premiere, it will be available for viewing for years to come, and the dust that it raised will not be settling soon.  This one is virtually guaranteed to have legs.

 

Second, the fact that so many people responded immediately to the ad with what amounts to a semiotic analysis of it demonstrates that cultural semiotics is not some sort of academic conspiracy designed to "read things into" harmlessly insignificant popular cultural artifacts.  All over America, people who may never have heard the word "semiotics" instantly performed sophisticated analyses of the Pepsi ad to point out in detail what was wrong with it—my favorite example is the reviewer who noted how Kendall Jenner thrusts her blonde wig into the hands of a black assistant without even looking at the woman as she (Jenner) heads off to join the march.  The SNL takedown alone is priceless.

 

I hardly need to repeat all the details of those analyses here: that the ad was "tone deaf"; that it was co-opting the Black Lives Matter movement in order to sell soda (Thomas Frank would say that the ad was a perfect example of the "commodification of dissent"); that it managed to tokenize non-whites while putting a white celebrity at the center of attention.  It's all there, and, all in all, I can't think of a better exercise than to play the ad in class and go through it with a fine-tooth comb to see just what it was doing, and why it failed so badly.

 

Just to offer some somewhat less obvious things to consider while analyzing this ad, I would note, first, that it can be included in an advertising system that contains Coca-Cola's famous "I'd like to teach the world to sing" commercial from 1971.  Pepsi's ad was clearly created in the same spirit, but its abject failure marks a critical difference that bears further attention.  Now, 2017 America, like the America of 1971, is in the midst of widespread, and often bitter, cultural and political conflict, so one can't simply appeal to more innocent times to explain the different response to Pepsi's attempt at selling soda by looking culturally forward and hip to the moment.  But I do think that people are much more alert to media semiotics today than they were then, and thus more able to spot what Pepsi was trying to do.  Probably more importantly, the Coke ad didn't pretend to stage a street demonstration; it put together its own event (pseudo-event, I should say), which, though smarmy, made its own direct statement without the use of celebrities.  It wasn't authentic, but it was a lot less phony than the Pepsi ad.  That may have been part of the difference in reactions, too.

 

But the key difference, I believe, was the use of an already somewhat dubious celebrity in the Pepsi ad (Kendall Jenner belongs to an ever-growing line of reality-TV-created figures who are "famous for being famous"), one whom its creators (mistakenly) believed would be immediately embraced by their target audience of millennials.  Indeed, that is the narrative line that the ad assumes, which, in brief, runs like this: as a large crowd of young protesters (complete with an electric guitar-and-cello-backed band—with break dancers!) marches through urban streets in protest of some unidentified cause, glamorous model Kendall Jenner (whom the ad's audience is expected to recognize) is working a fashion shoot, wearing a blonde wig, stiletto heels, and a lot of makeup.  As the marchers walk past her, she looks troubled, and then decides to ditch the shoot—doffing her wig, wiping off her lipstick, and somehow (somehow!) changing into blue jeans and a denim jacket—to join in.  She is immediately made the center of the whole thing, with all the marchers smiling at her, and then going crazy with joy when she hands a Pepsi to a young cop assigned to riot duty (where's his armor, helmet, and facemask?), who accepts it and takes a drink.

 

The whole thing reminds me of an old John Lennon music video that shows John and Yoko leading some sort of protest march, in which it is clear that the only thing being demonstrated is the star power of John Lennon.  Now, the Lennon footage may or may not have been from a real march, but in creating a wholly bogus march for Kendall Jenner (who is hardly known for her social activism), what the Pepsi ad is really saying (contrary to its publicity department's frantic, and ultimately futile, attempts to defend the ad as a fine statement of "global" consciousness) is that what matters in America is celebrity power and wealth.  Thus, there's a good reason why the ad's critics are focusing on Jenner as well as Pepsi: the ad is as much about her as it is about soda pop.  Someone in marketing presumed that millennials (who have been product-branded from birth) wouldn't notice the implications of that.  It is thus with some satisfaction that I can see that most millennials did notice (though there are a surprising number of YouTube comments insisting that there is nothing wrong with the ad).  And that may be the most significant thing of all.

 

Jack Solomon

Popular Classics

Posted by Jack Solomon Expert Apr 6, 2017

Emily Bronte would have loved Game of Thrones.

 

No, this isn't going to be another blog post on the HBO smash hit series; rather, I would like to share some of my thoughts upon my recent rereading, purely for my own pleasure, of Bronte's weird classic, Wuthering Heights—thoughts which happen to have a significant bearing upon teaching popular cultural semiotics.

 

The foremost point to raise in this regard is that, in spite of its long enshrinement in America's high school curriculum, Wuthering Heights was not written to be studied in schools: it was written to be entertaining—to its author as well as to its readers—for, after all, Emily Bronte had been writing to entertain herself and her sisters and brother since childhood.

 

More importantly, as a novel bearing the influence of everything from the Gothic literary tradition to the revenge drama to the star-crossed romance, Wuthering Heights is there to entertain, not mean.  This is where generations of literary critics striving to figure out what Bronte could possibly be getting at, and who (or what) Heathcliff is supposed to be, are missing the point. Wuthering Heights, like the movie Casablanca in Umberto Eco's estimation, is an absolute hodgepodge of often-conflicting literary cliches—a text, as Eco puts it, where "the cliches are having a ball."  And that is what most really popular stories manage to do.

 

How do we know that Wuthering Heights is popular, and not merely fodder for schoolroom force-feeding?  Let's start with the fact that some forty (yes, forty, though it's hard to keep a precise count) movies, TV dramas, operas, and other assorted adaptations have been made of the enigmatic novel over the years, not to mention the biopics about the Brontes themselves that continue to be churned out—most recently the 2016/2017 BBC/PBS production To Walk Invisible.

 

How do we know that it is a cornucopia of cliches?  Well, we can start with Emily Bronte's take on the star-crossed lovers theme, which puts Heathcliff and Catherine Earnshaw in the Romeo and Juliet predicament.  But it doesn't quite feel like Romeo and Juliet because of Heathcliff's absolute ferocity.  This is where the revenge theme comes in.  There is not a little of Hamlet in Heathcliff, and there is probably a lot of the Count of Monte Cristo (Emily Bronte could read French, and Dumas' novel was published in 1844-45—in time for Bronte to have read it, or at least known of it, before writing her own novel).  This is one reason why Heathcliff is such a mystery: he embodies two very different narrative traditions, that of the revenge hero and that of the romantic hero.  Trying to reconcile these traditions is not only a hopeless task for critics; it appears to have overwhelmed Bronte herself, who, just as Heathcliff is about to perfect his decades-in-the-making revenge on the Lintons and the Earnshaws, suddenly decides to call it a day and have Heathcliff kill himself (like a very belated Romeo) only pages from the conclusion of the story, in one of the worst-prepared-for denouements in literary history.

 

But let's not forget the ghost story element.  Like The Turn of the Screw a generation later (and James may well have gotten the idea from Bronte), Wuthering Heights is a ghost story, or not, because there may be no ghosts at all, only Heathcliff's feverish psychological projections.  But even as we ponder the ghost element (or lack thereof) in Wuthering Heights, there is the wholly Gothic ghoulishness of Heathcliff, which puts him in the class not only of vampires (Bronte herself teases us with that possibility) but of the necrophilic monk Ambrosio in that all-time 18th-century best seller, The Monk.

 

Then there's the way that Wuthering Heights eventually employs one of the most common conventions of the entire English novelistic tradition: the actual, and symbolic, marriage that reconciles the fundamental contradictions that the novel dramatizes.  Indeed, one wonders whether Hawthorne's The House of the Seven Gables (1851) owes something to Emily Bronte here, though Bronte hardly got there first.

 

Finally, there is the character of Catherine Earnshaw Linton, which may be the most popular element of all in the novel today.  A likely projection of something of Emily Bronte herself, Catherine is a strong-willed, beautiful woman with masculine as well as feminine characteristics, one who may well prefigure the ever-popular Scarlett O'Hara.  Something of an archetype of the emancipated woman, Catherine, to adapt an old New Critical slogan, is there to be, not mean.  She doesn't point to a moral: she just is, and readers love her for it.

 

See what I mean?  Wuthering Heights is simply teeming with literary formulae.  And so, just as with any artifact of popular culture whose primary purpose is to entertain, our best approach to it is not to ask what it means but, instead, to ask what it is in all these conventions and cliches that has proven so entertaining, generation after generation, and what that says about the audience (and culture) that is entertained.

 

I won't attempt that analysis now.  Perhaps I'll come back to it some time.  But my point here is that by studying "literature," we often lose track of the role that entertainment plays in literary production, just as in enjoying entertainments we often lose track of the significance of that which is entertaining in entertainment.  Popular cultural semiotics is, accordingly, not only something for self-declared "mass cultural" entertainments: it can illuminate what we call "the classics" as well.