
Category Archives: Communication

Remembering Sam Becker and university citizenship

Sam Becker, for whom the University of Iowa Department of Communication Studies building is named, and whose six-decade career was highly accomplished, passed away on November 8.  While I was a doctoral student at Iowa in the 1990’s, Becker was already retired but still ever-present, and by the sheer randomness of graduate student office assignment two colleagues and I ended up right across the hall from his small retirement office.  He was often available for conversation and also extraordinarily productive, a professor who gave the lie to the all-too-popular idea, currently dominant in the humanities and some of the social sciences, that the only way to get real research done is to work in seclusion, either behind a closed office door or, better still, from home.  Looking down hallways of closed faculty office doors, and I mean this as no insult since Iowa is typical, I think, of today’s academic culture, I was always struck by the fact that the most open and accessible professors – Gronbeck, Ochs, Becker – were also among the most research-productive.

By my time in the program, Dr. Becker was only occasionally teaching, but he taught one of my very first classes, a one-credit-hour professionalization seminar that met, as I recall it, for only about 45 or 50 minutes a week.  We were coached on the protocols of academic citation, taught the mechanics of the main communication associations (Sam’s lifelong commitment to the National Communication Association meant we heard most about that one), and we talked about how one best organizes one’s research work.  I believe it was there that he told the story of how he first got hooked on academic scholarship in communication.  He was a young undergraduate at the university in the 1940’s, and was encouraged to turn a classroom paper into a publication, which he landed in one of the leading outlets for communication research.  Over the next sixty years he produced more than 100 peer-reviewed research essays, advised nearly 70 doctoral dissertations, and won wide acclaim for his work (NCA president, NCA Distinguished Scholar, recipient of the association’s first mentorship award, and many more, including a number of honors on the Iowa campus that include the rare privilege of a namesake building).

Professor Becker’s death evokes in me, against my better judgment perhaps, a nostalgic desire for a sort of academic culture that likely no longer exists.  The temptation to nostalgia when it comes to the academic past is fraught.  Even as the American public universities threw open their doors and programs in the 1960’s and 70’s, they were far from perfect, and the political constraints under which professors work today are in some respects incomparably different.  And universities look a lot different through the eyes of a professor than they do through the eyes of a graduate student.  It is easier to imagine public university work as a sort of exotic salon culture, the pure life of the mind where professors think great thoughts in communion with their colleagues, when one’s schedule, overloaded as graduate student life always is, consists of one intellectual interaction after another, seminar to seminar and great book to great book.  The academic life performed for graduate students, indeed for all students, is simply not the same as the one lived in a profession as dominated by committee meetings as by discussions of big ideas.  Comparisons between past and present too often fail.

But my nostalgia lingers.

Sam Becker represented a style of academic life, and an extraordinary commitment to building local programmatic excellence, that is harder to find today (and in my world so infrequent as to be essentially nonexistent), living as we do at a time when many professors understandably draw their main intellectual sustenance from longer-distance networking – social media, blog- and listserv-centered – and find themselves too informationally enriched (or, alternatively, overstimulated) and even overwhelmed by those gushing sources to desire anything but minimal face-to-face engagement with on-campus colleagues.  Part of this, I believe, is the characteristic affinity of rather shy, life-of-the-mind-driven academics for the more controllable interactions of online and distance encounter; it is easier to present a polished, more clever persona through Facebook and blogging than in the heat of a tedious faculty meeting, and so many gravitate to the New Comfort Zones of virtual engagement.

Entire academic generations have been mentored to the view that their most assured path to professional success is isolation – keep your head down, don’t overcommit, set up a home office and be disciplined about working there, spend as few hours on campus as you can, because if word gets out that you’re available then colleagues and students will eat you alive and rob you of all your productive energy.  The advice is self-reinforcing: when one resolves to spend only ten hours a week on campus then, not surprisingly, those ten hours quickly fill to capacity as students figure out that these are the only opportunities for real access not coordinated by email.  The approach affords little time to linger, for lingering is time-wasting.  “Sorry!  I’m drowning; gotta run!” becomes an easy refrain.

All this is understandable and not unreasonable.  I’m as prone to the temptations as anyone.  The seductive blend of intellectual (over)stimulation, where ideas can be consumed at any pace one prefers, and staged (or scripted) encounters managed from the comfort of the computer desk chair, can simply feel more enriching than sitting through a long research presentation or a comprehensive examination defense.

Donovan Ochs, a Becker colleague at Iowa, and Sam Becker, both veterans of military service (I have the sense that had something to do with it), put in pretty regular daily schedules.  Ochs, with whom I had the chance to study classical rhetoric and do an independent study on Aristotle, often put in 8-to-5 days.  As I recall it Donovan wore a tie every day, even in the 1990’s when few others did, and his door was always open apart from times when he was in private meetings or teaching.  When I asked him once how he got any work done under those conditions, he was plainly surprised at the question, and his reply – what I do here is my work – led to wider conversations about academic life.  Ochs noted that an open-door policy did not prevent his research productivity, since the mornings typically gave him long undisturbed stretches to write and think.  His door wasn’t open to enable empty chit-chat – he was always profoundly encouraging but kept conversations mainly work-focused.  And because he worked, seriously worked, for the duration of the regular day, he avoided the guilt so many of us feel at thinking we should be working at all hours of the night.  I always had the sense Ochs went home with a clean conscience – he had a life apart from Aristotle, a healthy set of diverse family and life interests, and retirement presented no apparent trauma for him.

It is simply impossible to generalize about the state of faculty engagement given the diversity of campus environments, and unproductive even to try; there remain, of course, more pastoral campus settings where relatively smaller student cohorts and perhaps better-supported faculty lives enable the creation of close intellectual community that at some level still adheres to the wider mythology of enclaved campus life.  But life for students and professors at the big state universities, and I suspect even in places where campus life is residential and intensively communal, is changing.  If the National Survey of Student Engagement is to be trusted, students report less frequent conversations with their classroom professors outside of regular class times.  Michael Crow, the president of Arizona State University and a key (and controversial) national advocate for delivering high-quality, research-intensive educational outcomes to very large numbers of enrolled students (ASU is now one of the nation’s largest universities), often repeats the idea that demographic surges require a model of education that is not numerically exclusive (rejecting the backward logic by which the more people a school turns away, the better its reputation).  If public state institutions cannot find ways to serve well the students who are academically gifted but not financially or intellectually elite enough to enroll at the most exclusive private schools, Crow often says, we’ll end up with a two-tiered system where the rich kids are educated by professors and the rest are educated by computers.

The truth is that the big public universities are fast veering in the latter direction, not in the sense that MOOCs educate our students but that the experience, especially in the first couple of years, can be awfully impersonal, if not on account of large classes then because so many early classes are taught by graduate students and temporary faculty whose good teaching may nonetheless insufficiently convey a sense of place and local intellectual tradition.  The wider incentive structures are too often negative:  no pay raises, the demoralized sense that follows from more frequently expressed taxpayer hostility to higher education, the pressures to win grants and relentlessly publish papers, accountability pressures that seem to require more and more administrative meetings, the idea that one must always stay on the job market or likely be unable to get a pay raise here, the growing number of students and in some states the expectation of higher instructional workloads, a tendency to think of day-to-day intellectual connectivity as simply more uncompensated service.  All this lures professors away from the committed work of building local loyalty and into more defensive practices that feel like simple self-preservation but are also, I suspect, self-defeating, because they only accelerate a vicious cycle of brief and highly focused teaching and mentorship alternating with long stretches away.  Participate in a sustained reading group?  “Sorry, I just don’t have any time for that.”  Organize a campus colloquium, film or lecture series?  “Ditto.  And since everyone else is overwhelmed too, what would be the point?  No one would come.”  Did you see the lead essay in the new QJS?  I’m curious what you thought.  “Gosh, I’m months behind on that kind of reading – all my energy has to go to my book.”  What energized you at the last big national conference?  “Oh, couldn’t make it – and how could I when the university gives so little for professional development support?”

The picture I’ve just drawn is exaggerated, thankfully, but I suspect that even as caricature it retains a certain familiarity.  Fortunately the energetic participation new faculty bring to academic programs is inspirational, and idealism trumps low morale for the many staff and faculty who sustain both distance-networked and local connectivity.  Whatever the incentives, every department includes professors at all ranks who pour their energies into building real collective intellectual communities.  It might also be added that the struggle I’m describing may be most accentuated in the humanities, where the norms of academic research are only slowly shifting away from the lone-professor-writing-her-latest-book to practices of team-based interdisciplinarity.  The beneficial consequences of globally networked disciplinary conversation arose for important reasons – the generation of new knowledge is more dynamic than ever before in human history, even despite data suggesting that (at least in communication) the research work is increasingly concentrated among smaller numbers of publishing faculty (a recent analysis in speech communication showed that something like 85% of all professors had not published anything or received external support for their projects in the previous five years).  But I wonder whether the number of highly productive and communally engaged scholars can be sustained when their morale is under assault too, because the dynamics induced by that understandable mentoring advice and by reduced support bring into ever-starker relief the old 20/80 rule, where 20% do 80% of the work.  As 20/80 becomes 10/90, this is how intellectual dynamism, and universities, die.

Sam Becker’s career suggests a thought experiment that asks whether the considerable benefits of 21st century intellectual life can be improved by some integration of the professional practices of the 20th.  I want to hypothesize that what so often seems like the depressing path of today’s stressed system of public higher education need not necessarily be accepted as a New Normal.  If public higher education is to retain its historical vitality, changes will have to happen on many fronts.  Taxpayers and legislators will need to be persuaded of public education’s value.  Reasonable systems of accountability will need to document the outcomes of pedagogical encounter, I know.  But there is a role for us faculty to play as well, and Sam Becker’s professional life suggests some of the possibilities.  Becker knew that good and committed scholars who simply show up day after day and make themselves open to engaged discussions with others, both online and in person, actually attract other smart students and teachers to join in ways that energize the common enterprise, and that calling it quits at the end of the workday creates intellectual sustainability too, as people find time away every single day to recharge.  He saw, because he so often created it himself, the vital and passionate sense of connection that emerges as intelligent participants in the educational experience talk to each other and rev up excitement about ideas one discussion at a time.  He realized that when everyone is present and engaged in program building, service work is made more manageable by division among larger numbers of connected co-workers.  I cannot prove it, but my suspicion is that the great intellectually vital centers of communication scholarship were (are) built more fully by acts of local loyalism than by enterprising free-agent academic nomadism.

The key is not simply hallway cultures of greater presence; it also necessarily entails high degrees of intellectual openness, a refusal to see the scholarly enterprise as ideational warfare or zero-sum, even in contexts where resourcing is finite.  And this was another of the Becker legacies.  During his five decades in the department, communication studies nurtured, developed, and then in some cases spun off new academic units, including theater and film.  Those discussions were not always smooth or even friendly, and Becker had strong opinions.  But what he always preserved, as I saw it, was a culture of openness to new and productive work – it led him to shift over his own career from interests in quantitative social science to qualitative research in the British cultural studies tradition and then back again.  No department is ever entirely free of intellectual entanglements – smart people will tend always to prefer their own lines of inquiry and can too easily fail to see the value of the efforts undertaken by others.  But so long as there are some Beckers around, the inclinations to either/or warfare that have consumed whole programs in acrimony can be channeled productively into both/and collective accomplishment.

Fantasies, perhaps.  But these are ideas whose lifelong embodiment in one Samuel L. Becker – Eagle Scout, decorated war hero, “Mr. University of Iowa,” champion of social justice and the idea that public education enriches us all, extraordinary teacher and scholar and administrator – remain for me compelling, even given the New Normals of this new century.

When the map seems larger than the territory

On one of the websites for students of rhetorical theory, conversation has recently focused on the status of psychoanalytic criticism and the question of whether its insights are being willfully ignored by the larger field.  Josh Gunn kicked off the discussion, in part, by noting that despite recent interest, “rhetorical theory — at least on the communication [studies] side – is hampered by a certain blind spot caused by the avoidance of psychoanalysis, and more specifically, the inadmissibility of the category of the unconscious.”  Gunn rightly wonders at the absurdity of this given how many revered figures in rhetorical theory have been explicitly influenced by or have reacted against Freud, Lacan, Klein, Jung and others.

In the ensuing back-and-forth a range of perspectives have been expressed:  some writing to agree that psychoanalysis does seem to provoke unique antipathy from students assigned to encounter it, others speculating on the causes (is it because communication was more a journal than a book field?  did the discipline’s work in response to behaviorism inoculate scholars against its insights?  has psychoanalysis been more widely tainted, thus deterring investigation from the outset?), and so on.  Perhaps not surprisingly, some of the explanations veer to the therapeutic – several responses convey anecdotes of a visceral (and by implication anti-intellectual) refusal to take psychoanalytic work seriously:  sneering senior scholars, wink-wink-nudge-nudge sorts of boundary policing behavior, and the (not-so-)subtle steering of graduate students away from the theoretical insights of psychoanalysis.

As I’ve been thinking about all this I don’t find myself in particular disagreement except that I don’t think this phenomenon is unique to psychoanalysis.  Rather, I think what we are seeing are the ongoing consequences of theoretical hyper-specialization, where these are simply several of many local occurrences.  By contrast to those who continue to announce the Death of Theory, it seems to me that we are still working to live with the consequences of having at our disposal So Many Seriously Elaborated Theories, which in turn gives rise to a mostly frustrating situation where the maps seem richer, or at least larger, than the territory.

I do not say this to endorse hostility to the elaboration of theoretical sophistication, but simply to note how glutted we are with it.  I think the symptoms of this, at least in communication studies, are everywhere:  A more ready willingness to abandon the mega-conferences in preference for more intimate niche meetings where one can burrow in and keep up.  The tendency to assign secondary sources even in doctoral seminars, or, when primary works are struggled with, to isolate them from the conversations in which they participated (which results in an alternative tendency to see originary controversies, such as the big arguments between Foucault and Sartre, or Fraser and Habermas, as pretty much settled history to be filed in the same category with “the earth is round”).  A growing impatience with arm’s-length review, peer review, and other gate-keeping work undertaken to make comprehensible the incoming ocean of new material, and a more widespread corrosive cynicism about the larger enterprise.  The increasing frequency of major conference presentations, even by serious senior scholars, that don’t seem to say much of anything new but mostly offer a repetition of the theoretically same.  An inclination to see friendly work as fully appreciating the rich nuance of my own tradition, and hostile work as reducing my tradition to caricature.  A wider tendency to see the dissertation not as evidence of a student’s ability to undertake a serious research project, but as an indication of the project whose trajectory will forever define a career.

Another key marker is the level of defensiveness, sometimes veering into animus, I hear often expressed by the advocates of every perspective who feel their work is under siege:  Marxist theory, argumentation studies, close textual analysis, historical/archival work, postcolonial and queer theory, cultural studies, feminist scholarship, and the list could be considerably lengthened.  All feel under attack, and to some extent all sustain intellectual solidarity by insisting enemies are at the gate.  And within these traditions fragmentation continues apace – a longstanding theme in a number of the convention conversations I hear is how scholars who for many years have labored to make visible the cultural contributions of gays and lesbians see themselves today as marginalized by queer theory, and how queer theory in turn seems to be marginalizing bisexual and transgendered approaches.  The theme is not limited to rhetorical studies but is more widely sensed across the broader range of communication scholarship:  the television studies people feel like they aren’t taken seriously, and so do the performance theorists, the cinema studies scholars, the interpersonal researchers, the quantoids, the public opinion theorists, those who first encountered communication through forensics or theater, the TV and film production faculty, ethnographers, organizational communication scholars, mass communication empiricists, public relations practitioners, and those who teach students for industry work.

As my career has taken me in the direction of administrative work, I see the same trends more widely as they shape conversations within the humanities and beyond.  When I first had the audacity, in a meeting of chairs from the full range of disciplines, to say that external resources are harder to find in the humanities – I thought everyone agreed with that – I was surprised that the most assertive push-back came from a colleague in biology, who proceeded to argue in detail his own relative deprivation within the wider university.  His case was not absurd:  it is hard to argue that anyone is properly supported in the modern public research university.

I don’t see this defensiveness as a reflection of bad faith or of animus.  For in a sense all of us are right – one does have to exercise eternal vigilance in defending one’s research perspective, because in a universe of so many well-elaborated accounts of human behavior the most likely danger is being forgotten or overshadowed given the broader cacophony.  Thus the paradox that while more journals are now published in the humanities than ever before, the individual researchers I talk with see fewer and fewer outlets available for their sort of work.  Or, to further mix the metaphors, there are not only more intellectual fortresses today, but they are better fortified against attack and protected against the wandering tourist and amateur dabbler than ever before.

It is true, I suppose, that within each theoretical community are some who treat, say, Anti-Oedipus or Lacan’s seminars or the Prison Notebooks or the Rhetoric as holy scripture.  But the issue is less that each theorist has induced a cult than that, in general, scholars who are otherwise persuaded they cannot possibly know every perspective well tend to stick with the one rich approach into which they were first acculturated.  And so what was and is seen by some as a sort of happy theoretical pluralism, a view still promoted by the wider impulses to boundary-cross and be interdisciplinary and all the rest, has devolved into a more frequently expressed surliness about colleagues who “won’t do the work to stay current,” a wider reliance on secondary sources like the Dummies guides and Cambridge Companions, the more frequent play (in responding to outsider critics) of the “you don’t appreciate the subtlety of my theory when it comes to ___” card, and an even more common resort by the basically friendly to the tactic of heavy-note-taking silence or the helpful “you should really read [insert my theorist],” or, more generally, “have you thought about this?” conference response or query.  One of the most common questions I hear my colleagues ask of one another is one I often ask myself:  “If you could recommend three or four short and accessible overviews to ____ that would help me get up to speed, what would you suggest?”  It’s asking for an academic life preserver.

Less of all this is sparked by ill will or ideological refusal than by the simple unwillingness to confess “I am unable to offer a thoughtful response to your read of Ranciere because I didn’t know he would be discussed today and so I didn’t have the chance to beef up on my Ranciere for Dummies, and because it takes every minute I have available for intellectual work just to keep up on my Burke.”  The eye-rolling response is thus sometimes less a reflection of substantively well-grounded opposition than the expression of a weirdly humble recognition of the game we think everyone is playing: the gotcha stratagem of “there s/he goes again, showing off everything s/he knows about Cicero.”  At a time when credible humanistic research is said to be impossible apart from mastery of all social theory, all of the philosophical and aesthetic traditions, (increasingly) the life sciences (cognitive theory, evolutionary psychology, accounts of chaos and networks and more), and the globalized set of artifacts that underwrite comparative work, the task seems overwhelming.

My point is not to be alarmist or defeatist about the enterprise.  Specialization is not new, and has elicited expressions of concern for generations.  To some extent the theoretical proliferation is self-correcting – people participating in a bounded academic conversation do move on, and not every carefully enunciated perspective finds a following.  There remain exceptionally skilled intellectuals who seem to know everything and who are apparently able to keep up with all the wider literatures.  And too often the expressed difficulties in “keeping up” exaggerate the challenge in an age when more resources than ever are available to enable one’s informational literacy, and when “I don’t have the time to understand [feminist] [critical race] [queer] theory” is a too-convenient excuse to ignore perspectives that elites brushed off even when Renaissance Giants Walked the Earth and only had to stay current on the sum of human knowledge contained in fifty books.

And because the challenges of surfing the sea of new literature and getting others interested in one’s work are by now so universal, I have little to add to the range of correctives on offer, all of them problematic.  The idea of reinstating a common canon holds little appeal, and for good reason.  Nor is announcing the Death of Theory, or insisting on the Priority of the Local or the Case, especially compelling.  My own preference, given a background in debating, is to “teach the controversies,” but that approach isn’t ideologically innocent either.  If book publishers survive, I think the impetus to anthologies that now characterizes cinema studies is likely to expand more widely within communication scholarship.  But there are dangers in too readily recommending hyper-specialization to doctoral students, in paper writing revved up out of fast tours of JSTOR and Project Muse, and in too quickly acceding to happy talk about theoretical pluralism.  Better, in our own intellectual labors, to insistently listen to and reach out to other perspectives and work like hell to keep up with the wider world of humanistic scholarship.

And sometimes, if only as a mechanism to preserve one’s sanity, a little eye rolling may also be in order.  Just keep it to yourself please.

The future of globalized literary history

A 2008 special issue of New Literary History (vol. 39) focuses on the future of literary history (and, relatedly, of comparative literary studies) in an age of globalization.  To some extent one can track the complicated history of World Literature through the early and influential essays of Rene Wellek, who advocated for comparative scholarship even as he warned against the dangers of investing disciplinary energy in the search for covering laws and causal relationships between literature and the wider society.  The titles of Wellek’s much-cited 1958 talk, “The Crisis of Comparative Literature,” and his 1973 essay “The Fall of Literary History,” convey some sense of his pessimism about the prospects for defensible work.

Of course the very term World Literature has to be carefully used, since one must always demarcate the multiple possibilities implied by the phrase.  Some use World Literature to reference all the literature produced in the world, some see it as referring to Kant and Goethe’s dream (Goethe in 1827:  “a universal world literature is in the process of being constituted”) of an international body of transcendently superb literature, and still others use it to reference those few novels that have found truly international fame.  And so some who are invested in comparative work today, often undertaken to throw American cultural productions into a wider perspective of circulation and resistance, prefer terms like transcultural literary history (Pettersson).  In the context of the theoretical care one must take even to begin this kind of work (the complications of which are unwittingly revealed in Walter Veit’s summation of Linda Hutcheon’s call for a “new history of literature” which “has to be constructed as a relational, contrapuntal, polycommunal, polyethnic, multiperspectival comparative history”), the project remains inherently appealing:  who would oppose the idea of research that induces cross-cultural sensitivity and understanding, even while recognizing its final impossibility?

Transcultural literary studies, after buzzing along for decades (at least since the 1950’s, when the International Comparative Literature Association first met to debate the potential for doing literary-historical work), has received new attention thanks to two much-discussed interventions:  Franco Moretti’s essay, “Conjectures on World Literature” (which forms the anchor for a 2004 anthology on Debating World Literature released by Verso and which formed a sort of introduction to his widely read book a year later), and Pascale Casanova’s The World Republic of Letters (trans. M.B. DeBevoise; Cambridge, Mass.:  Harvard UP, 2004).  Moretti’s work has gotten a lot of attention given his heretical view that the sheer quantity of the world’s literature, which escapes the possibilities of close textual analysis, now requires distant reading, which is to say sophisticated forms of macro-data analysis that can reveal patterns of novelistic diffusion worldwide.
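For readers curious what “distant reading” might look like in even its crudest form, here is a minimal sketch in Python; the corpus layout and keyword list are invented for illustration and carry no claim about Moretti’s actual methods or data.

# A deliberately minimal sketch of the "distant reading" idea: instead of closely
# interpreting any single novel, tally a feature across many texts and inspect the
# aggregate pattern.  The directory layout and keyword list are hypothetical.
from collections import Counter
from pathlib import Path

def feature_counts(corpus_dir: str, keywords: set[str]) -> Counter:
    """Count occurrences of the given keywords across every .txt file in corpus_dir."""
    counts: Counter = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        words = path.read_text(encoding="utf-8", errors="ignore").lower().split()
        counts.update(w for w in words if w in keywords)
    return counts

# Hypothetical usage: feature_counts("novels/", {"empire", "metropolis", "village"})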

But things get tricky fast.  Fredric Jameson, who leads off and who has long expressed skepticism about the work of literary historians (noting in an address to a 1984 Hong Kong conference on Rewriting Literary History that “few of us think of our work in terms of literary history,” and having subsequently called repeated attention to the essentially ahistorical nature of postmodernity), argues that the dialectical impulses of economic globalization simultaneously promise cultural liberation even as the economic chains are slowly tightened, and in ways that finally limit the range of cultural productions as well.  To be concrete, Jameson highlights how global capital appears to open all cultures to all populations, even as, over time, a shrinking number of transnational conglomerates end up ultimately stifling all but the handful of mainly English-language novels able to turn a profit.  He is especially keen on Michael Mann’s argument that the global economy is “encaging” – that is, as Jameson describes it, “the new global division of labor is” organized so that “at first it is useful for certain countries to specialize…. Today, however, when self-sufficiency is a thing of the past, and when no single country, no matter what its fertility, any longer feeds itself, it becomes clearer what this irreversibility means.  You cannot opt out of the international division of labor any longer” (376).

The cage ensnares more tightly – and not only because “smaller national publishers are absorbed into gigantic German or Spanish publishing empires,” but because a handful of mega-publishers end up publishing all the textbooks kids read even as budding authors everywhere are subtly persuaded to buy in because of their “instinctive desire to be read by the West and in particular in the United States and in the English language:  to be read and to be seen and observed by this particular Big Other” (377).  So what are literary historians to do that will not invariably make them simply complicit in all this?  Jameson, a little bizarrely I think, argues for a sort of criticism that imagines the world-making possibilities of novels-yet-unwritten-that-one-imagines-as-ultimately-failing-to-liberate.  This sort of creative criticism “raises the ante,” according to Jameson, because it helps its audiences recognize the actual “persistence, if insufficiently imagined and radicalized, of current stereotypes of literary history” (381).

Brian Stock, at the University of Toronto, reads the current scene from within the larger new traditions of developmental and cognitive psychology and cognitive neuroscience.  What work done in these areas suggests is that reading has a profound cognitive (and universal) influence on human beings, whose plastic minds are essentially reconfigured by repeated participation in practices of literacy.  As Stock sees it, “the only way in which reading can be related to the ubiquitous problem of globalization in communications, without running the risk of new types of intellectual colonization, is by demonstrating that it is in the genetic inheritance for interpreting language in its written or ideographic form that is the truly ‘global’ phenomenon, since it is potentially shared by everyone who can read…  [I]f this approach can be agreed upon, the natural partner of globalization will become a scientifically defended pluralism” (406).

Walter Veit, at Monash University, sees the interpretive key as residing in temporality, which can never be linguistically articulated (Paul Ricoeur:  “temporality cannot be spoken of in the direct discourse of phenomenology”) except in novelistic narrative, where the arc of the narrative makes some sense of time’s passage and where, following Hayden White, the linguistic operations of rhetorical tropes and figures provide metaphorical access to the otherwise inexpressible.  One is left with a more sanguine sense of the future within these terms:  both for an analysis of the multiple ways in which the world’s literatures construct time and its passing, and with respect to literary criticism, which is always embedded in the particular and always changing practices of its time and audiences.  Such a view is well supplemented by Nirvana Tanoukhi’s claim that the challenge of understanding transnational literature is also foundationally a question of scale and locale and spaces of production.

The work of literary history, and the conceptualization even of its very possibility, is, finally, a representative anecdote for the broader work of the humanities.  This is a theme apprehended both by Hayden White, who notes that the questions raised in the symposium reflect the larger conditions of historical knowledge as such, and by David Bleich, who observes the close affinity between the work of literary historians and the broader work of the university (where “scholars have always been involved in the mixing of societies, the sharing of languages and literatures, and the teaching of their findings and understandings,” pg. 497).  The university plays a culturally central role in translating other cultures (for students, for the audiences of its research) that is fraught with all the perils of the work of writing intercultural history – hubris, caricature, misapprehension.  But the effort to make sense of the wider world, however risky, is also indispensable, if only because the alternatives – unmitigated arrogance and blinkered ignorance – are so much worse.

An approaching Singularity?

When Ray Kurzweil published his bestseller, The Singularity is Near, in 2005, the skeptical response reverberated widely, but his track record of accurate predictions has been uncanny.  In the late 1980’s it was Kurzweil who anticipated that a computer could soon be programmed to defeat a human opponent in chess; by 1997 Deep Blue was beating Garry Kasparov.  His prediction that within several decades humans will regularly assimilate machines to the body seemed, as Michael Skapinker recently put it, “crazy,” “except that we are already introducing machines into our bodies.  Think of pacemakers – or the procedure for Parkinson’s disease that involves inserting wires into the brain and placing a battery pack in the chest to send electric impulses through them.”

Kurzweil obviously has something more dramatic in mind than pacemakers.  The term singularity describes both the center of a black hole, where the universe’s laws don’t hold, and that turning point in human history when the forward momentum of machine development (evolution?) will have accelerated so quickly as to outpace human brainpower and arguably human control.  For Kurzweil the potential implications are socially and scientifically transformational:  as Skapinker catalogs them, “We will be able to live far longer – long enough to be around for the technological revolution that will enable us to live forever.  We will be able to resist many of the diseases, such as cancer, that plague us now, and ally ourselves with digital versions of ourselves that will become increasingly more intelligent than we are.”

Kurzweil’s positions have attracted admirers and detractors.  Bill Gates seems to be an admirer (Kurzweil is “the best person I know at predicting the future of artificial intelligence”).  Others have criticized the claims as hopelessly exaggerated; Douglas Hofstadter admires elements of the work but has also said it presents something like a mix of fine food and “the craziest sort of dog excrement.”  A particular criticism is how much of Kurzweil’s claim rests on what critics call the “exponential growth fallacy.”  As Paul Davies put it in a review of The Singularity is Near:  “The key point about exponential growth is that it never lasts.  The conditions for runaway expansion are always peculiar and temporary.”  Kurzweil responds that the conditions for a computational explosion are essentially unique; as he put it in an interview:  “what we see actually in these information technologies is that the exponential growth associated with a particular paradigm… may come to an end, but that doesn’t stop the ongoing exponential progression of information technology – it just yields to another paradigm.”  Kurzweil’s projection of the trend lines has him predicting that by 2027, computers will surpass human intelligence, and by 2045 “strictly biological humans won’t be able to keep up” (qtd. in O’Keefe, pg. 62).
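To make the arithmetic behind such trend-line projections concrete, here is a minimal sketch of how a steadily doubling quantity crosses a fixed threshold; the baseline, threshold, and doubling period below are invented for illustration and are not Kurzweil’s actual figures or model.

import math

def years_until_threshold(baseline: float, target: float, doubling_years: float) -> float:
    """Years needed for a quantity doubling every `doubling_years` to grow from `baseline` to `target`."""
    return doubling_years * math.log2(target / baseline)

# Hypothetical numbers: a capacity of 1 unit in 2005, a "human level" threshold of
# 4,000 units, and a doubling time of two years put the crossing at roughly 2029.
print(2005 + years_until_threshold(baseline=1.0, target=4_000.0, doubling_years=2.0))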

Now Kurzweil has been named chancellor of a new Singularity University, coordinated by a partnership between NASA and Google.  The idea is simultaneously bizarre and compelling.  The institute is roughly modeled on the International Space University in Strasbourg, where the idea is to bring together Big Thinkers who can, by their interdisciplinary conversations and collaboration, tackle the impossible questions.  One wonders whether the main outcome will be real research or wannabe armchair metaphysical speculation – time will tell, of course.  NASA’s role seems to be simply that it has agreed to let the “university” rent space at its Ames Research Center facility at Moffett Field in California.  The money comes from Peter Diamandis (X Prize Foundation chair), Google co-founder Larry Page, Moses Znaimer (the media impresario), and tuition revenue (the nine-week program is charging $25,000, scholarships available).  With respect to the latter the odds seem promising – in only two days 600 potential students applied.

The conceptual issues surrounding talk of a Singularity go right to the heart of the humanistic disciplines, starting with the manner in which it complicates anew and at the outset what one means by the very term human.  The Kurzweil proposition forces the issue by postulating that the exponential rate of information growth and processing capacity will finally result in a transformational break.  When one considers the capacity of human beings to stay abreast of all human knowledge that characterized, say, the 13th century, when Europe’s largest library (housed at the Sorbonne) held only 1,338 volumes, and contrasts that with the difficulty one would encounter today in simply keeping up with, say, research on William Shakespeare or Abraham Lincoln, the age-old humanistic effort to induce practices of close reading and thoughtful contemplation can seem anachronistically naive.

One interesting approach for navigating these issues is suggested in a 2007 essay by Mikhail Epstein.  Epstein suggests that the main issue for the humanities lies less in the sheer quantity of information and its potentially infinite trajectory (where, as Kurzweil has implied, an ever-expanding computational mind finally brings order to the Universe) than in the already evident mismatch between the finite human mind and the accumulated informational inheritance of humanity.  Human beings live for a short period of time, and within the limited timeline of even a well-lived life, the amount of information one can absorb and put to good use will always be easily swamped by the accumulated knowledge of the centuries.  And this is a problem, moreover, that worsens with each generation.  Epstein argues that this results in an ongoing collective trauma, first explained by Marxist theory as inducing both vertigo and alienation, then by the existentialists as an inevitability of the human condition, and now by poststructuralists who (and Epstein concedes this is an oversimplification) take reality itself “as delusional, fabricated, or infinitely deferred” (19).  Epstein sees all this as evidencing the traumatizing incapacity of humans to comprehend in any detailed way their own collective history or thought.  This postmodern sensibility is revealed in such aesthetic traditions as Russian conceptualism, “which from the 1970s to the 1990s was occupied with cliches of totalitarian ideology,” and which “surfaced in the poetry and visual art of Russian postmodernism” in ways “insistently mechanical, distant, and insensitive” (21).  There and elsewhere, “the senses are overwhelmed with signs and images, but the intellect no longer admits and processes them” (22).

The problem to which Epstein calls attention – the growing gap between a given human and the total of humanity – is not necessarily solved by the now well-established traditions that have problematized the Enlightenment sense of a sovereign human.  In Epstein’s estimation, the now-pluralized sense of the human condition brought into being by multiculturalism has only accentuated the wider social trends to particularization and hyper-specialization:  the problem is that “individuals will continue to diversify and specialize:  they will narrow their scope until the words humans and humanity have almost nothing in common” (27).

The wider work on transhumanism and cyborg bodies reflects a longer tradition of engagement with the challenge posed by technological transformation and the possibilities it presents for physical reinvention.  At its best, and in contrast to the more culturally salient cyborg fantasies enacted by Star Trek and the Terminator movies, this work refuses the utopian insistence in some of the popular accounts that technology will fully eradicate disease, environmental risk, war, and death itself.  This can be accomplished by a range of strategies, one of which is to call attention to the essentially religious impulses in the work, all in line with long-standing traditions of intellectual utopianism that imagine wholesale transformation as an object to be greatly desired.  James Carey used to refer to America’s “secular religiosity,” and in doing so followed Lewis Mumford’s critique of the nation’s “mechano-idolatry” (qtd. in Dinerstein, pg. 569).  Among the cautionary lessons of such historical contextualization is the reminder of how often thinkers like Kurzweil present their liberatory and also monstrous fantasies as inevitabilities simply to be managed in the name of human betterment.

SOURCES:  Michael Skapinker, “Humanity 2.0:  Downsides of the Upgrade,” Financial Times, 10 February 2009, pg. 11; Mikhail Epstein, “Between Humanity and Human Beings:  Information Trauma and the Evolution of the Species,” Common Knowledge 13.1 (2007), pgs. 18-32; Paul Davies, “When Computers Take Over:  What If the Current Exponential Increase in Information-Processing Power Could Continue Unabated,” Nature 440 (23 March 2006); Brian O’Keefe, “Check One:  __ The Smartest, or __ The Nuttiest Futurist on Earth,” Fortune, 14 May 2007, pgs. 60-69; Myra Seaman, “Becoming More (Than) Human:  Affective Posthumanisms, Past and Future,” Journal of Narrative Theory 37.2 (Summer 2007), pgs. 246-275; Joel Dinerstein, “Technology and Its Discontents:  On the Verge of the Posthuman,” American Quarterly (2006), pgs. 569-595.

Remembering Harold Pinter

Several of the obituaries for Harold Pinter, the Nobel Prize-winning playwright who died on Christmas Eve, see the puzzle of his life as centered on the question of how so happy a person could remain so consistently angry.  The sense of anger, or perhaps sullenness is the better word, arises mainly from the diffidence of his theatrical persona, the independence of his best characters even, as it were, from himself, and of course his increasingly assertive left-wing politics.  The image works, despite its limitations, because as he suffered in recent years from a cancer that left him gaunt he remained active and in public view, becoming something of a spectral figure.  And of course many who were not fans of his theatrical work (from the hugely controversial Birthday Party to the critically acclaimed Caretaker, and then further forays into drama and film) mainly knew him through his forceful opposition to Bush and Blair and their Iraq policies and the larger entanglements of American empire.

But Pinter, and this is true I think of all deeply intellectual figures, cannot be reduced to the terms provocateur or leftist.  In this case, to be sure, simple reductions are wholly inadequate to the task given his very methods of work:  one of his most abiding theatrical legacies is his insistence that dramatic characters are inevitably impenetrable – they owe us no “back story,” nor are their utterances ever finally comprehensible, any more than are our interactions in the real world of performed conversation.  And so Pinter set characters loose who even he could not predict or control, an exercise that often meant his productions were themselves angering as audiences struggled to talk sense into the unfolding stories. As the Economist put it, “his characters rose up randomly… and then began to play taunting games with him.  They resisted him, went their own way.  There was no true or false in them.  No certainty, no verifiable past…  Accordingly, in his plays, questions went unanswered.  Remarks were not risen to.”

So what does all this say about the ends of communication?  For Pinter they are connected not to metaphysical reflection or understanding (this was Beckett’s domain; it is somehow fitting that Pinter’s last performance was in Beckett’s Krapp’s Last Tape, played from a wheelchair), but to simple self-defense, a cover for the emptiness underneath (Pinter: “cover for nakedness”), a response to loneliness where silence often does just as well as words.  And so this is both a dramatic device (the trait that makes a play Pinteresque) and a potentially angering paradox:  “Despite the contentment of his life he felt exposed to all the winds, naked and shelterless.  Only lies would protect him, and as a writer he refused to lie.  That was politicians’ work, criminal Bush or supine Blair, or the work of his critics” (Economist).  Meanwhile, the audience steps into Pinter’s worlds as if into a subway conversation; as Cox puts it, “The strangers don’t give you any idea of their backgrounds, and it’s up to the eavesdropper to decide what their relationships are, who’s telling the truth, and what they’re talking about.”

The boundaries that lie between speaking and silence are policed by timing, and Pinter once said he learned the value of a good pause from watching Jack Benny performing at the Palladium in the early 1950’s.  One eulogist recalls the “legendary note” Pinter once sent to the actor Michael Hordern:  “Michael, I wrote dot, dot, dot, and you’re giving me dot, dot.”  As Siegel notes:  “It made perfect sense to Hordern.”  The shifting boundaries of communication, which in turn provide traces of the shifting relations of power in a relationship, can devolve into cruelty or competition where both players vie for one-up status even as all the rest disintegrates around them.  As his biographer, Michael Billington, put it, “Pinter has always been obsessed with the way we use language to mask primal urges.  The difference in the later plays is not simply that they move into the political arena, but they counterpoint the smokescreen of language with shocking and disturbing images of torture, punishment, and death.”  At the same time, and this is because Pinter was himself an actor and knew how to write for actors, the written texts always seemed vastly simpler on paper than in performance – and this is not because simple language suggests symbolic meaning (Pinter always resisted readings of his work that found symbolic power in this or that gesture) but because the dance of pauses and stutters and speaking ends up enacting scenes of apparently endless complexity.

For scholars of communication who attend to his work, then, Pinter poses interesting puzzles, and even at their most cryptic his plays bump up against the possibilities and limits of language.  One such riddle, illuminated in an essay by Dirk Visser, is that while most Pinter critics see his plays as revealing the failures of communication, Pinter himself refused to endorse such a reading, which he said misapprehended his efforts.  And as one moves through his pieces, the realization slowly emerges (or, in some cases, arrives with the first line) that language is not finally representational of reality, nor even instrumental (where speakers say certain things to achieve certain outcomes).  Pinter helps one see how language can both stabilize and unmoor meaning, even in the same instant (this is the subject of an interesting analysis of Pinter’s drama written by Marc Silverstein), and his work both reflects and straddles the transition from modernism to postmodernism that he was helping to write into existence (a point elaborated by Varun Begley).

His politics were similarly complicated, I think, a view that runs contrary to the propagandists who simply read him as a leftist traitor, and a fascist at that.  In the press, his attacks on Bush and Blair were often paired with his defense of Milosevic, as if the two implied a sort of left-wing fascism in which established liberal power is always wrong.  But his intervention in the Milosevic trial was not to defend the war criminal but to argue for a fair and defensible due process, and this insistence on the truth of a thing was at the heart of his compelling Nobel address.  Critics saw his hyperbole as itself a laughable performative contradiction (here he is, talking about the truth, when he hopelessly exaggerates himself).  I saw a long interview done with Charlie Rose, replayed at Pinter’s death, where Rose’s impulse was to save Pinter from this contradiction, and from himself (paraphrasing Rose):  “Surely your criticism is not of all the people in America and Britain, but only made against particular leaders.”  “Surely you do not want to oversimplify things.”  Pinter agreed he was not accusing everyone of war crimes but also refused to offer broader absolution, since his criticism was of a culture that allowed and enabled lies as much as of leaders who perpetuated them without consequence.  Bantering with Rose, that is to say, he refused to take the bait, and the intentional contradictions persisted.  His Nobel speech (which was videotaped for delivery because he could not travel to Stockholm, and is thus available for viewing online) starts with this compelling paragraph:

In 1958 I wrote the following:  “There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false.  A thing is not necessarily either true or false; it can be both true and false.”  I believe that these assertions still make sense and do still apply to the exploration of reality through art.  So as a writer I stand by them but as a citizen I cannot.  As a citizen I must ask:  What is true?  What is false?

What was so angering for many was Pinter’s suggestion that the American leadership (and Blair too) had committed war crimes that had first to be recognized and tallied, and their perpetrators then held to account:

The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile.  The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven.  Hundreds of thousands of deaths took place throughout these countries.  Did they take place?  And are they in all cases attributable to US foreign policy?  The answer is yes they did take place and they are attributable to American foreign policy.  But you wouldn’t know it.  It never happened.  Nothing ever happened.  Even while it was happening it wasn’t happening.  It didn’t matter.  It was of no interest.  The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them.  You have to hand it to America.  It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good.  It’s a brilliant, even witty, highly successful act of hypnosis.

The argument is offensive to many (when the Nobel was announced, the conservative critic Roger Kimball said it was “not only ridiculous but repellent”), though for a playwright most attentive to the power of the obscuring mask and the underlying and sometimes savage operations of power it obscures, it is all of a piece.  McNulty:  “But for all his vehemence and posturing, Pinter was too gifted with words and too astute a critic to be dismissed as an ideological crank.  He was also too deft a psychologist, understanding what the British psychoanalyst D. W. Winnicott meant when he wrote that ‘being weak is as aggressive as the attack of the strong on the weak’ and that the repressive denial of personal aggressiveness is perhaps even more dangerous than ranting and raving.”

As the tributes poured in, the tension between this simultaneous arrogance (a writer refuses to lie) and humility (he felt exposed to all the winds, naked and shelterless) arose again and again.  The London theatre critic John Peter got at this when he noted in passing how Pinter “doesn’t like being asked how he is.”  And then, in back-to-back sentences:  “A big man, with a big heart, and one who had the rare virtue of being able to laugh at himself.  Harold could be difficult, oh yes.”  David Wheeler (at the ART in Cambridge, Massachusetts):  “What I enjoyed [of my personal meeting with him] was the humility of it, and his refusal to accept the adulation of us mere mortals.”  Michael Billington:  “Pinter’s politics were driven by a deep-seated moral disgust… But Harold’s anger was balanced by a rare appetite for life and an exceptional generosity to those he trusted.”  Ireland’s Sunday Independent:  “Pinter was awkward and cussed… It was the cussedness of massive intellect and a profound sense of outrage.”

Others were more unequivocal.  David Hare:  “Yesterday when you talked about Britain’s greatest living playwright, everyone knew who you meant.  Today they don’t.  That’s all I can say.”  Joe Penhall:  Pinter was “my alpha and beta…  I will miss him and mourn him like there’s no tomorrow.”  Frank Gillen (editor of the Pinter Review):  “He created a body of work that will be performed as long as there is theater.”  Sir Michael Gambon:  “He was our God, Harold Pinter, for actors.”

Pinter’s self-selected eulogy conveys, I think, the complication – a passage from No Man’s Land – “And so I say to you, tender the dead as you would yourself be tendered, now, in what you would describe as your life.”  Gentle.  Charitable.  But also a little mocking.  A little difficult.  And finally, inconclusive.

SOURCES:  Beyond Pinter’s own voluminous work, of course – Marc Silverstein, Harold Pinter and the Language of Cultural Power (Bucknell UP, 1993); Varun Begley, Harold Pinter and the Twilight of Modernism (U Toronto P, 2005); “Harold Pinter,” Economist, 3 January 2009, pg. 69; Ed Siegel, “Harold Pinter, Dramatist of Life’s Menace, Dies,” Boston Globe, 26 December 2008, pg. A1; John Peter, “Pinter:  A Difficult But (Pause) Lovely Man Who Knew How to Apologise,” Sunday Times (London), 28 December 2008, pgs. 2-3; Gordon Cox and Timothy Gray, “Harold Pinter, 1930-2008,” Daily Variety, 29 December 2008, pg. 2; Charles McNulty, “Stilled Voices, Sardonic, Sexy:  Harold Pinter Conveyed a World of Perplexing Menace with a Vocabulary All His Own,” Los Angeles Times, 27 December 2008, pg. E1; Dirk Visser, “Communicating Torture: The Dramatic Language of Harold Pinter,” Neophilologus 80 (1996): 327-340; Matt Schudel, “Harold Pinter, 78,” Washington Post, 26 December 2008, pg. B5; Michael Billington, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 15; Esther Addley, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 14; Frank Gillen, “Farewell to an Artist, Friend,” St. Petersburg Times (Florida), 4 January 2009, pg. 4E; “Unflagging in His Principles and Unrivalled in His Genius,” Sunday Independent (Ireland), 28 December 2008; Dominic Dromgoole, “In the Shadow of a Giant,” Sunday Times (London), 28 December 2008, pgs. 1-2; Mel Gussow and Ben Brantley, “Harold Pinter, Whose Silences Redefined Drama, Dies at 78,” New York Times, 26 December 2008, pg. A1.

Whale communication

Every couple of weeks a grants database configured to track communication research sends me alerts about sponsored projects relating to animal communication.  I am always surprised to see these alerts, since the field of speech communication in which I’m trained (in contrast to the science of language development) engages the topic only peripherally, either treating it as a matter of passing relevance in introductory texts for students who inevitably raise the subject or handling it anecdotally.  A famous essay by the literary critic Kenneth Burke thus distinguishes animal communication, which he classes as motion (his term for instinctive and unthinking movement), from human action; although ideological convictions can make us instinctively hop around this way or that, Burke argued that our capacity for intersubjective communication enables thoughtful and motivated acts.  As influential as Burke is for scholars of rhetorical process, it has never seemed to me that he had a sophisticated understanding of, say, primate interaction, and of course his work in the 1950s and 1960s predated the most significant anthropological and biological research on animal sociability, some of which strongly suggests, if it does not prove, complex structures of communicated meaning.

I was reminded of all this, and of my own ignorance, while watching a PBS Nature documentary aired the other night in Atlanta, which told the story of animal caretakers who have rescued aging chimpanzees kept in lifelong captivity, either because they were raised as circus performers or because they served as subjects of medical research.  Many of the chimps, who can live to be 35 years or older, carry HIV or hepatitis or other infections given to them in the hope of discovering cures for humans.  One common characteristic of these animals is that they have lived so long in cages and on concrete pads that they may fear a return to more natural habitats, although that rightly does not prevent advocates for their humane treatment from endeavoring to restore them to such living conditions.  In one of the documentary’s sad moments, an old chimp encountered one of his trainers after not having seen him for more than a quarter century, and on renewed contact he seemed to greet his old captor immediately and with affection.  The professionals who cared for the chimp learned that he liked ice cream cones, and so for the last couple weeks of his life they brought him one every day, which he would happily devour, not by swallowing it whole but by licking it and nibbling through the cone as any human being might.  One of the happiest moments showed a chimp released at last from concrete and caged confinement into an almost open park – he raced off the concrete pad onto the grass and scrambled to the top of the tallest tree on the property.  The good people who fought to create such a habitat wept tears of joy.  A small victory for primate rights, to be sure, or perhaps simply for the humane treatment of animals who have been, let us say it, tortured.  As I watched all this I realized I was as moved by the old chimp eating ice cream in ways that made him seem human as by the other’s sprinting return to an environment harder to recognize as human.

We’ve all heard the statistic that the genetic separation between humans and bonobos, gorillas, or chimpanzees is less than one percent.  And although the genetic distance between humans and dolphins is just a bit greater (a new book by Maddalena Bearzi and Craig Stanford notes that we split away from dolphins more than 100 million years ago), dolphins and whales are also astonishingly clever, in part because their neocortex is considerably larger than ours.  The point of the book (Beautiful Minds: The Parallel Lives of Great Apes and Dolphins, Harvard UP), as the title and the authors’ credentials imply (Bearzi studies dolphins, Stanford primates), is to draw out the affinities between these animals (and by doing so to implicitly close the gap we typically perceive between them and us):  all “spend their lives navigating a complicated social environment.  An ability to learn quickly and make subtle choices lets individuals track the moods, intentions and shifting dominance ranks of others with an eye to increasing their own position within the social hierarchy…[D]olphins and apes are assessing rivals and friends, inventing distractions, employing bribes, and forming alliances to get [food and mates], relying at least as much on cunning as on brute strength” (Chadwick).

The point of all this is not to establish or prove to skeptics that animals are capable of communication – these authors take that conclusion as already established; the real questions of interest have more to do with communicative variation among species.  While debate remains over the extent to which non-humans can socially transmit learned information, there is by now no doubt that “dolphins and great apes grasp language to some extent.  [Bearzi and Stanford] describe dolphins responding readily to commands that require an understanding of the meaning and order of words – i.e., syntax – while studiously ignoring researchers who issued illogical orders” (Chadwick).

David Rothenberg has been working on the prospects for interspecies communication for some time now, and his 2005 book Why Birds Sing recounts his efforts to create duets with birds.  Birdsong is interesting because, as Chadwick (and of course Rothenberg) notes, birds sing far more than is biologically necessary – Rothenberg is thus interested to discover whether birds sing as a form of entertainment, or because it brings them joy.  With the discovery that whales make elaborate music beneath the waves, efforts were made to decode whalesong, motivated in part by the thought that we might be able to talk to each other.  As Chadwick notes:  “Enthusiasts grabbed flutes and lutes and boated out to serenade minds in the waters, intending to open up lines of contact.  The typical reaction from the whales was avoidance, and the dream evaporated.  Rothenberg thinks it may be time to try again.”  His new book, Thousand Mile Song:  Whale Music in a Sea of Sound (Basic Books), recounts efforts to make recognizable contact with whales both in captivity and in the wild.  The book pays special attention to the clicks and whistle-like sounds made by sperm whales, and maps out the different patterns that distinguish whale families.

In one of the Star Trek movies, Spock mind-melds with an earth whale, who is able to seamlessly bring Spock up to date on how ocean pollution has eviscerated habitats.  The film, of course, is predicated on a hopeless but lingering cultural fantasy that holds out the possibility of mental transference and total communicative transparency, a dream no less utopian when considered in the human context than in cross-species encounters.  But I wish Rothenberg well anyway and understand his work as deeply serious.  If research only gets as far as decoding signs of distress or hunger in other species, then that alone will have justified the effort.

SOURCES:  Douglas H. Chadwick, “Cool sounds for killer whales,” Times Literary Supplement, 14 November 2008, pg. 26; David Rothenberg, “Whale Music: Anatomy of an Interspecies Duet,” Leonardo Music Journal 18 (2008): pgs. 47-53.

The limits of global connectivity

The idea that everyone and everything is networked is an ancient one but now a thoroughly pervasive way of describing the world; it is a commonplace that influence, wealth, and ideas shape the unfolding of human history via increasingly dispersed and vertically organized arrangements.  Two essays in the new (January/February 2009) Foreign Affairs, written from drastically different perspectives, offer opposed accounts, not because one lacks optimism (to the contrary, both are exceptionally hopeful about America’s prospects) but because one takes measure of the opportunities created by a globally networked society while the other emphasizes its perils.  Robert Gates, in an essay presumably penned before he knew he would continue serving as head of the Pentagon, talks about the security challenges posed by a “new age” in which unconventional insurgencies will require both military and diplomatic flexibility.  And, in an essay that clearly sees the cup as half full, Anne-Marie Slaughter, dean of the Woodrow Wilson School at Princeton, argues for the wonderful benefits that can accrue to a smart America in a “networked century.”  Where Gates emphasizes how the globalized context enables instantaneous and network-destroying threats, Slaughter is more attuned to the ways a networked world creates possibilities for collaborations in which the United States can assert its more hopeful cultural worldviews without shouldering the full price or institutional risks.

Many recent events have evoked the downsides of globalization, where small coordinated attacks can cripple or derail whole societies even as globalized media coverage brings small acts of resistance or terror to worldwide attention.  The attacks on Mumbai were a vivid reminder that the potential targets of attack (in this case tourist and religious centers) are almost innumerable; the same outcome might have been attained by attacks on any of the world’s major hotels, shopping centers, schools, or churches.  The recent arrest of the Russian arms dealer Viktor Bout has also brought into view the shadow circuits of arms selling and the secret networks of commerce that often allow rogue governments to purchase almost anything they can afford.  Bout was said to have sold military hardware to everyone from the Taliban to Latin American revolutionaries to Hizbullah and even perhaps to rebels in Chechnya.

Or one might take note of the alarming uptick in documented piracy over the past twenty years.  The recent episodes off the African coast, where Somali pirates have been attacking cruise ships and cargo vessels (about a week ago the United Nations Security Council extended its authorization allowing countries to enter Somali waters to intercept pirates), also show how globalization provides ready resources for nongovernmental organizations (whether legitimate or criminal) as much as for governments.  While the reported number of pirate attacks has been declining since 2003 (445 that year, 263 in 2007), the enterprise is centered on the anticipation of pure profit, pirates regularly demand ransoms for hostages, and the long-term decrease may well be reversed when the final statistics for 2008 are reported.  A report issued recently by Chatham House argued that the Somali pirates are clearly aiming to raise cash for attacks on the Somali government and on behalf of some radical Islamic groups.  And a report from the London-based International Maritime Bureau puts the average ransom demand in the one-million-dollar range.

Piracy is described by Homer in both the Iliad and the Odyssey, and even then the difficulty of telling naval heroes from villains was well recognized; Homer said it was hard to distinguish them because “both set off in their long ships to distant shores to plunder and kill.  The difference… seems only to be their god-given fate.”  Today those who think about globalization might, following Tony Blair and Bill Clinton and the broader discourses of neoliberalism, note that all this simply illustrates the inevitability of opportunity created by globalization, which in turn suggests that good governance should aim not at ending globalization but at channeling it in more socially beneficial and just ways.  But the powerful structures of opportunity and potential profit created by global financial networks should also serve as a cautionary note for those inclined to see international regulation as easily set in motion.  Such international consensus as exists offers cold comfort to the victims of Viktor Bout’s arms sales or of the Somali pirates, who move about watched but mainly unimpeded.

Cosmetic surgery and ugly ducklings

A recent issue of Configurations (vol. 15, 2007, pgs. 1-99) focuses on plastic surgery transformation television shows, especially Fox’s The Swan.  Introducing the essays that follow, Bernadette Wegenstein cites some striking wider statistics:  (a) since 1997 the number of plastic surgery procedures in the United States has increased 446 percent; (b) in 2006, 7,915 Botox injections and 16,477 rhinoplasties were performed on adolescents; and (c) in 2006, 3,181,592 Botox injections were administered overall.  Much of this jump corresponds to the popularity of the makeover shows – ABC’s Extreme Makeover (2002) and FX’s Nip/Tuck (2003), followed in 2004 by E! channel’s Dr. 90210, MTV’s I Want a Famous Face, and Fox’s The Swan – and in 2004, the peak year for these shows, surgeries increased by 44%.  For The Swan’s third season, the producers received 500,000 applications from people wanting to be selected.

The Swan received most of the attention in this collection, I suspect because it connects to the fairy tale we all know so well (whereas the MTV show is just creepy, and watching someone who wants to look exactly like Cher feels like voyeurism).  The childhood fantasy that one is trapped in a life she or he finds miserable, and that someday someone will come along to acknowledge the awful misrecognition that produced such a trap (babies switched at birth!  you are really a princess from Monaco! and so on), is among the most influential in our culture and helps to explain everything from the way our culture fetishizes superheroes (whose true fantastic nature is obscured from all but a few confidants who know the real secret) to the still popular Hollywood/modeling agency fantasy that fame and fortune happen when one is discovered serving cheeseburgers by an agent who sees the true Brad Pitt under all the acne.  These myths imply that you don’t have to do anything to be fabulous except wait around to be discovered.

There is nothing wrong with wanting to be a better person, and as these essays note, many of those appearing on these shows are clearly working through issues of severe trauma and disparagement, and one can’t help but want to see their lives improved (part of the shows’ success is how they position audience members as supportive friends).  Personal maturation is in part achieved by bumping up against one’s physical and intellectual limits and struggling to become a better person as those constraints are undone or acknowledged, facing inevitable loss and joy to come to terms with what one can become.  But the same shows perpetuate some dangerous mythologies:  that external physical transformation can do more than briefly forestall the normal mechanisms of aging and deterioration, that quick and intensive periods of professional intervention can correct years of sedimented bad habits, or that one’s true individuality is best attained by reshaping one’s body under the scalpel to look like a million others or some mythical collective ideal type.  It doesn’t help that so often the transformations are framed as the outcome of necessary humiliation and overhyped emotionalism.  Wegenstein is on target when she asks whether “the desires on display in our makeover culture represent ‘the best or the worst of our nature,’” and the essays do a good job of exploring both sides – Pamela Oresan-Weine argues, for example, that while such TV shows send the wrong message to those fantasizing about, say, a nose job as the cure for all that ails them, they may send a more beneficial message to wider audiences, which are reminded “that one’s true nature may not, at first, be easily recognized by self or other, but that one’s value will become apparent if one endures.”  Transformation shows can be a lesson in tolerance.

I have education on my mind as I write this, both because I work in a university and because just this morning I read another review of the new book by Charles Murray (the Bell Curve author), whose main claim is that the problem with American higher education is that, in the name of total transformation, too many kids who have no business being there are lured into colleges and universities, and too often they simply lack the aptitude to succeed once they arrive on campus.  As I read about the strengths and weaknesses of total transformation TV shows, the parallels to education emerge in interesting ways.  When one of the essays on The Swan calls positive attention to the fact that contestants succeed because they “are surrounded by a panel of experts who hold them in mind, think about what they need, what is best for them,” it is hard not to think about the similar role teachers can play.  And when that same author argues that personal maturation is partly the result of youthful narcissism coming to terms with one’s limits, any teacher will recognize the process from having watched students struggle to learn with confidence.  Although Stanley Fish probably doesn’t agree with Murray on anything, his new book also warns against the dangers produced by overclaiming the benefits of education.

Even as I kept returning to the analogical relationship between transformational cosmetic surgery and transformational education, I chafed against it, since I tend to hate what those shows implicitly promise but love the promises of education.  After all, reality shows lie by implying that transformation is easy (most of the difficult and long process of weight loss and nutritional change is kept off camera).  Then again, riding the Atlanta subway this morning, I saw a large advertisement for one of the city’s best business schools implying that wealth is only six college courses away.  And so one wonders.  Many college teachers I know have grown concerned about the “entitlement culture” that dominates their students’ mindset, where students seem to feel they deserve an “A” because of how hard they worked on an assignment (regardless of the quality of their work) or special consideration just because of who they are and how busy they are otherwise.  All kinds of culprits are named when I hear this complaint, but to some extent the national fantasy that college serves as the final validating point of entry into a life of fame and fortune (preached at almost every “follow your bliss” commencement ceremony in America and in every university promotional flyer) likely also plays a part, this despite the essential truthfulness of the advertisements.

Ultimately the thought experiment of contrasting Barbie-doll surgery TV shows to the experience of, say, the typical public university, leads also to important distinctions, and my point is not to disparage education by claiming that it is no more or less transformational than a breast augmentation operation.  Cosmetic surgery reshapes the body, education the mind, and the benefits from the latter are usually more lasting.  Surgery forces the body into ever tightening constraints (as proved, I think, by that woman who has had more than twenty operations to look like Loni Anderson but now looks like a trauma survivor), whereas education opens possibilities and enlarges identity.  But the dangers of one are also to some extent the dangers of the other, and the comparison is a useful reminder that education is not easy, transformation takes real and extended work, wisdom is both a function of surpassing one’s limits and also recognizing them, and that personal change is never purely individualistic bootstrap work, but relies on a network of support (teachers, coaches, families) available to both recognize talent and support its maturation when the hard work seems hardly worth the effort.

Paula Vogel’s “How I Learned to Drive”

Tonight I had the opportunity to see Paula Vogel’s remarkable Pulitzer Prize-winning “How I Learned to Drive” in production at the Georgia State University theatre.  The show relies on a very small cast, only five in all, which lends some irony to the fact that three of them play multiple roles described in the bill as “Greek choruses.”  The play was first performed off-Broadway about a decade ago (in a production that starred the amazing Mary-Louise Parker) in a space likely not much larger than our university theatre, a fact that works considerably to the play’s benefit for reasons to which I’ll soon return, and the student-led production I saw this evening was powerful in many respects.

If you haven’t had the chance to see “Drive” on stage or to read the play, you should know that it is in some ways typical of Vogel’s work in that the subject matter it engages is exceptionally difficult, centered on the deeply complex and problematic relationship between Li’l Bit, a young woman, and Peck, the uncle-by-marriage who both teaches her to drive and molests her.  The piece manages to deploy the gimmicks available to live production without ever quite seeming gimmicky, all the while speaking to unspeakable acts of exploitation without either preaching or rationalizing.  Perhaps the most remarkable aspect of “Drive” is that it leads its audience to a comprehension of how situations of abuse arise in ways that never fully demonize Peck, even as we see him step by step approach and then finally fall headfirst into the abyss.  Peck is evil but also sympathetic; Li’l Bit is forever scarred but also able, in some sense, to move beyond the disaster of abuse and loss, and all at the same time.

For me Paula Vogel’s work comes into sharper focus when one realizes that she is a teacher by trade (head for many years of Brown University’s playwriting workshop and now newly appointed to the Yale Drama School).  It seems to me that in many ways the sometimes pathological relationship between manipulator and victim can be better comprehended in the dynamics of even healthy teacher-student interaction, where differences in age and mutually performed strategies of manipulation are ever-present.  But Vogel’s work cannot be so easily explained:  as sensitive as she is to scenes of educational encounter, she is also deeply thoughtful about the distortions arising in the theatre itself, where innovation is both enabled and destroyed (“We have never figured out how to produce art in this country.  The culture has successfully made sure that we are going to be entertainers of the ruling class, the rich…  We are now nothing more than a backdrop for cocktail parties for the ruling class” – all thoughts expressed even as she challenges all this in her own work).

“How I Learned to Drive” can be read as a retelling of Nabokov’s Lolita (interviewed on the Lehrer News Hour right after winning the Pulitzer, Vogel told interviewer Elizabeth Farnsworth that “in many ways I think that this play is an homage to Lolita, which I think is one of the most astonishing books ever written”).  But the play recasts almost every important detail (apart from the fact of a profoundly wrong older man and younger girl “relationship”) in ways that bring to our attention deeply vexed ethical questions.  Humbert is a literature scholar whose creepy and distorted repetition complex (he sees in Dolores/Lolita traces of his past romantic failures) is wholly pathological, and the novel is narrated through his eyes; “Drive” is narrated by the girl, and Peck is married to a beautiful woman with whom he seems to enjoy sexual intimacy.  Humbert and Peck are both shaped by the World War II years, but whereas for Humbert the damage is done by his true love’s premature death, for Peck the suggestion is that he was scarred by having been himself molested as a young boy.

Lolita is manipulative but also crude and finally unexceptional; Li’l Bit is in some respects more naive and less sexually manipulative but is also more fully formed and, apart from the trauma inflicted on her by Peck, weirdly and fully self-aware.  Humbert falls instantly in love with Dolores when he sees her for the first time at the age of twelve, sunbathing; Peck has “loved” Li’l Bit since the day she was born, from the time when he could literally hold her entirely in the palm of his hand (and of course he continues to hold her in the palm of his hand until the day of her eighteenth birthday; the “palm of the hand” becomes a repeated motif in the script).  Nabokov described Humbert as “a vain and cruel wretch”; Peck, in telling contrast, comes across not as vain but at times as rather lonely and compellingly charismatic.  Peck is a wretch, to be sure, but is motivated more by tragically misplaced affection than by cruelty.  Humbert’s increasingly pathological behavior leads finally to a murder; Peck’s to self-immolation as he drinks himself to death.

Peck’s driving lessons provide a parallel scaffolding that helps make sense of and externalize his own internally considered strategies of manipulation, and also create a metaphorical apparatus by which we can see the complex patterns of exhilaration and lost control and entrapment that distort familial affection into molestation.  The car is a mode of escape (even finally for Li’l Bit) and a sanctioned space of private encounter, a site where the illicit thrill of sexual exhilaration for Peck literally occurs simultaneous with the guilty pleasure of illegally driving for the girl.  Nowhere is this metaphorical layering more compelling than in the last seconds of the production, when Li’l Bit, now in her mid-thirties, returns again and again to the automobile and the long drive, pressing hard on the accelerator as a means of escaping her past even as the very act of driving reenacts both the trauma and, yes, the guilty pleasures of her remembered relationship with this man in whose orbit she uneasily traveled, filled both with love and its all-too-easily recalled opposite.

Less compelling for me were the more caricatured familial dynamics Vogel enacts through Li’l Bit’s grandparents; while they convey the very real sense in which bystanders can become enablers, the nuance of the core Li’l Bit/Peck relationship is missing from the grandparents’ tortured marriage.  And a scene in which Peck seduces a nephew (this time the metaphor is fishing, not driving, and the site of molestation a tree house rather than the car) is clearly decisive in depersonalizing Peck’s distorted desires but also perhaps too completely ambiguous (an underlying dynamic that seems at work throughout is the ironic possibility that Peck has deeply sublimated homosexual desires and that part of Li’l Bit’s revulsion at his advances relates to her own coming-into-being as a lesbian).

But these are minor complaints – Peck’s seduction of the boy is challengingly ambiguous but also amazingly evocative, since the scene is played in pantomime, the boy never seen – and the often earth-shattering power of Vogel’s writing comes through even in those scenes that fall just slightly short of perfection.  One of the most striking and heartbreaking monologues in the entire play is given by Peck’s wife (Li’l Bit’s aunt), who is able (and this is Vogel’s greatest gift, I think) to articulate, in a speech that lasts no more than ninety seconds, all the tangled and tragic rhythms of awful knowledge and its denial, of love and its capacity both to sharpen and to blur one’s comprehension, and of a longing wistfulness that desperately wishes for a return to a normalcy that has been fully and impossibly foreclosed.

Along the way the play offers a challenging meditation on love:  At what precise moment in a relationship does love lose its innocence and become guilty and wrong?  To what extent, if any, can horrible behavior be mitigated because it arises out of an apparently genuine loving regard?  And what is the meaning of consent?  Vogel’s narrative makes plain that consent is never fully settled even at moments when it is non-coercively and repeatedly requested, and it also complicates the idea that the responsibility of consent runs only one way:  at almost every step of the unfolding narrative the older man and the younger girl each comprehend what is happening at an important level of conscious realization, even as each is blinded by peculiar and tragically naive misconceptions.

The many recognitions Vogel has received (Obie, Drama Critics, Pulitzer, to name only a few) reflect the perfect affinity between the play and the physical intimacy of live theatre.  The performance I saw tonight wholly confirmed David Savran’s insight that “A Paula Vogel play is never simply a politely dramatized fiction.  It is always a meditation on the theatre itself – on role-playing, on the socially sanctioned scripts from which characters diverge at their peril and on a theatrical tradition that has punished women who don’t remain quiet, passive and demure.”  Lolita works better on screen because the nature of Humbert’s attraction to Dolores is itself initiated in an act of cinematic spectacle – Humbert falls in love with a distant image of the girl, and is captivated by the mirage before he ever comes to understand her more mundane true persona.  Not so for “Drive,” where the ever accumulating erotic charge arises not from Peck’s view-from-afar so much as from the more fully embodied encounters of touch and conversation and smell and taste and intimate contact, not to mention their absence.

The theatrical setting also performs another important function that would be difficult to enact on screen.  Vogel’s script jumps around, scrambling chronological time even while the central characters (Peck and Li’l Bit) are not physically altered or differently made up.  The effect is that at any given time, although the audience never loses sight of the underlying inappropriateness produced by the difference in age, one loses track of how old Li’l Bit is – in this moment is she thirteen or thirty? – and so the combined mechanism of scrambled chronology and theatrical performance helps us see her as Peck sees her:  young and old, naive and sophisticated, innocent and maybe also guilty, all at once, blurred together in ways that distort judgment and help make Peck’s agonizingly awful missteps more comprehensible.

SOURCES:  “A Prize Winning Playwright,” Elizabeth Farnsworth interviews Paula Vogel on the Lehrer Newshour (online), 16 April 1998; Elena More, “Stage Left:  An Interview with Paula Vogel,” PoliticalAffairs.net: Marxist Thought Online, May 2004;  Gerard Raymond, “Paula Vogel:  The Signature Season,” Theater Mania, 12 October 2004; David Savran, “Paula Vogel’s Acts of Retaliation,” American Theatre 13.4 (April 1996), pg. 46; Dick Scanlon, “Say uncle theatre,” The Advocate, 10 June 1997, pg. 61; Stefan Kanfer, “Li’l bit of incest,” New Leader, 30 June 1997, pg. 21; David Savran, “Driving Ms. Vogel,” American Theatre 15.8 (October 1998), pg. 16.

Remembering the radio “War of the Worlds”

Seventy years ago this week Orson Welles and his Mercury Theatre on the Air performed a radio version of H.G. Wells’ War of the Worlds, which immediately became a legendarily contested example of the power of mass mediated communication.  The broadcast, enlivened with simulated but realistic-sounding journalistic reporting, told the story of a Martian invasion presented as actually underway in Grover’s Mill, New Jersey.  An absence of commercial interruptions helped convince some listeners that the drama was in fact a nonfictional account, and the ensuing reports of panic – the New York Times front page headline read “Radio Listeners in Panic, Taking War Drama as Fact” – were judged by one scholar to have affected as many as 1.7 million listeners (the portion of the estimated six million who heard the broadcast) who “believed it to be true,” of whom 1.2 million were said to be “genuinely frightened.”

The War of the Worlds radio broadcast reached far fewer people than the number who read about it in the more than 12,000 newspaper accounts published afterward.  Some argue that the disproportionate attention was the result of poor reporting practices and an implicit arrogance among newspaper reporters about the delusional power of the newer broadcast media; the net effect of the self-interested accounts was thus a widespread exaggerated sense of the actual panic.  Others have noted that while the contemporary accounts do establish that many listeners were genuinely frightened, far fewer actually acted on their fear in ways confirming a sense that an invasion was actually underway.

A sense lingers to this day that the radio listeners panicked by the broadcast were rubes or hopelessly naive (for Hitler the incident confirmed American “decadence”).  A recent reenactment staged as a theatrical production in Washington, DC, planted panicky rubes in the audience who would leap to their feet and act frightened.  But this sense is not quite fair given the care taken to trick the listening audience.  Welles is alleged to have timed the script so that listeners of the much more popular NBC competitor (the Chase & Sanborn show) airing at the same time would turn the dial to CBS only after the opening disclaimers had passed.  And the tale was specifically manipulated to take advantage of the broadcast norms of the time, when news interruptions were taken seriously and had not yet been parodied in this way.  Many of those panicked, of course, were spooked less by having heard the radio show and judged it real than by having been told about it through the rumor mill.  Elsewhere, random coincidences contributed to misperception; in Concrete, Washington, the local power went out right at the moment of highest drama, and listeners seem to have found confirmation of the radio drama as their lights flickered out.  The sense of embarrassment aroused in the aftermath of the national broadcast and its unmasking as pure fiction is sometimes said to have led Americans at first to downplay news reports coming out of Pearl Harbor.

The irony, of course, is that media spoofs continue to sometimes trigger panic to this day, making it hard to sustain the narrative that we get it in ways our parents did not.  Eleven years after the original broadcast, a Spanish language version of WOTW was produced on Ecuadorean radio that reportedly set off widespread panic.  When the broadcast’s fictional nature was revealed, angry crowds actually mobbed Radio Quito and six people died in the volatile aftermath.  In the early 1970’s a Buffalo (NY) radio station updated the script and induced some sense of panic when they described scenes of a Martian invasion of Niagara Falls.  But apart from the WOTW episodes, one might point to the hoaxes perpetrated by broadcasts in Estonia (1991; which set in motion a brief currency crisis), Bulgaria (1991; which triggered panic about nuclear safety), Belgium (2006; a fake-news announcement that Flanders was seceding provoked alarm), or Boston (2007; a weird marketing campaign triggered a mammoth security and bomb alert crackdown).

Given the sporadic but continuing episodes of apparent dread induced by Orson Welles and his imitators, the scholarship on media influence has settled into a weird mix of analysis and disclaimer.  Some conspiracy theorists have kept alive the rumor that the WOTW broadcast was actually a secret project of the Rockefeller Foundation, a live action social science experiment.  The more serious accounts still tend either to anchor a Whig narrative of media history (people used to be gullible, and incidents like the Wells/Welles broadcast gave rise to accounts of the media’s totalizing power that we now understand to be naive) or to connect to warnings about how media influence is taught today.  With respect to the latter, David Martinson (a Florida International University professor) has written:

Many communication scholars trace a decline in support for the magic bullet theory – interestingly and paradoxically to a radio broadcast that many “lay persons” continue to cite as “definitive” evidence to support their belief in an omnipotently powerful mass media.  That broadcast was Orson Welles’ adaptation of H.G. Well’s War of the Worlds which was broadcast on Halloween eve in 1938…  But – and this became critically important as researchers later examined the impact of the Welles’ broadcast – everyone did not panic.  If the magic bullet theory were valid, there should have been something approaching almost full-scale hysteria.  Instead, studies showed “that ‘critical ability’ was the most significant variable related to the response people made to the broadcast.”

Martinson’s point is reasonable, but the difficulty, it seems to me, relates to ongoing and conflicting tendencies in the scholarship to disavow strong media effects even as reports of extreme consequence surface.  The point is not that one should or can generalize from scattered reports of extreme panic to make indefensible claims about the sustained influence of the media forming the wallpaper backdrop of contemporary culture, but rather that in disclaiming strong overall media effects one should not disavow their possibility altogether.  Beyond the continuing incidence of extreme reactions, which obviously arise under very peculiar circumstances, media scholars still struggle to explain the durability of media influence both at the level of the specific program and as it shapes a culture’s fantasies.  Some notion of the latter was conveyed by Jeffrey Sconce’s Haunted Media: Electronic Presence from Telegraphy to Television (Duke UP, 2000), which calls attention to the subtle ways in which mass mediation reinforces cultural fantasies about the human capacity to connect with the spirit world and other domains of existence.

The genuine and, I think, undeniable psychic unease that new forms of mass mediation continue to provoke is not simply a result of a lurking but sometimes absent sense of spiritual fulfillment and a related longing to truly connect with forces external to oneself.  Rather, the wider but too often demeaned significance of mass media influence also connects to the large scale accomplishments of massive industrialization and organization that underwrite the contemporary networked society (electrical grids, systems of food distribution, bureaucratized multinational corporate culture), systems of connection and alienation that because of their size also evoke paranoia and incomprehension and, sometimes, panic.  The technologies of mass media production (e.g., digital special effects) allow artists to exaggerate such anxieties and to evoke extreme responses.  As Ray Bradbury put it, writing an introduction to a recent collection of WOTW materials,

Wells and Welles prepared us for the delusionist madness of the past fifty years.  In fact, the entire history of the United States in the last half of the twentieth century is exemplified beautifully in Well’s work.  Starting with the so-called arrival of flying saucers in the 1950s, we’ve had a continuation of our mild panic at being invaded by creatures from some other part of the universe….  So we are all closet paranoids preached to by a paranoid…  The War of the Worlds is a nightmare vision of humanity’s conquest – one that inspired paranoia in all its forms throughout the twentieth century… Truth be told, ever since the novel and the broadcast, we are still in the throes of believing that we’ve been invaded by creatures from somewhere else.

In the context of the now-popular efforts to render visually and emotionally the perils of global warming, some commentators have taken to referring to “climate porn” – those money shots in which we all seem to revel in the cinematic moment when the Statue of Liberty is wiped out by a tidal wave and the oxygen is sucked instantaneously out of flash-frozen lungs.  An American paleoclimatologist (William Hyde), reviewing The Day After Tomorrow (the 2004 blockbuster I’m referring to), noted that “this movie is to climate science as Frankenstein is to heart transplant surgery.”

The persistence and pervasiveness of mass mediated evocations of deep unease, enacted in everything from science fiction to negative political commercials (e.g., Elizabeth Dole’s truly revolting new ad that seriously claims, over sinister music, that her Senate opponent Kay Hagan is secretly part of an atheistic cabal) to Snakes on a Plane depictions that only minimally metaphorize Terrorists on a Plane to endlessly emailed conspiracy messages about Barack Obama’s “true” religious commitments, have to be taken seriously even while one also insists on the limits of media influence.  In an age of too readily trumped-up dread it seems to me overly simple to conclude, with Michael Socolow, that accounts of media influence can be deeply discounted simply by asking:  Would you have fallen for Welles’ broadcast?  If not, why do you assume so many other people did?

To the contrary, it seems to me that people fall for the hyped and distorted accounts of mega-risk (yes, me included) all the time.  Indeed, the words screamed in Indianapolis upon hearing War of the Worlds seventy years ago might as easily have been uttered around the nation seven years ago on another crisp autumn day:  “New York is destroyed.  It’s the end of the world. We might as well go home to die.  I’ve just heard it on the radio.”

SOURCES:  Michael Socolow, “The hyped panic over ‘War of the Worlds,’” Chronicle of Higher Education, 24 October 2008, pgs. B16-B17; The War of the Worlds:  Mars’ Invasion of Earth, Inciting Panic and Inspiring Terror from H.G. Wells to Orson Welles and Beyond (Naperville, Ill.: Sourcebooks MediaFusion, 2005); David Martinson, “Teachers must not pass along popular ‘myths’ regarding the supposed omnipotence of the mass media,” High School Journal, October/November 2006, pgs. 16-21; Matthew Warren, “Drama, not doomsday,” The Australian, 28 August 2008, pg. 10; “The archive – November 1, 1938 – ‘US panic at Martian attack: Wireless drama causes uproar,’” Guardian (London), 8 November 2007; Jason Zinoman, “Just close your eyes and pretend you’re scared,” New York Times, 17 October 2007, pg. 3; Michael Powell, “Marketing gimmick does bad in Boston: Light devices cause bomb scare,” Washington Post, 1 February 2007, pg. A3; “TV prank leaves country divided,” New Zealand Herald, 4 January 2007.
