
An approaching Singularity?

When Ray Kurzweil published his bestseller, The Singularity is Near, in 2005, the skeptical response reverberated widely, but his track record of accurate predictions has been uncanny.  In the late 1980s it was Kurzweil who anticipated that a computer could soon be programmed to defeat a human opponent in chess; by 1997 IBM’s Deep Blue was beating Garry Kasparov.  His prediction that within several decades humans will regularly assimilate machines to the body seemed, as Michael Skapinker recently put it, “crazy,” “except that we are already introducing machines into our bodies.  Think of pacemakers – or the procedure for Parkinson’s disease that involves inserting wires into the brain and placing a battery pack in the chest to send electric impulses through them.”

Kurzweil obviously has something more dramatic in mind than pacemakers.  The term singularity describes both the center of a black hole, where the universe’s laws don’t hold, and that turning point in human history when the forward momentum of machine development (evolution?) will have accelerated so quickly as to outpace human brainpower and, arguably, human control.  For Kurzweil the potential implications are socially and scientifically transformational:  as Skapinker catalogs them, “We will be able to live far longer – long enough to be around for the technological revolution that will enable us to live forever.  We will be able to resist many of the diseases, such as cancer, that plague us now, and ally ourselves with digital versions of ourselves that will become increasingly more intelligent than we are.”

Kurzweil’s positions have attracted admirers and detractors.  Bill Gates seems to be an admirer (Kurzweil is “the best person I know at predicting the future of artificial intelligence”).  Others have criticized the claims as hopelessly exaggerated; Douglas Hofstadter admires elements of the work but has also said it presents something like a mix of fine food and “the craziest sort of dog excrement.”  A particular criticism is how much of Kurzweil’s claim rests on what critics call the “exponential growth fallacy.”  As Paul Davies put it in a review of The Singularity is Near:  “The key point about exponential growth is that it never lasts.  The conditions for runaway expansion are always peculiar and temporary.”  Kurzweil responds that the conditions for a computational explosion are essentially unique; as he put it in an interview:  “what we see actually in these information technologies is that the exponential growth associated with a particular paradigm… may come to an end, but that doesn’t stop the ongoing exponential progression of information technology – it just yields to another paradigm.”  Kurzweil’s projection of the trend lines has him predicting that by 2027, computers will surpass human intelligence, and by 2045 “strictly biological humans won’t be able to keep up” (qtd. in O’Keefe, pg. 62).
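
Kurzweil’s “paradigm succession” reply is easier to see in a toy model than in prose.  The sketch below is my own illustration, not Kurzweil’s model, and every number in it is invented for the example:  each computing paradigm is treated as an S-curve that eventually saturates, but each successor saturates at a higher ceiling, so the envelope across paradigms keeps climbing even though every individual curve flattens.

```python
import math

# Toy illustration of the "paradigm succession" argument (invented numbers,
# not Kurzweil's data): each paradigm follows a logistic S-curve that
# saturates, but each successor saturates ~100x higher than the last.

def logistic(t, floor, ceiling, midpoint, rate=1.5):
    """One paradigm's capacity at time t: near floor early, near ceiling late."""
    return floor + (ceiling - floor) / (1.0 + math.exp(-rate * (t - midpoint)))

# (floor, ceiling, midpoint) for three hypothetical successive paradigms.
paradigms = [(1.0, 1e2, 5), (1e2, 1e4, 15), (1e4, 1e6, 25)]

for t in range(0, 31, 5):
    # Overall capacity tracks whichever paradigm currently leads.
    capacity = max(logistic(t, lo, hi, mid) for lo, hi, mid in paradigms)
    print(f"t={t:2d}  capacity={capacity:12.1f}  log10={math.log10(capacity):5.2f}")
```

The log10 column rises roughly linearly with t – that is, the envelope stays roughly exponential even as each individual paradigm stalls – which is the shape of Kurzweil’s reply to Davies, whatever one makes of its substance.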

Now Kurzweil has been named chancellor of a new Singularity University, coordinated by a partnership between NASA and Google.  The idea is simultaneously bizarre and compelling.  The institute is roughly modeled on the International Space University in Strasbourg, where the idea is to bring together Big Thinkers who can, by their interdisciplinary conversations and collaboration, tackle the impossible questions.  One wonders whether the main outcome will be real research or wannabe armchair metaphysical speculation – time will tell, of course.  NASA’s role seems to be simply that it has agreed to let the “university” rent space at its Ames Research Center facility at Moffett Field in California.  The money comes from Peter Diamandis (X Prize Foundation chair), Google co-founder Larry Page, Moses Znaimer (the media impresario), and tuition revenue (the nine-week program is charging $25,000, scholarships available).  With respect to the latter the odds seem promising – in only two days 600 potential students applied.

The conceptual issues surrounding talk of a Singularity go right to the heart of the humanistic disciplines, starting with the way it complicates anew what one means by the very term human.  The Kurzweil proposition forces the issue by postulating that the exponential rate of information growth and processing capacity will finally result in a transformational break.  Consider the capacity of human beings to stay abreast of all human knowledge in, say, the 13th century, when Europe’s largest library (housed at the Sorbonne) held only 1,338 volumes; contrast that with the difficulty one would encounter today in simply keeping up with research on, say, William Shakespeare or Abraham Lincoln, and the age-old humanistic effort to induce practices of close reading and thoughtful contemplation can seem anachronistically naive.
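
The mismatch is easy to make concrete with back-of-the-envelope arithmetic.  The sketch below is illustrative only – the reading-rate and modern collection-size figures are my own assumptions; the Sorbonne number is the one cited above:

```python
# Back-of-the-envelope: a lifetime of reading vs. the accumulated record.
# The reading rate and the modern collection size are assumptions for
# illustration; the Sorbonne figure is the one cited above.

books_per_week = 1            # a very committed reader
reading_years = 60            # an adult reading life
lifetime_books = books_per_week * 52 * reading_years  # ~3,120 volumes

sorbonne_13th_c = 1338        # Europe's largest library in the 13th century
modern_library = 30_000_000   # order of magnitude for a major national library

print(f"lifetime reading capacity: ~{lifetime_books:,} books")
print(f"13th-century Sorbonne:      {sorbonne_13th_c:,} volumes (readable in a lifetime)")
print(f"a modern collection:       ~{modern_library:,} volumes "
      f"(~{modern_library / lifetime_books:,.0f} lifetimes of reading)")
```

On these assumptions a dedicated reader could have absorbed the entire Sorbonne collection twice over; against a modern national library the same reader covers roughly a hundredth of one percent.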

One interesting approach for navigating these issues is suggested in a 2007 essay by Mikhail Epstein.  Epstein suggests that the main issue for the humanities lies less in the sheer quantity of information and its potentially infinite trajectory (where, as Kurzweil has implied, an ever-expanding computational mind finally brings order to the Universe) than in the already evident mismatch between the finite human mind and the accumulated informational inheritance of humanity.  Human beings live for a short period of time, and within the limited timeline of even a well-lived life, the amount of information one can absorb and put to good use will always be easily swamped by the accumulated knowledge of the centuries.  And this is a problem, moreover, that worsens with each generation.  Epstein argues that this results in an ongoing collective trauma, first explained by Marxist theory as inducing both vertigo and alienation, then by the existentialists as an inevitability of the human condition, and now by poststructuralists who (and Epstein concedes this is an oversimplification) take reality itself “as delusional, fabricated, or infinitely deferred” (19).  Epstein sees all this as evidencing the traumatizing incapacity of humans to comprehend in any detailed way their own collective history or thought.  The postmodern sensibility is revealed in such aesthetic traditions as Russian conceptualism, “which from the 1970s to the 1990s was occupied with cliches of totalitarian ideology,” and which “surfaced in the poetry and visual art of Russian postmodernism” in ways “insistently mechanical, distant, and insensitive” (21).  There and elsewhere, “the senses are overwhelmed with signs and images, but the intellect no longer admits and processes them” (22).

The problem to which Epstein calls attention – the growing gap between a given human and the total of humanity – is not necessarily solved by the now well-established traditions that have problematized the Enlightenment sense of a sovereign human.  In Epstein’s estimation, the now-pluralized sense of the human condition brought into being by multiculturalism has only accentuated the wider social trends to particularization and hyper-specialization:  the problem is that “individuals will continue to diversify and specialize:  they will narrow their scope until the words humans and humanity have almost nothing in common” (27).

The wider work on transhumanism and cyborg bodies reflects a longer tradition of engagement with the challenge posed by technological transformation and the possibilities it presents for physical reinvention.  At its best, and in contrast to the more culturally salient cyborg fantasies enacted by Star Trek and the Terminator movies, this work refuses the utopian insistence in some of the popular accounts that technology will fully eradicate disease, environmental risk, war, and death itself.  This can be accomplished by a range of strategies, one of which is to call attention to the essentially religious impulses in the work, in line with long-standing traditions of intellectual utopianism that imagine wholesale transformation as an object to be greatly desired.  James Carey used to refer to America’s “secular religiosity,” and in doing so followed Lewis Mumford’s critique of the nation’s “mechano-idolatry” (qtd. in Dinerstein, pg. 569).  Among the cautionary lessons of such historical contextualization is the reminder of how often thinkers like Kurzweil present their liberatory and also monstrous fantasies as inevitabilities simply to be managed in the name of human betterment.

SOURCES:  Michael Skapinker, “Humanity 2.0:  Downsides of the Upgrade,” Financial Times, 10 February 2009, pg. 11; Mikhail Epstein, “Between Humanity and Human Beings:  Information Trauma and the Evolution of the Species,” Common Knowledge 13.1 (2007), pgs. 18-32; Paul Davies, “When Computers Take Over:  What If the Current Exponential Increase in Information-Processing Power Could Continue Unabated,” Nature 440 (23 March 2006); Brian O’Keefe, “Check One:  __ The Smartest, or __ The Nuttiest Futurist on Earth,” Fortune, 14 May 2007, pgs. 60-69; Myra Seaman, “Becoming More (Than) Human:  Affective Posthumanisms, Past and Future,” Journal of Narrative Theory 37.2 (Summer 2007), pgs. 246-275; Joel Dinerstein, “Technology and Its Discontents:  On the Verge of the Posthuman,” American Quarterly (2006), pgs. 569-595.

Remembering Harold Pinter

Several of the obituaries for Harold Pinter, the Nobel Prize-winning playwright who died on Christmas Eve, see the puzzle of his life as centered on the question of how so happy a person could remain so consistently angry.  The sense of anger, or perhaps sullenness is the better word, arises mainly from the diffidence of his theatrical persona, the independence of his best characters even, as it were, from their author, and of course his increasingly assertive left-wing politics.  The image works, despite its limitations, because as he suffered in recent years from a cancer that left him gaunt he remained active and in public view, becoming something of a spectral figure.  And of course many who were not fans of his theatrical work (from the hugely controversial Birthday Party, to the critically acclaimed Caretaker, and then further forays into drama and film) mainly knew him through his forceful opposition to Bush and Blair, their Iraq policies, and the larger entanglements of American empire.

But Pinter, and this is true I think of all deeply intellectual figures, cannot be reduced to the terms provocateur or leftist.  In this case, to be sure, simple reductions are wholly inadequate to the task given his very methods of work:  one of his most abiding theatrical legacies is his insistence that dramatic characters are inevitably impenetrable – they owe us no “back story,” nor are their utterances ever finally comprehensible, any more than are our interactions in the real world of performed conversation.  And so Pinter set characters loose whom even he could not predict or control, an exercise that often meant his productions were themselves angering as audiences struggled to make sense of the unfolding stories.  As the Economist put it, “his characters rose up randomly… and then began to play taunting games with him.  They resisted him, went their own way.  There was no true or false in them.  No certainty, no verifiable past…  Accordingly, in his plays, questions went unanswered.  Remarks were not risen to.”

So what does all this say about the ends of communication?  For Pinter they are not connected to metaphysical reflection or understanding (this was Beckett’s domain; it is somehow fitting that Pinter’s last performance was in Beckett’s Krapp’s Last Tape, played from a wheelchair), but to simple self-defense, a cover for the emptiness underneath (Pinter:  “cover for nakedness”), a response to loneliness where silence often does just as well as words.  And so this is both a dramatic device (the trait that makes a play Pinteresque) and a potentially angering paradox:  “Despite the contentment of his life he felt exposed to all the winds, naked and shelterless.  Only lies would protect him, and as a writer he refused to lie.  That was politicians’ work, criminal Bush or supine Blair, or the work of his critics” (Economist).  Meanwhile, the audience steps into Pinter’s worlds as if into a subway conversation; as Cox puts it, “The strangers don’t give you any idea of their backgrounds, and it’s up to the eavesdropper to decide what their relationships are, who’s telling the truth, and what they’re talking about.”

The boundaries that lie between speaking and silence are policed by timing, and Pinter once said he learned the value of a good pause from watching Jack Benny perform at the Palladium in the early 1950s.  One eulogist recalls the “legendary note” Pinter once sent to the actor Michael Hordern:  “Michael, I wrote dot, dot, dot, and you’re giving me dot, dot.”  As Siegel notes:  “It made perfect sense to Hordern.”  The shifting boundaries of communication, which in turn provide traces of the shifting relations of power in a relationship, can devolve into cruelty or competition where both players vie for one-up status even as all the rest disintegrates around them.  As his biographer, Michael Billington, put it, “Pinter has always been obsessed with the way we use language to mask primal urges.  The difference in the later plays is not simply that they move into the political arena, but they counterpoint the smokescreen of language with shocking and disturbing images of torture, punishment, and death.”  At the same time, and this is because Pinter was himself an actor and knew how to write for them, the written texts always seemed vastly simpler on paper than in performance – not because simple language suggests symbolic meaning (Pinter always resisted readings of his work that found symbolic power in this or that gesture) but because the dance of pauses and stutters and speaking ends up enacting scenes of apparently endless complexity.

For scholars of communication who attend to his work, then, Pinter poses interesting puzzles, and even at their most cryptic his plays bump up against the possibilities and limits of language.  One such riddle, illuminated in an essay by Dirk Visser, is that while most Pinter critics see his plays as revealing the failures of communication, Pinter himself refused to endorse such a reading, which he said misapprehended his efforts.  And as one moves through his pieces, the realization slowly emerges (or, in some cases, arrives with the first line) that language is not finally representational of reality, nor even instrumental (where speakers say certain things to achieve certain outcomes).  Pinter helps one see how language can both stabilize and unmoor meaning, even in the same instant (this is the subject of an interesting analysis of Pinter’s drama written by Marc Silverstein), and his work both reflects and straddles the transition from modernism to postmodernism he was helping to write into existence (a point elaborated by Varun Begley).

His politics were similarly complicated, I think, a view that runs contrary to the propagandists who simply read him as a leftist traitor, and a fascist at that.  His attacks on Bush/Blair, often paired in the press with his defense of Milosevic, were read as implying a sort of left-wing fascism where established liberal power is always wrong.  But his intervention in the Milosevic trial was not to defend the war criminal but to argue for a fair and defensible due process, and this insistence on the truth of a thing was at the heart of his compelling Nobel address.  Critics saw his hyperbole as itself a laughable performative contradiction (here he is, talking about the truth, when he hopelessly exaggerates himself).  I saw a long interview done with Charlie Rose, replayed at Pinter’s death, where Rose’s impulse was to save Pinter from this contradiction, and from himself (paraphrasing Rose:  “Surely your criticism is not of all the people in America and Britain, but only made against particular leaders.”  “Surely you do not want to oversimplify things.”).  Pinter agreed he was not accusing everyone of war crimes but also refused to offer broader absolution, since his criticism was of a culture that allowed and enabled lies as much as of leaders who perpetuated them without consequence.  Bantering with Rose, that is to say, he refused to take the bait, and the intentional contradictions persisted.  His Nobel speech (which was videotaped for delivery because he could not travel to Stockholm, and is thus available for viewing online) starts with this compelling paragraph:

In 1958 I wrote the following:  “There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false.  A thing is not necessarily either true or false; it can be both true and false.”  I believe that these assertions still make sense and do still apply to the exploration of reality through art.  So as a writer I stand by them but as a citizen I cannot.  As a citizen I must ask:  What is true?  What is false?

What was so angering for many was Pinter’s suggestion that the American leadership (and Blair too) had committed war crimes that had first to be recognized and tallied, and their perpetrators held to account:

The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile.  The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven.  Hundreds of thousands of deaths took place throughout these countries.  Did they take place?  And are they in all cases attributable to US foreign policy?  The answer is yes they did take place and they are attributable to American foreign policy.  But you wouldn’t know it.  It never happened.  Nothing ever happened.  Even while it was happening it wasn’t happening.  It didn’t matter.  It was of no interest.  The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them.  You have to hand it to America.  It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good.  It’s a brilliant, even witty, highly successful act of hypnosis.

The argument is offensive to many (when the Nobel was announced, the conservative critic Roger Kimball said it was “not only ridiculous but repellent”), though for a playwright so attentive to obscuring masks and the sometimes savage operations of power they hide, it is all of a piece.  McNulty:  “But for all his vehemence and posturing, Pinter was too gifted with words and too astute a critic to be dismissed as an ideological crank.  He was also too deft a psychologist, understanding what the British psychoanalyst D. W. Winnicott meant when he wrote that ‘being weak is as aggressive as the attack of the strong on the weak’ and that the repressive denial of personal aggressiveness is perhaps even more dangerous than ranting and raving.”

As the tributes poured in, the tension between the simultaneous arrogance (a writer refuses to lie) and humility (he felt exposed to all the winds, naked and shelterless) arose again and again.  The London theatre critic John Peter got at this when he passingly noted how Pinter “doesn’t like being asked how he is.”  And then, in back-to-back sentences:  “A big man, with a big heart, and one who had the rare virtue of being able to laugh at himself.  Harold could be difficult, oh yes.”  David Wheeler (at the ART in Cambridge, Massachusetts):  “What I enjoyed [of my personal meeting with him] was the humility of it, and his refusal to accept the adulation of us mere mortals.”  Michael Billington:  “Pinter’s politics were driven by a deep-seated moral disgust… But Harold’s anger was balanced by a rare appetite for life and an exceptional generosity to those he trusted.”  Ireland’s Sunday Independent:  “Pinter was awkward and cussed… It was the cussedness of massive intellect and a profound sense of outrage.”

Others were more unequivocal.  David Hare:  “Yesterday when you talked about Britain’s greatest living playwright, everyone knew who you meant.  Today they don’t.  That’s all I can say.”  Joe Penhall:  Pinter was “my alpha and beta…  I will miss him and mourn him like there’s no tomorrow.”  Frank Gillen (editor of the Pinter Review):  “He created a body of work that will be performed as long as there is theater.”  Sir Michael Gambon:  “He was our God, Harold Pinter, for actors.”

Pinter’s self-selected eulogy conveys, I think, the complication – a passage from No Man’s Land – “And so I say to you, tender the dead as you would yourself be tendered, now, in what you would describe as your life.”  Gentle.  Charitable.  But also a little mocking.  A little difficult.  And finally, inconclusive.

SOURCES:  Beyond Pinter’s own voluminous work, of course – Marc Silverstein, Harold Pinter and the Language of Cultural Power (Bucknell UP, 1993); Varun Begley, Harold Pinter and the Twilight of Modernism (U Toronto P, 2005); “Harold Pinter,” Economist, 3 January 2009, pg. 69; Ed Siegel, “Harold Pinter, Dramatist of Life’s Menace, Dies,” Boston Globe, 26 December 2008, pg. A1; John Peter, “Pinter:  A Difficult But (Pause) Lovely Man Who Knew How to Apologise,” Sunday Times (London), 28 December 2008, pgs. 2-3; Gordon Cox and Timothy Gray, “Harold Pinter, 1930-2008,” Daily Variety, 29 December 2008, pg. 2; Charles McNulty, “Stilled Voices, Sardonic, Sexy:  Harold Pinter Conveyed a World of Perplexing Menace with a Vocabulary All His Own,” Los Angeles Times, 27 December 2008, pg. E1; Dirk Visser, “Communicating Torture: The Dramatic Language of Harold Pinter,” Neophilologus 80 (1996): 327-340; Matt Schudel, “Harold Pinter, 78,” Washington Post, 26 December 2008, pg. B5; Michael Billington, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 15; Esther Addley, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 14; Frank Gillen, “Farewell to an Artist, Friend,” St. Petersburg Times (Florida), 4 January 2009, pg. 4E; “Unflagging in His Principles and Unrivalled in His Genius,” Sunday Independent (Ireland), 28 December 2008; Dominic Dromgoole, “In the Shadow of a Giant,” Sunday Times (London), 28 December 2008, pgs. 1-2; Mel Gussow and Ben Brantley, “Harold Pinter, Whose Silences Redefined Drama, Dies at 78,” New York Times, 26 December 2008, pg. A1.

Counting the humanities

Last week the American Academy of Arts and Sciences released a long-anticipated prototype of its Humanities Indicators project.  The initiative – organized a decade ago by the American Council of Learned Societies, the National Endowment for the Humanities, and the National Humanities Alliance, and funded by the Hewlett and Mellon Foundations – responds to the accumulating sense that (and I guess this is ironic) the humanities haven’t paid enough attention to quantifying their impact and history.  As Roger Geiger notes, “gathering statistics on the humanities might appear to be an unhumanistic way to gain understanding of its current state of affairs.”  But given the value of a fuller accounting, the HI project was proposed as a counterpart to the Science and Engineering Indicators (issued biennially by the National Science Board), which have helped bring attention to the now widely recognized production crisis in the so-called STEM disciplines.

The Chronicle of Higher Education summarized the interesting findings this way (noting that these were their extrapolations; the Indicators simply present data without a narrative overlay apart from some attached essays):

In recent years, women have pulled even with men in terms of the number of graduate humanities degrees they earn but still lag at the tenure-track job level.  The absolute number of undergraduate humanities degrees granted annually, which hit bottom in the mid-1980s, has been climbing again.  But so have degrees in all fields, so the humanities’ share of all degrees granted in 2004 was a little less than half of what it was in the late 1960s.

This published effort is just a first step, and the reported data mainly repackage figures gleaned from other sources (such as the Department of Education and the U.S. Bureau of Labor Statistics).  Information relating to community colleges is sparse for now.  Considerably more original data were generated by a 2007-2008 survey and will be added to the website in coming months.

The information contained in the tables and charts confirms trends long suspected and more anecdotally reported at the associational level:  the humanities’ share of credit hours, majors, and faculty hires has fallen dramatically.  The percentage of faculty hired into tenure lines, which dropped most significantly in the late 1980s and 1990s, is still dropping, though more modestly, today.  Perhaps most telling, if a culture can be said to invest in what it values, is the statistic that in 2006, “spending on humanities research added up to less than half a percent of the total devoted to science and engineering research” (Howard).  As Brinkley notes, in 2007, “NEH funding… was approximately $138.3 million – 0.5 percent of NIH funding and 3 percent of NSF… [And] when adjusted for inflation, the NEH budget today is roughly a third of what it was thirty years ago.”  Even worse:  “[T]his dismal picture exaggerates the level of support for humanistic research, which is only a little over 13% of the NEH program budget, or about $15.9 million.  The rest of the NEH budget goes to a wide range of worthy activities.  The largest single outlay is operating grants for state humanities councils, which disburse their modest funds mostly for public programs and support of local institutions.”  And from private foundations, “only 2.1% of foundation giving in 2002 went to humanities activities (most of it to nonacademic activities), a 16% relative decline since 1992.”  Meanwhile, university presses are in trouble.  Libraries are struggling to sustain holdings growth.
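
To get a feel for the scale Brinkley describes, the quoted ratios can be inverted into implied totals.  This is a back-of-the-envelope exercise of mine using only the figures quoted above, not numbers reported by the Indicators:

```python
# Back-calculating implied totals from the Brinkley figures quoted above.
# Only the quoted ratios are used; the derived totals are my inferences.

neh_2007 = 138.3e6               # NEH funding in 2007, per Brinkley
nih_implied = neh_2007 / 0.005   # NEH was "0.5 percent of NIH funding"
nsf_implied = neh_2007 / 0.03    # ... "and 3 percent of NSF"
program_implied = 15.9e6 / 0.13  # $15.9M is "a little over 13%" of the program budget

print(f"implied NIH budget:         ~${nih_implied / 1e9:.1f} billion")
print(f"implied NSF budget:         ~${nsf_implied / 1e9:.1f} billion")
print(f"implied NEH program budget: ~${program_implied / 1e6:.0f} million")
```

The point of the arithmetic is simply to put the half-percent figure on a legible scale:  the entire NEH, on these numbers, amounts to a rounding error in the federal science budget.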

Other information suggests interesting questions.  For instance:  why did the national production of humanities graduates climb so sharply in the 1960s (doubling between 1961 and 1966 alone)?  Geiger argues the bubble was a product of circa-1960s disillusionment with the corporate world, energy in the humanistic disciplines, the fact that a humanities degree often provided employment entree for women (especially to careers in education), and a booming economy that made jobs plentiful regardless of one’s academic training.  After 1972, Geiger argues, all these trends flipped:  the disciplines became embroiled in theoretical disputes and thus less intellectually compelling for new students (some were attracted by Big Theory; arguably more were antagonized), universities themselves became the target of disillusion, business schools expanded fast and became a more urgent source of competition, and so on.  Today, although enrollments are booming across the board in American universities, the humanities have stabilized at roughly 8% of B.A. degrees, which may mean the collapse has reached bottom.

One interesting suggestion is posed by David Laurence, who reads the Indicators as showing that the nation has produced a “humanities workforce,” which in turn “makes more readily apparent how the functioning of key cultural institutions and significant sectors of the national economy depends on the continued development and reproduction of humanistic talent and expertise.”  This infrastructure includes (as listed by Laurence) schools and teachers, libraries, clergy, writers, editors, museums, arts institutions, theater and music, publishing, and entertainment and news (where the latter involve the production of books, magazines, films, TV, radio, and Internet content).  And this gives some grounds for confidence:  humanities programs continue to attract brilliant students, good scholarship is still produced, and the “’rising generation’ of humanities scholars is eager to engage directly with publics and communities” (Ellison), implying that the public humanities may grow further.  An outreach focus for humanists is a double-edged sword, of course, but it might improve the poor standing university humanities programs have with, for example, state funding councils.

SOURCES:  Jennifer Howard, “First National Picture of Trends in the Humanities is Unveiled,” Chronicle of Higher Education, 16 January 2009, pg. A8; Jennifer Howard, “Early Findings From Humanities-Indicators Project are Unveiled at Montreal Meeting,” Chronicle of Higher Education, 18 May 2007, pg. A12; essays attached to the American Academy of Arts and Sciences Humanities Indicators website, including Roger Geiger, “Taking the Pulse of the Humanities:  Higher Education in the Humanities Indicators Project,” David Laurence, “In Progress:  The Idea of a Humanities Workforce,” Alan Brinkley, “The Landscape of Humanities Research and Funding,” and Julie Ellison, “This American Life:  How Are the Humanities Public?”

When social science is painful

The latest issue of the Annals of the American Academy of Political and Social Science (#621, January 2009) is wholly focused on the report Daniel Patrick Moynihan authored in 1965 on the status of black families, “the most famous piece of social scientific analysis never published” (Massey and Sampson, pg. 6).  The report arose out of Moynihan’s experience in the Kennedy and Johnson administrations working on poverty policy; his small group of underlings included Ralph Nader, working in his first Washington job.  Inspired by Stanley Elkins’ work on slavery (his book Slavery argued that slavery set in motion a still-continuing tendency to black economic and social dependency), Moynihan’s group examined the ways in which welfare policy was, as he saw it, perpetuating single-parent households led mainly by women, at the expense of social stability and racial progress.  [In what follows I am relying almost totally on the full set of essays appearing in the January 2009 AAPSS, and the pagination references that follow are to those articles.]

Moynihan was writing in the immediate aftermath of the passage of the 1964 Civil Rights Act, and a principal theme of the report is that the eradication of legal segregation would not be enough to assure racial equality given larger structural forces at work.  Pressures on the black family had produced a state of crisis, a “tangle of pathology” that was reinforcing patterns of African-American poverty, he wrote.  Moynihan’s larger purpose was to recommend massive federal interventions, a goal subverted, unfortunately, by the report’s rhetorical overreaching (e.g., matriarchy in black families was said to prevent black men from fulfilling “the very essence of the male animal from the bantam rooster to the four star general… to strut”).  The solution, in his view, was to be found in a major federal jobs program for African American men.

The report was leaked to the press and was, by and large, immediately condemned, first because it seemed to provide aid and comfort to racists in its emphasis on out-of-wedlock births as a demographic pathology, and second because it seemed to many readers a classic case of “blaming the victim.”  In fact, the term “blaming the victim” may have its genesis in William Ryan’s use of the phrase to critique Moynihan in the Nation.  I think it likely that the cultural salience of these critiques was later reinforced by a memo Moynihan wrote to Richard Nixon advocating the idea that “the issue of race could benefit from a period of ‘benign neglect,’” a locution he came to regret since that one soundbite came to dominate the actual point of the memo, better encapsulated in this perspective:  “We need a period in which Negro progress continues and racial rhetoric fades” (contrary to the impression given by the benign neglect comment, he was actually trying to be critical of the hot and racially charged rhetoric coming from Vice President Agnew).  Moynihan’s report proved divisive in the African American community, endorsed on release by Roy Wilkins and Martin Luther King, Jr., but condemned by James Farmer.  By the time the report itself was more widely read its reception was distorted by the press frame, and a counter-tradition of research, celebrating the distinctiveness of black community formation, was well underway.

Read by today’s lights, the Moynihan report has in some respects been confirmed, and its critics partly vindicated.  The essays in this special issue offer many defenses.  Douglas Massey (the Princeton sociologist) and Robert Sampson (chair of sociology at Harvard), both writing in the introduction (pgs. 7-8), defend the report against the accusation of sexism:

Although references to matriarchy, pathological families, and strutting roosters are jarring to the contemporary ear, we must remember the times and context.  Moynihan was writing in the prefeminist era and producing an internal memo whose purpose was to attract attention to a critical national issue.  While his language is certainly sexist by today’s standards, it was nonetheless successful in getting the attention of one particular male chauvinist, President Johnson, who drew heavily on the Moynihan Report for his celebrated speech at Howard University on June 4.

Ironically, though, the negative reactions to the leaked report (reactions that suffered because the report itself was not publicly circulated, only the critical synopses) led Johnson himself to disavow it, and no major jobs program for black men was forthcoming as part of Great Society legislative action.  Moynihan left government soon afterward and found the national coverage, much of which attacked him as a bigot, scarring and unwarranted given the overall argumentative arc of the report.  Only when serious riots erupted again in 1968 did jobs get back on the agenda, but the watered-down affirmative action programs that resulted failed to transform the economic scene for racial minorities while proving a lightning rod for conservative opponents (Massey and Sampson, 10).  The main policy change relating to black men since then has been sharp increases in rates of incarceration, not rises in employment or economic stability, a phenomenon which is the focus of an essay by Bruce Western (Harvard) and Christopher Wildeman (University of Michigan).

Several of the contributors to the special issue mainly write to insist that Moynihan has been vindicated by history.  His simple thesis – that pressures tending to disemploy men will in turn fragment families and produce higher incidences of out-of-wedlock birth and divorce, mainly at the expense of women and children – is explicitly defended as vindicated by the newest data.  James Q. Wilson writes that the criticism the report received at the time “reflects either an unwillingness to read the report or an unwillingness to think about it in a serious way” (29).  Harry Holzer, an Urban Institute senior fellow, argues that the trends in black male unemployment have only intensified since the 1960s, thereby reaffirming the prescience of Moynihan’s position and strengthening the case for a dramatic federal response (for instance, Holzer argues that without larger educational investments, destructive perceptions of working opportunities will harden into perceptual barriers to cultural transformation).  The predicate of the essay by Ron Haskins (of the Brookings Institution) is announced by its title:  “Moynihan Was Right:  Now What?” (281-314).

Others argue that the Moynihan claims, which relied on the assumption that only traditional family arrangements can suitably anchor culture, ignore the vitality of alternative family forms that have become more common in the forty years since.  Frank Furstenberg notes that “Moynihan failed to see that the changes taking place in low-income black families were also happening, albeit at a slower pace, among lower-income families more generally” (95).  For instance, rates of single parenting among lower-income blacks have dropped while increasing among lower-income whites.  Linda Burton (Duke) and Belinda Tucker (UCLA) reiterate the criticism that the behavior of young women of color should not be pathologized but is better understood as a set of rational responses to the conditions of cultural uncertainty that pervade poorer communities (132-148):  “Unlike what the Moynihan Report suggested, we do not see low-income African American women’s trends in marriage and romantic unions as pathologically out of line with the growing numbers of unmarried women and single mothers across all groups in contemporary American culture.  We are hopeful that the uncertainty that is the foundation of romantic relationships today will reinforce the adaptive skills that have sustained African American women and their families across time” (144).  Kathryn Edin (Harvard) and colleagues criticize Moynihan’s work for diverting research away from actual attention to the conditions of black fatherhood, which in turn has meant that so-called “hit and run” fathers could be criticized in ways far out of proportion to their actual incidence in urban populations (149-177).

The lessons drawn by the AAPSS commentators from all this for the practice of academic research are interesting.  One, drawn by Massey, relates to the “chilling effect on social science over the next two decades [caused by the Moynihan report and its reception in the media].  Sociologists avoided studying controversial issues related to race, culture, and intelligence, and those who insisted on investigating such unpopular notions generally encountered resistance and ostracism” (qtd. from a 1995 review in Massey and Sampson, 12).  Because of this, and the counter-tendency among liberal/progressive scholars to celebrate single parenting and applaud the resilience of children raised in single-parent households, conservatives were given an ideological opening to drumbeat media reports about welfare fraud, drug usage rates, and violence, and to pathologize black men, an outcome Massey and Sampson argue led to a conservative rhetoric of “moralistic hectoring and cheap sermonizing to individuals (‘Just say no!’).”  Not until William Julius Wilson’s The Truly Disadvantaged (1987) did the scholarly tide shift back to a publicly articulated case for social interventions more in tune with Moynihan’s original proposals – writing in the symposium, Wilson agrees with that judgment and traces the history of what he argues has been a social science abandonment of structural explanations for the emergence of poverty cultures.  The good news is arguably that “social scientists have never been in such a good position to document and analyze various elements in the ‘tangle of pathology’ he hypothesized” (Massey and Sampson, pg. 19).

The history of the report also calls attention to the limits of government action, a question with which Moynihan is said to have struggled for his entire career in public service.  Even accepting the critiques of family disintegration leaves one to ask what role the government might play in stabilizing family formations, a question now controversial on many fronts.  James Q. Wilson notes that welfare reform is more likely to shift patterns of work than patterns of family since, for example, bureaucrats can more reasonably ask welfare recipients to apply for a job than for a marriage license (32-33).  Moynihan’s answer was that the government’s best chance was to provide indirect inducements to family formation, mainly in the form of income guarantees (of the sort finally enacted in the Earned Income Tax Credit).  But asked at the end of his career about the role of government, Moynihan replied:  “If you think a government program can restore marriage, you know more about government than I do” (qtd. in Wilson, 33).

Moynihan was an intensely interesting intellectual who thrived, despite his peculiarities, in the United States Senate (four terms from New York before retiring and blessing Hillary Clinton’s run for his seat), as he had earlier while serving as Nixon’s ambassador to India and Ford’s representative at the United Nations.  At his death in 2003, a tribute in Time magazine said that “Moynihan stood out because of his insistence on intellectual honesty and his unwillingness to walk away from a looming debate, no matter how messy it promised to be.  Moynihan offered challenging, groundbreaking – sometimes even successful – solutions to perennial public policy dilemmas, including welfare and racism.  This is the sort of intellectual stubbornness that rarely makes an appearance in Washington today” (Jessica Reaves, Time, March 27, 2003).  His willingness to defend his views even when deeply unpopular gave him a thick skin and the discipline to write big books during Senate recesses while his colleagues were fundraising.

Moynihan’s intellectualism often put him at odds with Democratic orthodoxy, and sometimes perhaps on the wrong side of an issue – he opposed the Clinton effort to produce a national health insurance system, publicly opposed partial-birth abortion (“too close to infanticide”), and was famously complicit in pushing the American party line at the United Nations, a fact that has been much criticized as enabling the slaughter of perhaps 200,000 victims killed in the aftermath of Indonesia’s takeover of East Timor.  But he also held a range of positions that reassured his mainly liberal and working-class base:  he opposed the death penalty, the Defense of Marriage Act, and NAFTA, and was a famous champion of reducing the government’s proclivity to classify everything as top secret.

But Daniel Patrick Moynihan will be forever linked to his first and most (in)famous foray into the nation’s conversation on race, which simultaneously revealed the possibilities for thoughtful social science to shape public policy and the risks of framing such research in language meant to make it dramatic and attention-getting amid a glutted sea of white papers and task force reports that typically come and go without any serious notice.

Claude Levi-Strauss at 100

On Friday, November 28, Claude Levi-Strauss turned 100, an event that set loose a series of worldwide commemorations.  As one might expect, an intellectual of such enormous influence provoked competing reactions.  In London, the Guardian dismissed Levi-Strauss (“the intricacies of the structural anthropology he propounded now seem dated… [and] he has become the celebrated object of a cult”) while the Independent celebrated him (“his work, after going out of fashion several times, is more alive than ever”), both judgments issued on the same day.  French President Nicolas Sarkozy paid a personal evening visit to the Levi-Strauss apartments, and the museum he inspired in Paris, the Musee du Quai Branly, gave away free admission for a day in his honor (that day 100 intellectuals gave short addresses at the museum or read excerpts from his writings).  ARTE, the French-German cultural TV channel, dedicated the day to Levi-Strauss, playing documentaries and interviews and films centered on his lifework, and the New York Times reported that “centenary celebrations were being held in at least 25 countries.”

Levi-Strauss has not, for obvious reasons, made many public appearances of late.  His last was at the opening of the Quai Branly in 2006; not only did he inspire the museum intellectually, but many of the exhibit objects were donated by him, the accumulation of a lifetime of worldwide travels.  In a 2005 interview with Le Monde, he expressed some pessimism about the planet:  “The world I knew and loved had 2.5 billion people in it.  The terrifying prospect of a population of 9 billion has plunged the human race into a system of self-poisoning.”  In my own field of communication studies, I am not aware that he is widely read or remembered at all, even in seminars on mythology and narrative (two fields in which he made significant contributions), probably an unfortunate byproduct of Jacques Derrida’s sharp attack in two essays that are widely read by rhetorical scholars (“Structure, Sign and Play in the Discourse of the Human Sciences,” in Writing and Difference, Routledge, 1978, and “The Violence of the Letter:  From Levi-Strauss to Rousseau,” in Of Grammatology, Johns Hopkins UP, 1976).

For all I know Levi-Strauss remains must-reading in anthropology, the discipline he did so much to shape as an intellectual giant of the twentieth century.  But his wider absence from the larger humanities (by which I mean simply the extent to which he is read or cited across the disciplines) is, I think, unfortunate.  No intellectual of his longevity and productivity will leave a legacy as pure as the driven snow.  His campaign against admitting women to the Academie Francaise (he argued for what he saw as long tradition) was wrong and rightly alienating.  But his attempt to systematize the universal laws of mythology, which formed what was for some an off-putting four-volume work, remains a brilliant and densely rich analysis of the underlying logics of mythological meaning-making.

But the trajectory of structuralism, and in turn poststructuralism and contemporary French social thought (including the research tradition shaped by Jacques Lacan, who founded his account of the Symbolic on Levi-Strauss’ work on kinship and the gift), cannot be understood without engaging his work and his exchanges with Marxist dialectics, Malinowski, Roland Barthes, Jean-Paul Sartre, Paul Ricoeur, and the many others who respected his work even when they profoundly disagreed with it.  Lacan’s first 1964 seminar on “The Four Fundamental Concepts of Psychoanalysis” virtually begins by raising a Levi-Strauss-inspired question (Lacan wonders whether the idea of the pensée sauvage is itself capacious enough to account for the unconscious as such).  Today it is Foucault who is fondly remembered for pushing back against Sartre’s temporally based dialectical theory, but at the time Levi-Strauss played as significant a role (and his essays, which take on Sartre in part by deconstructing the binary distinction between diachronic and synchronic time, remain models of intellectual engagement).

Levi-Strauss has been a key advocate for a number of important ideas that are now accepted as part of the conventional wisdom of social theory, and that absent his articulate forcefulness might still have to be fought for today:  the idea that Saussure and Jakobson’s work on language should be brought to bear on questions relating to social structure, the thought that comprehending the relationship of ideas within a culture is more important to intercultural understanding than anthropological tourism, the sense that cultural difference cannot be reduced to the caricature that modern peoples are somehow smarter or wiser than ancient ones or that modern investigators should inevitably disparage the “primitive,” the insight that the relationship between things can matter more than the thing-in-itself, and many more.

But the case for reading Levi-Strauss goes beyond his interesting biography (including his sojourn in exile from the Nazis at the New School for Social Research in New York and his public longevity as a national intellectual upon his return to France), his historical role in old philosophical disputes, or even the sheer eloquence of his writing (Tristes tropiques, written in 1955, remains a lovely piece of work and a cleverly structured narrative argument).  It is, I think, a mistake to dismiss Levi-Strauss’ work as presuming to offer a science of myth – the best point of entry on this point is the set of lectures he delivered in English for the Canadian Broadcasting Corporation in the late 1970s (published as Myth and Meaning in 1978), where his overview reveals, as if it were necessary, the layers of ambiguity and interpretation that always protected Levi-Strauss’ work from easy reductionism.

And the exchanges with Derrida and Sartre merit a return as well.  There is an impulse, insidious in my view, to judge Derrida’s claims as a definitive refutation when they signal a larger effort to push the logic of structuralism and modernism to its limits.  The post in poststructuralism is not an erasure or even a transcendence but a thinking-through-the-implications maneuver that lays bare both the strengths and limits of the tradition begun by Saussure.  Levi-Strauss developed a still-powerful account of how linguistic binaries structure human action, but he was also deeply self-reflective as he interrogated the damage done to anthropological theory by its own reversion to binary logics (such as the elevation of literacy over orality, or advanced over primitive societies).  Paul Ricoeur, and Derrida himself, saw the debate with Levi-Strauss as a definitive refutation (Ricoeur, writing in his Conflict of Interpretations, set Derrida’s “school of suspicion” against Levi-Strauss’ “school of reminiscence”).  But the insights generated by principles that Derrida (and Levi-Strauss) rightly understood as provisional and even contradictory remain powerful, perhaps even more so at a time when poststructuralist logics seem to be running their course.

None of this denies the real objections raised against Levi-Strauss’ version of structuralism – its methodological conservatism, or its tendency (offered in the name of scholarly description) to valorize or render invisible power arrangements that allow one part of any binary to obliterate or repress its opposite.  But Derridean thought is enriched, not subverted, by putting it back into conversation with Levi-Strauss.  To take just one example, Levi-Strauss’ work on myth usefully presages Derrida’s own insights on the limits of inferring a “final” or “original” meaning.  The elements of myths circulate within the constraints of social structure to create endless transformations and possibilities of meaning, best understood not through the logics of reference or mimesis but through logics of context and relationship.  And the case Levi-Strauss articulated against phenomenology still holds up pretty well in the context of its reemergence in some quarters (in communication studies, phenomenological approaches are increasingly advocated as a way forward in argumentation theory and cinema studies).  The first volume of Structural Anthropology remains one of the most important manifestos for structuralism.

From the vantage point of communication, one of the intriguing dimensions of Levi-Strauss’ work is his claim that modern societies are plagued by an excess of communication.  When first articulated, his concern related to the risk that too much cross-cultural exchange would obliterate differences, a worry then current in the work of scholars like Herbert Schiller and in the circa-1970s view that the allure of America’s entertainment culture was producing a one-way destruction of other societies.  But Levi-Strauss meant something more too, and his argument is made intriguing in the light of his lifelong commitment to the idea that the deep grammars of cultural mythologies are universal.  For it is the interplay of universally shared experience and local variability that expresses the real genius of the human condition, and the twin threats of global groupthink and overcrowding are still not quite fully apprehended, even within the terms of the poststructuralist conversations he did so much to shape.

Michel Foucault, writing in The Order of Things, says of Levi-Strauss that his work is motivated “by a perpetual principle of anxiety, of setting in question, of criticism and contestation of everything that could seem, in other respects, as taken for granted.”  Foucault’s sentiment is complicated and not intended, as I read it, as a simple compliment.  But it points to an aspect of a century-long life of work that should also attract continued interest.

SOURCES:  “In praise of Claude Levi-Strauss,” (London) Guardian, 29 November 2008, pg. 44; John Lichfield, “Grand chieftain of anthropology lives to see his centenary,” (London) Independent, 29 November 2008, pg. 38; Steven Erlanger, “100th birthday tributes pour in for Levi-Strauss,” New York Times, 29 November 2008, pg. C1; Albert Doja, “The advent of heroic anthropology in the history of ideas,” Journal of the History of Ideas (2005):  633-650; Lena Petrovic, “Remembering and disremembering:  Derrida’s reading of Levi-Strauss,” Facta Universitatis 3.1 (2004):  87-96.

William Eggleston invented color

The Whitney in New York has just opened a major retrospective of William Eggleston’s long career as an innovator in photography (William Eggleston:  Democratic Camera, Photographs and Video, 1961-2008), which perhaps brings full circle a journey that has been mainly centered in the American South and the Mississippi Delta (Memphis most of all) but that, since a galvanizing 1976 exhibit at the Museum of Modern Art (MOMA), has exerted wide force on the arts.

Although the MOMA had exhibited color photography once before and had shown photos in its galleries as far back as 1932, its decision to showcase Eggleston and his color-saturated pictures in 1976 was exceptionally controversial.  At the time the New York Times said it was “the most hated show of the year.”  “Critics didn’t just dislike it; they were outraged.  Much the way viewers were aghast when Manet exhibited Olympia, a portrait of a prostitute, many in the art community couldn’t figure out why Eggleston was shooting in color” (Belcove).  Eggleston’s subjects can be seen as totally mundane, and his public refusal to illuminate how his main works are staged proved infuriating (and actually, to the contrary, Eggleston has long insisted that he never poses his subjects, arguing, astonishingly, that these are in every case single-shot images and that either he gets the shot or moves on to the next without regret).  Prior to Eggleston, art photography was most often black-and-white.  Thus, for students of the art and practice of photography, and given his enormous visual influence, one can say without complete hyperbole that William Eggleston invented color.

Well, maybe that is a little hyperbolic.  After all, those seeking the founding of color might better retreat to the “Cambrian Explosion” 543 million years ago, when the diversification of species was sparked by the evolutionary development of vision; in that time, “color first arose to help determine who ate dinner and who ended up on the plate” (Finlay 389).  Or one might look to the late Cretaceous period – prior to that, “plants did not produce flowers and colored leaves.”  Further elaborating this perspective, Finlay (391) writes:

As primates gained superior color vision from the Paleocene to the Oligocene (65 to 38 million years ago), the world for the first time blossomed into a range of hues.  At the same time, other creatures and plants also evolved and settled into ecological niches.  Flowering plants (angiosperms) radiated, developing colored buds and fruits; vivid insects and birds colonized the plants, attracted by their tints and serving to disperse their pollen and seeds.  Plants, insects, birds, and primates evolved in tandem, with color playing a crucial role in the survival and proliferation of each.  The heart of these developments lay in upland tropical Africa, where lack of cloud cover and therefore greater luminance resulted in selective evolutionary pressure for intense coloration.

It states the obvious, but I’ll do it anyway, to note that colors, along with the human capacity to recognize and distinguish among them, transform human experience.  Part of the reason Aristotle so famously preferred drawing to color is that the latter can too easily overwhelm one’s critical capacities (for him this was evidenced by the fact that a viewer in the presence of rich color has to step back, since color blurs at close range, and this necessary distancing inevitably diverts audiences from attending to the artistic details present in good drawing).  Plato had disdained color too, thinking it merely an ornamental, ephemeral, and surface distraction, a view oddly recalled later by Augustine, who warned against the threat posed by the “queen of colors” who “works by a seductive and dangerous sweetness to season the life of those who blindly love the world” (qtd. in Finlay, 400).  It was only in the 12th century that Christians came fully around to color, at about the time stained glass technology was undergoing fast refinement; suddenly colored light was seen as evoking the Divine and True Light of God.

But for centuries color was dismissed as feminine and theoretically disparaged since it “is impossible to grasp and evanescent in thought; it transcends language, clouds the intellect, and evades categorization” (Finlay, 401).  Color was thus seen as radically irrational by the thinking and professing classes – Cato the Elder said that colores floridi (florid colors) were foreign to republican virtue – all of this in interesting contrast to the Egyptian kings, who saturated their tombs with gorgeous coloration, and to the Greeks, who ignored Aristotle’s warnings and painted their Parthenon bright blue and their heroic marble sculptures right down to the red pupils we would today prefer to digitize out, since they apparently evoke the idea of Satanic possession.

The history of color is regularly bifurcated by scholars into work emphasizing chromophilia (the love of color) and chromophobia, which by contrast has often reflected an elite view that color is garish and low class.  Wittgenstein concluded that the radically subjective response to color could never be specified in a manner adequate to philosophy:  “there is merely an inability to bring the concepts into some kind of order.  We stand there like the ox in front of the newly-painted stall door” (qtd. in Finlay, pg. 383).

In the context of early film production and the industry’s emerging use of color and then Technicolor, colors were seen by some as a “threat to classical standards of legibility and coherence,” necessitating close control:

For instance, filmmakers monitored compositions for unwanted color contrasts, sometimes termed visual magnets, that might vie for attention with the narratively salient details of a scene.  Within a few years the body of conventions for regulating color’s function as a spatial cue had been widely adopted.  The most general guideline was that background information should be carried by cool colors of low saturation, leaving warm, saturated hues for the foreground.  Narrative interest should coincide with the point of greatest color contrast. (Higgins)

The ongoing power of such conventions has recently led Brian Price, a film scholar at Oklahoma State University, to argue that the imposition of saturated and abstracted color in recent films by Claire Denis and Hou Hsiao-Hsien exemplifies a form of resistance to globalized filmmaking and its industrial grip on the world’s imagination.

A paradox in Eggleston’s work is that although his subjects – Elvis’ Graceland, southern strip malls, the run-down architecture produced as often by the simple ravages of time and nature as by neglect – are dated and immediately evocative of a completely different though not wholly lost and variously tempoed time, his photographs seem timeless.  Like the man himself, described by one journalist as “out of place and out of time,” Eggleston captures elements of modern life that persist, and his attention to the formal properties of color and framing makes his work arresting even for those uninterested in or unimpressed by the odd assemblages of southern culture that constitute his most interesting subjects.  This paradox, in turn, can produce a sense in the viewer of vague dread, as if the contradictions inhabited by the idea of serendipitous composition reveal dangers of which we are customarily unaware.  At the same time, because Eggleston has never seemed interested in documentary reportage and has defaulted instead to literal photographs that accentuate the commonplace, he “belongs to that rare and disappearing breed, the instinctive artist who seems to see into and beyond what we refer to as the ‘everyday’” (O’Hagan).

Technically speaking, Eggleston beat others to the punch because his personal wealth enabled him to produce very high quality and expensive prints of his best work; another benefit of this wealth may be that, as Juergen Teller has put it, “he has never had the pressure of being commercial.”  The dye-transfer print process he has used since the 1960’s (Eggleston resists the shift to the digital camera and image manipulation, simply noting that it is an instrument he does not know how to play) was borrowed from high-end advertising.  And although rejected early on and in some quarters – the conservative art critic Hilton Kramer notoriously described his 1976 New York exhibit as “perfectly banal” – he has been honored late in life as a prophet in his own time:  a lifetime achievement award from the International Center of Photography and another from Getty, along with honors from the National Arts Club and others too numerous to mention.  Eggleston seems immune to the critiques, whether hostile or friendly, a fact reflected in the details of his mercurial and sometimes weird personal life but also in his refusal to talk talk talk about his work:  “A picture is what it is, and I’ve never noticed that it helps to talk about them, or answer specific questions about them, much less volunteer information in words.  It wouldn’t make any sense to explain them.  Kind of diminishes them.”

The distinctive Eggleston aesthetic has influenced David Lynch (readily evident in his Blue Velvet), Gus Van Sant (e.g., Elephant, an explicit homage), Sofia Coppola (the Virgin Suicides; “it was the beauty of banal details that was inspirational”), the band Primal Scream (his “Troubled Waters” forms the cover art for Give Out But Don’t Give Up) and many others.  David Byrne is a friend and Eudora Welty was a fan.  Curiously, despite his influence on avant-garde cinema and his own efforts at videography, Eggleston professes faint interest in film, although he is said to like Hitchcock.

Finlay has noted that “Brilliant color was rare in the premodern world.  An individual watching color television, strolling through a supermarket, or examining a box of crayons sees a larger number of bright, saturated hues in a few moments than did most persons in a traditional society in a lifetime” (398).  What was true of premodernity was also true of photography wings in the world’s major art museums.  Until William Eggleston.

SOURCES:  Holland Cotter, “Old South Meets New, in Living Color,” New York Times, 6 November 2008; Sean O’Hagan, “Out of the Ordinary,” The (London) Observer, 25 July 2004; Rebecca Bengal, “Southern Gothic: William Eggleston is Even More Colorful than His Groundbreaking Photographs,” New York Magazine, 2 November 2008; Julie Belcove, “William Eggleston,” W Magazine, November 2008; Scott Higgins, “Color Accents and Spatial Itineraries,” Velvet Light Trap, no. 62 (Fall 2008): 68-70; Brian Price, “Color, the Formless, and Cinematic Eros,” Framework 47.1 (Spring 2006): 22-35; Jacqueline Lichtenstein, The Eloquence of Color:  Rhetoric and Painting in the French Classical Age, trans. Emily McVarish (Berkeley:  University of California Press, 1993); Robert Finlay, “Weaving the Rainbow:  Visions of Color in World History,” Journal of World History 18.4 (2007): 383-431; Christopher Phillips, “The Judgment Seat of Photography,” October 22 (October 1982): 27-63.

The other William Ayers

Driving to work yesterday I heard one of Atlanta’s conservative talk radio hosts announce with a mixture of pride and wistfulness that, as a concession to Barack Obama’s victory, he had thrown out all his “research” on William Ayers, whose violent past he had been preaching about for months.  Now that Obama has been chosen by the voters to lead the nation, the talk show host noted, it was time to move past Ayers and Jeremiah Wright and on to larger topics.  At the same time, even as Sarah Palin has continued to insist that the association (however modest) still matters, Ayers himself has emerged into the public spotlight, giving recent interviews (he was on Good Morning America the other morning) and publishing op-ed pieces.

As the election unfolded, only passing notice was typically given to the other/older William Ayers, the University of Illinois at Chicago professor of education.  Now that November 4th has passed, and accepting for the moment the impulse to bracket his past so as to better understand his influence today as an advocate for educational reform, I’ve been reading some of his work on social justice pedagogy.  It was this work, actually, that led him to cross paths with Obama:  their mutual interest in school reform led both to agree to serve on the same Chicago board of directors, an association that led Obama’s critics to question the wisdom of his political and intellectual alliances.

Ayers has a way of getting right to the point, a trait much on display in the recent interviews but one that also makes him an interesting writer.  One book review he authored begins:  “Drawing on traditional methods and straightforward approaches… Vinovskis fails to add anything new to the story of the origins of Head Start despite constant and irritating assertions to the contrary.”  And an essay co-authored with Michael Klonsky begins, “Each day, children in Chicago are cheated out of a challenging, meaningful, or even an adequate education…  Despite the well-publicized crime rate in Chicago’s poor neighborhoods, the greatest robbery is not in the streets, but in the schools.”  But Ayers’ purpose is not just attention-grabbing or op-ed-style hyperbole, for he quickly backs up such provocative claims with truly appalling data about urban education.  The Chicago research, which appeared in 1994, noted that as of that year, for instance, “reading scores in almost half of Chicago’s schools are in the lowest 1% of the nation.”

Ayers’ work in Chicago does partly mirror the logic of his anti-war activism, which was animated by the view that criminal negligence demands a proportionally urgent response (this was the argument he made on GMA in justifying his participation in anti-Vietnam War insurgency:  what he saw as America’s murderous policies in Southeast Asia were so monstrous that they demanded even violent opposition).  In the context of education reform, this has led to the mobilization of what might best be considered a social movement, organized to provide tangible opposition to schooling bureaucracies.  And this, in turn, leads to a wide-scale systemic perspective that attends as much to the macro-allocation (or misallocation) of educational funds as to the local dynamics of this or that classroom.  Schools in Illinois, as elsewhere, are funded by property taxes, and because urban property values tend to be lower, they generate less revenue than ends up available in the richer suburbs.  In 1992, Illinois voters narrowly rejected a statewide constitutional amendment to provide funding equalization (a constitutional amendment requires 60% support, while this one received 56%).

The passions elicited by the issue of educating children run deep.  Ayers recounts the firestorm provoked when, in 1988, then-governor of Illinois Jim Thompson resisted higher funding for Chicago schools – he didn’t want to throw more money into a “black hole.”  When one of Chicago’s representatives in the state legislature accused Thompson of having made a racist comment, pundits accused the legislator of playing the race card.  But such back-and-forths are not surprising given the complex racial politics that have characterized the city’s political history, not to mention the long period of conflict between the city and its teachers union that produced a regular cycle of walkouts in the 1980’s and ‘90’s.  One can gather some sense of Ayers’ fuller indictment in the following passage, also written in the mid-1990’s:

Returning to Chicago [from a discussion of schooling in South Africa], a similarly persuasive argument can be made that the failure of some schools and some children is not due to a failure of the system.  That is, if one suspends for a moment the rhetoric of democratic participation, fairness, and justice, and acknowledges (even tentatively) that our society, too, is one of privilege and oppression, inequality, class divisions, and racial and gender stratifications, then one might view the schools as a whole as doing an adequate job both of sorting youngsters for various roles in society and convincing them that they deserve their privileges and their failures.  Sorting students may be the single, brutal accomplishment of U.S. schools, even if it runs counter to the ideal of education as a process that opens possibilities, provides opportunities to challenge and change fate, and empowers people to control their own lives.  The wholesale swindle in Chicago, then, is neither incidental nor accidental; rather, it is an expression of the smooth functioning of the system.

The movement that emerged in reaction to the frustrating situation in Chicago was in large measure centered on the idea of accountability, a rhetorical rubric that can accommodate both conservatives (who might prefer to emphasize how schools fail to respond to or engage the interests of parents) and liberals (who might prefer to emphasize the need for greater investments, paired with oversight better able to hold bureaucracies to account).  Emerging as it did under the leadership of Mayor Harold Washington, the mobilization of parents and educational reformers brought (Ayers and Klonsky argue) African-American parents to the forefront, along with the traditional themes of civil rights organizing (grassroots activity, decentralization, desegregation, community empowerment).  But reformers were also assisted by the then-recent emergence of academic research that provided concrete data able to call attention to the true problems.  Early on, Mayor Washington was able to bring together mainly minority parents and white business leaders, all of whom shared concerns about poor schooling, but that coalition fragmented when the funding issue percolated to the top of the reform agenda (community leaders favored more equitable tax policies and greater funding, while many in the business community were opposed).

Starting with the local reflects an ongoing theme in Ayers’ work, and in an essay he wrote in 1988 it becomes an explicit focus of his account of his past.  Ayers wrote:  “My experience with making change leaves me unimpressed with theories of change, big theories and little theories alike.  Big theories are often compelling because of their bold self-assurance and their tidy certainty…, [but] too often the self-activity of people is lost in a kind of determinism…  Small theories of change promise a different kind of certainty, but they fail as often for missing the larger context…”  Such a view, in turn, has shaped Ayers’ subsequent work on education as social justice, in which he repeatedly insists he is seeking not airy abstraction but on-the-ground changes for children.

Ayers departs from social justice accounts that see education as a mechanism for improving students’ economic and social prospects.  For Ayers such an approach reflects a certain naivete, since it rests on a basic endorsement of the overall forces and institutions that shape society and often constrain progress even for the well educated (the emphasis in such an approach can rest too fully on equipping under-educated students for society, without enabling changes in the political and social system that would make the resulting educated citizens more welcome).  Ayers thus also argues that social justice education has to be politically empowering even as basic life skills are inculcated – schools might be imagined as also fostering real political agency.

The challenge, of course, is that education is complicated, and the dynamics of successful teaching cannot be reduced to axiomatic rules teachable in college education classrooms.  In Teaching Toward Freedom, his 2004 book, Ayers (channeling Walt Whitman) cites the following as offering a more hopeful (and explicitly poetic) view of the well-formed citizen:

Love the earth and the sun and the animals, despise riches, give alms to everyone that asks, stand up for the stupid and the crazy, devote your income and labor to others, hate tyrants, argue not concerning God, have patience and indulgence toward the people, take off your hat to nothing known or unknown or to any man or number of men, go freely with powerful uneducated persons and with the young and with the mothers of families, re-examine all you have been told at school or church or in any book, dismiss whatever insults your soul, and your very flesh shall be a great poem.

SOURCES:  William Ayers, “The Republican’s Favorite Whipping Boy, Former Student Radical William Ayers Tells What it Was Like to Be Painted as a Symbol of Evil by McCain and Palin,” Montreal Gazette, 8 November 2008, pg. B7; Colin Moynihan, “Ex-Radical Talks of Education and Justice, Not Obama,” New York Times, 27 October 2008, pg. A22; William Ayers and Michael Klonsky, “Navigating a Restless Sea:  The Continuing Struggle to Achieve a Decent Education for African American Youngsters in Chicago,” Journal of Negro Education 63.1 (1994): pgs. 5-18; Ayers, “The Shifting Ground of Curriculum Thought and Everyday Practice,” Theory into Practice 31.3 (Summer 1992): pgs. 259-263; Ayers, “Problems and Possibilities of Radical Reform:  A Teacher Educator Reflects on Making Change,” Peabody Journal of Education 65.2 (Winter 1988): pgs. 35-50; Emery Hyslop-Margison, “Teaching for Social Justice,” Journal of Moral Education 34.2 (June 2005): pgs. 251-256; John Pulley, “Former Radicals, Now Professors, Draw Ire of Alumni at Two Universities,” Chronicle of Higher Education, 16 November 2001, pg. A32.

Paula Vogel’s “How I Learned to Drive”

Tonight I had the opportunity to see Paula Vogel’s remarkable Pulitzer Prize-winning “How I Learned to Drive” in production at the Georgia State University theatre.  The show relies on a very small cast, only five in all, a fact made somewhat ironic by the billing of three of them in multiple roles as “Greek choruses.”  The play was first performed off-Broadway about a decade ago (in a production that starred the amazing Mary-Louise Parker) in a space likely not much larger than our university theatre – a fact that works considerably to the play’s benefit, for reasons to which I’ll soon return – and the student-led production I saw this evening was powerful in many respects.

If you haven’t had the chance to see “Drive” on stage or to read the play, you should know that it is in some ways typical of Vogel’s work in that its subject matter is exceptionally difficult:  the play centers on the deeply complex and troubling relationship between Li’l Bit, a young woman, and her uncle-by-marriage, Peck, who both teaches her to drive and molests her.  The piece manages to deploy the gimmicks available to live production without ever quite seeming gimmicky, all the while speaking to unspeakable acts of exploitation without either preaching or rationalizing.  Perhaps the most remarkable aspect of “Drive” is that it leads its audience to a comprehension of how situations of abuse arise in ways that never fully demonize Peck, even as we see him step by step approach and then finally fall headfirst into the abyss.  Peck is evil but also sympathetic; Li’l Bit is forever scarred but also able, in some sense, to move beyond the disaster of abuse and loss, and all at the same time.

For me Paula Vogel’s work comes into sharper focus when one realizes that she is a teacher by trade (head for many years of Brown University’s playwriting workshop and now newly appointed to the Yale Drama School).  It seems to me that in many ways the sometimes pathological relationship between manipulator and victim can be better comprehended through the dynamics of even healthy teacher-student interaction, where differences in age and mutually performed strategies of manipulation are ever-present.  But Vogel’s work cannot be so easily explained:  as sensitive as she is to scenes of educational encounter, she is also deeply thoughtful about the distortions arising in the theatre itself, where innovation is both enabled and destroyed (“We have never figured out how to produce art in this country.  The culture has successfully made sure that we are going to be entertainers of the ruling class, the rich…  We are now nothing more than a backdrop for cocktail parties for the ruling class” – thoughts expressed even as she challenges all this in her own work).

“How I Learned to Drive” can be read as a retelling of Nabokov’s Lolita (interviewed on the Lehrer NewsHour right after winning the Pulitzer, Vogel told interviewer Elizabeth Farnsworth that “in many ways I think that this play is an homage to Lolita, which I think is one of the most astonishing books ever written”).  But the play recasts almost every important detail (apart from the fact of a profoundly wrong older man and younger girl “relationship”) in ways that bring to our attention deeply vexed ethical questions.  Humbert is a literature scholar whose creepy and distorted repetition complex (he sees in Dolores/Lolita traces of his past romantic failures) is wholly pathological, and the novel is narrated through his eyes; “Drive” is narrated by the girl, and Peck is married to a beautiful woman with whom he seems to enjoy sexual intimacy.  Humbert and Peck are both shaped by the World War II years, but whereas for Humbert the damage is done by his true love’s premature death, for Peck the suggestion is that he was scarred by having himself been molested as a young boy.

Lolita is manipulative but also crude and finally unexceptional; Li’l Bit is in some respects more naive and less sexually manipulative but is also more fully formed and, apart from the trauma inflicted on her by Peck, weirdly and fully self-aware.  Humbert falls instantly in love with Dolores when he first sees her, at the age of twelve, sunbathing; Peck has “loved” Li’l Bit since the day she was born, from the time when he could literally hold her entirely in the palm of his hand (and of course he continues to hold her in the palm of his hand until the day of her eighteenth birthday; the “palm of the hand” becomes a repeated motif in the script).  Nabokov described Humbert as “a vain and cruel wretch”; Peck, in telling contrast, comes across not as vain but at times as lonely and compellingly charismatic.  Peck is a wretch, to be sure, but is motivated more by tragically misplaced affection than by cruelty.  Humbert’s increasingly pathological behavior leads finally to a murder; Peck’s to self-immolation as he drinks himself to death.

Peck’s driving lessons provide a parallel scaffolding that helps make sense of and externalize his internally rehearsed strategies of manipulation, and they also create a metaphorical apparatus by which we can see the complex patterns of exhilaration and lost control and entrapment that distort familial affection into molestation.  The car is a mode of escape (even, finally, for Li’l Bit) and a sanctioned space of private encounter, a site where the illicit thrill of sexual exhilaration for Peck literally occurs simultaneously with the guilty pleasure of illegally driving for the girl.  Nowhere is this metaphorical layering more compelling than in the last seconds of the production, when Li’l Bit, now in her mid-thirties, returns again and again to the automobile and the long drive, pressing hard on the accelerator as a means of escaping her past even as the very act of driving reenacts both the trauma and, yes, the guilty pleasures of her remembered relationship with this man in whose orbit she uneasily traveled, filled both with love and its all-too-easily recalled opposite.

Less compelling for me were the more caricatured familial dynamics Vogel enacts through Li’l Bit’s grandparents; while they convey the very real sense in which bystanders can become enablers, the nuance of the core (Li’l Bit/Peck) relationship is missing from the grandparents’ tortured marriage.  And a scene where Peck seduces a nephew (this time the metaphor is fishing, not driving, and the site of molestation a tree house and not the car) is clearly decisive in depersonalizing Peck’s distorted desires but also perhaps too completely ambiguous (an underlying dynamic that seems at work throughout is the ironic possibility that Peck has deeply sublimated homosexual desires and that part of Li’l Bit’s revulsion at his advances relates to her own coming-into-being as a lesbian).

But these are minor complaints – Peck’s seduction of the boy is challengingly ambiguous but also amazingly evocative, since the scene is played in pantomime, the boy never seen – and the frequently earth-shattering power of Vogel’s writing comes through even in those scenes that fall just slightly short of perfection.  One of the most striking and heartbreaking monologues in the entire play is given by Peck’s wife (Li’l Bit’s aunt), who is able (and this is Vogel’s greatest gift, I think) to articulate, in a speech that lasts no more than ninety seconds, all the tangled and tragic rhythms of awful knowledge and its denial, love and its capacity both to sharpen and blur one’s comprehension, and a longing wistfulness that desperately wishes for a return to a normalcy that has been fully and impossibly foreclosed.

Along the way the play offers a challenging meditation on love:  At what precise moment in a relationship does love lose its innocence and become guilty and wrong?  To what extent, if any, can horrible behavior be mitigated because it arises out of an apparently genuine loving regard?  And what is the meaning of consent?  Vogel’s narrative makes plain that consent is never fully settled even at those steps where it is non-coercively and repeatedly requested, and it also complicates the idea that the responsibility of consent runs only one way:  at almost every step of the unfolding narrative, both the older man and the younger girl comprehend what is happening at an important level of conscious realization, even as each is blinded by peculiar and tragically naive misconceptions.

The many recognitions Vogel has received (Obie, Drama Critics, Pulitzer, to name only a few) reflect the perfect affinity between the play and the physical intimacy of live theatre.  The performance I saw tonight wholly confirmed David Savran’s insight that “A Paula Vogel play is never simply a politely dramatized fiction.  It is always a meditation on the theatre itself – on role-playing, on the socially sanctioned scripts from which characters diverge at their peril and on a theatrical tradition that has punished women who don’t remain quiet, passive and demure.”  Lolita works better on screen because the nature of Humbert’s attraction to Dolores is itself initiated in an act of cinematic spectacle – Humbert falls in love with a distant image of the girl, captivated by the mirage before he ever comes to understand her more mundane true persona.  Not so for “Drive,” where the ever-accumulating erotic charge arises not out of Peck’s view-from-afar but out of the more fully embodied encounters of touch and conversation and smell and taste and intimate contact, not to mention their absence.

The theatrical setting also performs another important function that would be difficult to enact on screen.  Vogel’s script jumps around, scrambling chronological time even while the principal characters (Peck and Li’l Bit) are not physically altered or differently made up.  The effect is that at any given time, although the audience never loses sight of the underlying inappropriateness created by the difference in age, one loses track of how old Li’l Bit is – in this moment is she thirteen or thirty? – and so the combined mechanism of scrambled chronology and theatrical performance helps us see her as Peck sees her:  young and old, naive and sophisticated, innocent and maybe also guilty, all at once, blurred together in ways that distort judgment and help make Peck’s agonizingly awful missteps more comprehensible.

SOURCES:  “A Prize Winning Playwright,” Elizabeth Farnsworth interviews Paula Vogel on the Lehrer Newshour (online), 16 April 1998; Elena More, “Stage Left:  An Interview with Paula Vogel,” PoliticalAffairs.net: Marxist Thought Online, May 2004;  Gerard Raymond, “Paula Vogel:  The Signature Season,” Theater Mania, 12 October 2004; David Savran, “Paula Vogel’s Acts of Retaliation,” American Theatre 13.4 (April 1996), pg. 46; Dick Scanlon, “Say uncle theatre,” The Advocate, 10 June 1997, pg. 61; Stefan Kanfer, “Li’l bit of incest,” New Leader, 30 June 1997, pg. 21; David Savran, “Driving Ms. Vogel,” American Theatre 15.8 (October 1998), pg. 16.

On the relevance of Lionel Trilling

I am aware of no specific anniversary that has prompted the spate of recently revitalized interest in the life and work of Lionel Trilling, the legendary Columbia University professor and author most famously of The Liberal Imagination (1950).  But suddenly his writing has sprung back into intellectual circulation:  the first third of an unfinished novel, The Journey Abandoned, has been published this year, and New York Review Books has just reissued The Liberal Imagination.  Read by today’s lights – which is to say outside the culturally dominant frame of the Cold War and American anti-communism that shaped its production and Trilling’s world view – the book leaves one hard-pressed to imagine what made it a national bestseller (more than 100,000 copies were sold in paperback).  All the essays had previously appeared in print, many in the Partisan Review to which Trilling was long attached, and many engage particular novels in ways one would assume rather inaccessible to the wider reading public.  Still, I have found myself attracted to The Liberal Imagination (and have recently been reading my way through it), in part because of the way it has been described as a “monument of humanism” (McCarter) but also simply to understand the basis of his enormous influence in American literary critical circles.

Louis Menand’s introduction to the new reprint, which has been strongly attacked by Leon Wieseltier (a Trilling student) as misconstruing Trilling’s sense of the relationship between art and literature and thereby demeaning the urgency Trilling saw in the literary critical enterprise, nonetheless rightly calls attention to a kind of humbled arrogance I find attractive in Trilling’s work.  Trilling did not mainly want to be remembered as a critic (he wished most of all to be considered a novelist); in fact, because he knew only the English language, he worried that he was not even properly a scholar.  “But,” writes Menand, “although he may not have wanted what he had, and he may not have understood entirely why he had it, he appreciated its value and tended it with care.”  The result is deeply polished prose that, if it fails, does so less from any overweening arrogance of compositional style than because Trilling’s work is so saturated with dialectical tendencies that one can be frustrated in seeking finally to pin down his position.

The central theme of the book, which was also a central problematic of Trilling’s lifetime critical production, strikes me as possessing a profound continuing relevance, even if Trilling’s own position reads as less coherent than it would have more than a half century ago.  Trilling was concerned to specify, and sometimes to ambiguate, the relationship between literature and liberal politics.  Liberalism’s ideological impulses (and this is true of all ideological formations) lead to an inevitable oversimplification of the human condition – in liberalism’s case, by reducing the aim of all politics to the attainment of equality and freedom, ideals which when applied risk doing violence to the rougher edges of the polity that should, by liberalism’s own lights, be tolerated – and so liberalism required reflective challenge if it was to survive without lapsing into empty and dangerous dogma.  Because conservatism seemed to Trilling an unavailable corrective in producing morally mature individuals (as he famously put it in the preface, “In the United States at this time liberalism is not only the dominant but even the sole intellectual tradition.  For it is the plain fact that nowadays there are no conservative or reactionary ideas in general circulation.”), it fell to the novelist to interrogate the tendency to empty certitude to which liberalism in all its American variations was prone.

Why literature?  Because great novels (and for Trilling this mainly meant stories to some extent historically distant from contemporary culture) offer representations that invite critical speculation and open ethical vistas.  This is so because the novelist situates moral and political struggle within characters – imagined persons who make ideological abstractions concrete and, on account of their embodiment, reveal the limits of theory (Donald Pease has suggested that Trilling’s main contribution was to “elevate the liberal imagination [and the liberal anticommunist consensus] into the field’s equivalent of a reality principle”).  Literature, Trilling wrote, is “the human activity that takes the fullest and most precise account of variousness, possibility, complexity, and difficulty.”  And all this is accomplished in a manner assured to interest and engage readers able to connect emotionally to vivid and rich scenes of imagined human interaction.  The novel thus possesses the twin capacity to enact moral ambiguities while also attracting audiences more numerous than those who would ever read theology or philosophy or other theory.  (Ironically, perhaps, John Vernon criticized Trilling’s later writing as suffering because it offered a wholly disembodied and thus cold analysis – which is to say his criticism lacked the formal virtues of the novels he so regularly praised.)

Trilling did not believe that literature always apprehends or represents or has some unique insight into the Truth.  He understood that not all writers see themselves as working in explicit opposition to liberalism, which for him was beside the point, since any ethically interrogative novel poses a useful if implicit challenge to ideological certitude.  Nor did he believe that writers have (either on account of their separation from the wider culture or their innate madness) special access to privileged knowledge.  He simply believed that writers who attempt to offer richly plotted stories recognizable to their readers will necessarily induce critical analysis and reflection.  As Menand notes, referring to Trilling’s famous essay “On the Teaching of Modern Literature,” Trilling

…had come to believe that “art does not always tell the truth or the best kind of truth and does not always point out the right way, that it can even generate falsehood and habituate us to it, and that, on frequent occasions, it might well be subject, in the interests of autonomy, to the scrutiny of the rational intellect.”  …Humanism might be a false friend.  This willingness to follow out the logic of his own premises, to register doubts about a faith for which he is still celebrated by people who are offended by attempts to understand books as fully and completely implicated in their historical times, is the finest thing about his work.

Along with mass culture, literary criticism can too easily become a culprit in degrading the complexity proper to a well-functioning liberalism.  If the critic tries to ignore the broader culture and its history altogether (the major shortcoming Trilling saw in the work done under the name of the New Criticism), or insists on applying the strictures of scientific covering laws or a predetermined ideology, all the richness of the realist novel is erased – liberalism’s potential platitudes merely opposed by the verities of other, equally reductive theories of collective life.

In judging the contemporary relevance of Trilling’s case for high literary culture one immediately wonders if a position so intimately connected to 1950’s hyper-ideological Cold War culture makes sense given today’s arguably post-ideological times.  Here is the case made by McCarter:

The “Stalinist-colored” ideas that Trilling sought to rebuke are now tough to spot, unless you’re a Fox News contributor.  But even as some liberal excesses have receded, the book has lost none of its urgency.  For it celebrates something that is imperiled in our high-speed, always-on media culture:  imagination itself.  Trilling foresaw the threat:  “The emotional space of the human mind is large but not infinite, and perhaps it will be pre-empted by the substitutes for literature – the radio, the movies, and certain magazines,” he wrote, prophetically.  A shrinking national attention span and eroding reading habits aren’t just bad news for liberal politics.  The moral imagination excited by good books, he argues, teaches us sympathy and a respect for variety:  the waning novel leads to “our waning freedom.”

Such a position is not altogether self-evident, especially given the manner in which popular culture has been vigorously defended over the last quarter-century (or more) as enabling vernaculars both of understanding and of potential resistance to the stultifications of ideology.  To specify the point with a rather mundane question:  why are the nation’s critical faculties exercised by reading an E.M. Forster novel (a writer Trilling praised) but not by seeing A Room With a View at the cinema multiplex?  I have not yet encountered in Trilling a fully elaborated critique of mass-mediated popular culture, but I can imagine some lines of argument he might attempt.  He might first call to mind his often-articulated view that the historical distance created by great novels is required to counteract the tendency to revert to current ideological accounts – a possibility subverted by necessarily simple film or journalistic treatments that translate rich novels into the contemporary vernacular.

Trilling might also evoke the long-standing case against mass culture as inevitably inclined to conformity and utopianism, versions of which often start with the view that, organized as they are by the desire for lowest-common-denominator mass audiences and controversy shyness (since controversy can be a stigma that suppresses profits), mass cultural artifacts will inevitably lapse into intellectual quietism or outright boosterism for self-satisfying verities.  As Hersch puts the potential case, “while literature encourages critical reflection, mass culture produces a predetermined emotional and intellectual response in the reader, discouraging and atrophying the ability to think independently.  Such pseudo-literature encouraged passivity, paving the way for totalitarianism.”  Agree or disagree, it should be noted that this view of mass culture may have contributed to Trilling’s own late-in-life pessimism even regarding the capacity of literature to break through, since (again quoting Hersch), “in a conformist culture, literature presents minority views that are likely to be scorned by the majority” (99).

Even conceding Trilling’s case, which many thoughtful observers of contemporary culture would never do (Herbert Gans and Raymond Williams would stand near the head of a long line), Trilling is often attacked for his tendency to read liberalism as wholly shaped by a monolithic middle class that, if it existed in the 1950’s, certainly does not today (a point that underwrites part of Cornel West’s critique) and that, given current conditions of fragmentation, probably cannot be rearticulated.  Another common criticism is that in developing his case for interrogating liberalism Trilling only paved the way for neoconservatism (a cottage industry continues to debate whether Trilling was a closet neoconservative:  his wife Diana adamantly rejected the possibility, while Irving Kristol has claimed that Trilling was simply a neocon lacking the courage to say so in print).

Both arguments, it seems to me, miss the deeper commitment in Trilling’s work to a messy and complex humanism, and his recognition that proceeding thoughtfully as a society requires both a sense of common vision and purpose and an always acknowledged sense that ideologies cannot be permitted, in the name of such commonalities, to erase or suppress what he called the “wildness of spirit which it is still our grace to believe is the mark of full humanness.”  As Bender has argued, “Trilling’s very middle classness – by providing the perspective of distance – ends up, however paradoxically, providing contemporary American culture with a radical challenge, urging critics to find some space among nostalgia, politicized group identities, and specialized academic autonomy for the creation of a public culture” (pg. 344).

SOURCES:  Lionel Trilling, The Liberal Imagination: Essays on Literature and Society, intro. by Louis Menand (New York:  New York Review Books, 2008 [1950]); Jeremy McCarter, “He Gave Liberalism a Good Name,” Newsweek, 6 October 2008, pg. 57;  Leon Wieseltier, “The Shrinker,” New Republic, 22 October 2008, pg. 48; Louis Menand, “Regrets Only: Lionel Trilling and His Discontents,” New Yorker, 29 September 2008, pgs. 80-90; Russell Reising, “Lionel Trilling, The Liberal Imagination, and the Emergence of the Cultural Discourse of Anti-Stalinism,” boundary2 20.1 (1993): pgs. 94-124; Donald Pease, “New Americanists: Revisionist Interventions into the Canon,” boundary2 17 (1990);  Cornel West, The American Evasion of Philosophy (Madison: University of Wisconsin Press, 1989); Thomas Bender, “Lionel Trilling and American Culture,” American Quarterly 42.2 (June 1990): pgs. 324-347; John Vernon, “On Lionel Trilling,” boundary2 2.3 (Spring 1974): pgs. 625-632; Charles Hersch, “Liberalism, the Novel, and the Self:  Lionel Trilling on the Political Functions of Literature,” Polity 24.1 (Fall 1991): pgs. 91-106; Robert Genter, “’I’m Not His Father’: Lionel Trilling, Allen Ginsberg, and the Contours of Literary Modernism,” College Literature 31.2 (Spring 2004): pgs. 22-52; T. H. Adamowski, “Demoralizing Liberalism:  Lionel Trilling, Leslie Fiedler, and Norman Mailer,” University of Toronto Quarterly 73.3 (Summer 2006): pgs. 883-904.

When humanistic scholarship is not beautiful

When Pablo Picasso first exhibited his work at the young age of eighteen, the reviews were not very promising.  His friends had found him a gallery space he could use for free, but there were no funds available to properly mount the (mostly) bohemian portraits.  So the canvases were literally pinned to the walls, in rows, since there were more artworks to hang than the small gallery space allowed.  The main review in the Diario de Barcelona (dated 7 February 1900) was not kind:  Picasso was said to exhibit “an obsession with the most extreme form of modernisme… a lamentable derangement of the artistic sense and a mistaken concept of art.”

A decade or so later Picasso was first exhibited in the United States, and although he had garnered early and strong enthusiasm in France and Germany, the American reception was also underwhelming.  At the personal urging of Max Weber (the artist, not the sociologist), the photographer Alfred Stieglitz exhibited Picasso in New York City in March 1911 at his 291 Gallery.  Although the show was later described as launching Picasso’s American career, it was something of a bust at the time – only one painting sold, and that for just eleven dollars.  Gertrude Stein was an early American advocate, but when she tried to interest her friends the Cone sisters in Picasso, they said no, on the grounds that his work was repulsive cubism (actually the specific word they used was tommyrot).

Picasso provides a ready example of a broader phenomenon, one that not only subverts the public reception of twentieth century modernist art (and music and architecture) but also taints the wider scholarship of the humanities.  And I think this problem is more central to the so-called crisis of the humanities than the field’s alleged inaccessibility to wider audiences, its failure to celebrate national cultures and literary traditions, or its increasingly distant relationship to the worlds of commerce and the professions.

The challenge is that the work product of the most brilliant scholars and artists laboring in the humanities (especially over the last half century), broadly defined, is often actually unattractive, sometimes even ugly:  jarring, intentionally disorienting, inelegant, apparently self-absorbed, tedious, at times even disgusting, and understandable only within the contours of a highly specialized and technically sophisticated audience whose reach (by definition) will be small.  In contrast to some other domains of human endeavor, where increasingly rigorous technical standards of evaluation have been tightly wedded to sustaining standards of aesthetic elegance (I have in mind activities like figure skating and landscaping, and perhaps even fields of study like mathematics, where the ideal achievement seems to remain the beautiful proof), work done by humanists is now widely dismissed as having abandoned its duty to actively attract audiences.

The viscerally negative reaction that some of the leading written works of humanist thinkers induce in many very bright students is better explained by this shock of first encounter than by the works’ political agendas or by any innate inability to perceive their claims.  And the simultaneous public adoration and (for the most part) scholarly disparagement of the research published by the Joseph Ellises and David McCulloughs of the world (and one might add the Thomas Kinkades and John Williamses of painting and movie soundtrack fame) only highlights the often intentional arms-length relationship between serious humanistic heavyweights and their potential publics.

Now of course ugliness is not necessarily undesirable.  No imperative dictates that scholarship enact an aesthetic allure for its audiences, especially if it accomplishes other purposes (such as generating useful knowledge or essential insight).  Nor does the observation have universal relevance:  too much elegant and even beautiful work is produced for this to stand as any kind of generalized indictment, and in fact it is a regular tactic of the humanities’ opponents to exaggerate the critique.

Meanwhile, a number of quite defensible factors have combined to lessen the perceived importance of beauty as a goal of, say, philosophical or literary production.  One is the raging debate over the status of aesthetics itself, which was sharply problematized with the emergence of modernism and structuralism, both of which often saw surface appearance as a deceptive fraud masking underlying matrices of meaning and political signification, and beauty as a concept more evasive than helpful.  (Of course considerable recent work has accomplished something of an aesthetic turn, as the pendulum swings back toward a view of aesthetics as empowering and not simply obliterative of difference; this is the point explored by Castiglia and Castronovo.)

Jürgen Habermas argued some time ago that we are also seeing the inevitable outcomes of the specialization of knowledge accelerated by late capitalism; in contrast to an earlier Enlightenment view that the work of the scholar should culminate in findings that advanced the aspirations of Truth, Beauty, and Justice, today we inhabit a fragmented lifeworld where philosophers fixate on truth and lawyers fixate on their technically specialized concepts of justice (the wider complexities of Habermas’ views on aesthetics are elaborated by Duvenage).  One could actually trace such forces back as far as Rome; the Latin phrase pulchritudo splendor veritatis (“beauty is the splendor of truth”) is more than twenty centuries old.  Today, specialists in the humanities occasionally disavow the very idea of making their work accessible to wider literate audiences as antithetical to their projects, which they often reasonably argue obligate the use of difficult vernaculars.

Jerome McGann, the University of Virginia literary critic, has recently addressed the issue in a rather particular way.  Speaking of poetry he writes,

Poetry has become a byword for incomprehensible language.  It is our fault, scholars and educators, that poetry has acquired this reputation.  We have hyped its depth, profundity, importance…. We have some important unlearning to do.  We get into trouble, we get others into trouble, when we set either the criterion of “meaning” or the criterion of “beauty” as the measure of value for imaginative works.  Like theory, criteria are ponderous things, deadly to the imagination.  Yet these criteria pervade the discourse of culture, both inside and outside the academy.  On the contrary, poetry and art function at more fundamental, even primitive, levels.  Beauty and meaning, what the ancients called pleasure and instruction, are secondary constructions laid upon poetry by scholars who try to explain how poems work, how they arrest, astonish, reveal.

As one can see, McGann, despite his interest in the “death of beauty,” is not committed to a renunciation of poetry or criticism but rather to their reconceptualization.  And it remains important to insist on the often vital role played by enactment of the grotesque in leading a society to broader comprehension, and even, in some cases, to pleasure (Matthew Kieran:  “…even though an artwork may be constituted from repugnant materials, depict perverse scenes or people, we may be afforded pleasure by attending to them rather than being repelled by them”).

Still, even without judgmentalism, one might as well acknowledge that the intellectual currents producing increasing technical specialization across the humanistic endeavors have also necessarily come at a price:  to the uninitiated, such scholarship will seem on first approach impossibly abstruse and even repulsive.

All this is on my mind because of the recent controversy over Fort Hays State University’s termination of its debate coach, on account of a screaming, obscenity-laced argument he had with the professor who directs debate at the University of Pittsburgh after a round at the 2008 national tournament.  The YouTube video was painful to watch and of course has now been ridiculed on nationwide television as a kind of Professors Gone Wild.  The exchange was extreme and by my lights wholly uncharacteristic of the broader activity (at least with respect to its incivility, if not with respect to the passion all bring to their encounters).  But the schisms the commentary has reawakened (or brought to public view) within the debate community are actually far older than the introduction of identity politics, performance activism, and philosophical argument into the activity in the 1990’s.

Academic debate is a paradigm instance of the phenomenon whereby a humanistic practice (rooted, after all, in the ancient art of rhetoric) that once merged intellectual substance with eloquent, even beautiful, style has for the most part given way to the elevation of intellectualism over persuasion.  To the average person first encountering high-level competitive policy debate, the experience is thus now most often unpleasant, and in fact, until thoroughly initiated, many are a little repulsed by the hyperfast screaming, inadvertent spitting, and red-faced gasping characteristic of the activity.  Fortunately, for many that first encounter also conveys some sense of the incredible thinking and research skills needed to succeed in competitive debate.

Debate is an amazingly worthwhile intellectual endeavor, and even as practiced at the most competitive levels it still evokes a compelling if only occasional persuasiveness.  Its participants develop astonishing aptitudes for critical thinking and mastery of genuinely vast domains of public policy and philosophical literature, in part precisely because the activity has downplayed the conventional elements of recognizable persuasiveness.  But as with the broader humanities, this extracurricular activity pays a price for its accentuated emphasis on particular and idiosyncratic modes of delivery, an emphasis that has for the moment made it (sadly) too easy to caricature.  And so even as brilliance regularly emerges, involvement (especially at the high school point of first entry) has dwindled.

I’m dismayed that beneficial co-curricular activities like intercollegiate debate are often outright opposed by faculty members who, while abhorring debate’s hyper-specialization, would never think for a second of discounting their own scholarship for its arcane, limited, and sometimes off-putting reach.  Such a reaction is, I believe, hypocritical.  But the fact of such hypocrisy should not be read as denying the importance of a discussion about whether intellectual specialization has come too greatly at the expense of the wider attractiveness of humanistic scholarship for intellectually literate audiences.

Ugly, perhaps, but true.

SOURCES:  John Richardson, A Life of Picasso: The Prodigy, 1881-1906 (New York:  Alfred Knopf, 2007 edition); Richardson, A Life of Picasso:  The Cubist Rebel, 1907-1916 (NY:  Knopf, 2007 edition); Jerome McGann, The Scholar’s Art:  Literary Studies in a Managed World (Chicago:  University of Chicago Press, 2006); Matthew Kieran, “Aesthetic Value:  Beauty, Ugliness, and Incoherence,” Philosophy 72 (1997): 383-399; Pieter Duvenage, Habermas and Aesthetics:  The Limits of Communicative Reason (Cambridge:  Polity, 2003); Christopher Castiglia and Russ Castronovo, “A ‘Hive of Subtlety’:  Aesthetics and the End(s) of Cultural Studies,” American Literature 76.3 (September 2004): 423-435.
