
Category Archives: Intellectuals

An approaching Singularity?

When Ray Kurzweil published his bestseller, The Singularity is Near, in 2005, the skeptical response reverberated widely, but his track record of accurate predictions has been uncanny.  In the late 1980s it was Kurzweil who anticipated that a computer could soon be programmed to defeat a human opponent in chess; by 1997 IBM's Deep Blue was beating Garry Kasparov.  His prediction that within several decades humans will regularly assimilate machines to the body seemed, as Michael Skapinker recently put it, “crazy,” “except that we are already introducing machines into our bodies.  Think of pacemakers – or the procedure for Parkinson’s disease that involves inserting wires into the brain and placing a battery pack in the chest to send electric impulses through them.”

Kurzweil obviously has something more dramatic in mind than pacemakers.  The term singularity describes both the center of a black hole, where the laws of physics no longer hold, and that turning point in human history where the forward momentum of machine development (evolution?) will have accelerated so quickly as to outpace human brainpower and arguably human control.  For Kurzweil the potential implications are socially and scientifically transformational:  as Skapinker catalogs them, “We will be able to live far longer – long enough to be around for the technological revolution that will enable us to live forever.  We will be able to resist many of the diseases, such as cancer, that plague us now, and ally ourselves with digital versions of ourselves that will become increasingly more intelligent than we are.”

Kurzweil’s positions have attracted admirers and detractors.  Bill Gates seems to be an admirer (Kurzweil is “the best person I know at predicting the future of artificial intelligence”).  Others have criticized the claims as hopelessly exaggerated; Douglas Hofstadter admires elements of the work but has also said it presents something like a mix of fine food and “the craziest sort of dog excrement.”  A particular criticism is how much of Kurzweil’s claim rests on what critics call the “exponential growth fallacy.”  As Paul Davies put it in a review of The Singularity is Near:  “The key point about exponential growth is that it never lasts.  The conditions for runaway expansion are always peculiar and temporary.”  Kurzweil responds that the conditions for a computational explosion are essentially unique; as he put it in an interview:  “what we see actually in these information technologies is that the exponential growth associated with a particular paradigm… may come to an end, but that doesn’t stop the ongoing exponential progression of information technology – it just yields to another paradigm.”  Kurzweil’s projection of the trend lines has him predicting that by 2027, computers will surpass human intelligence, and by 2045 “strictly biological humans won’t be able to keep up” (qtd. in O’Keefe, pg. 62).
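A quick way to see the logic of Kurzweil's rejoinder is to treat each technological paradigm as a saturating S-curve and stack them.  The small Python sketch below is purely illustrative – the paradigm midpoints and ceilings are invented numbers, not anything drawn from Kurzweil's data – but it shows how each individual curve levels off, as Davies insists any real growth process must, while the running total keeps climbing by roughly a constant factor, which is the pattern Kurzweil calls the ongoing exponential progression of information technology:

import math

def logistic(t, midpoint, ceiling, rate=1.0):
    # One "paradigm": growth that saturates at `ceiling` around time `midpoint`.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical paradigms: each later one takes off roughly where the last
# saturates, with a ceiling an order of magnitude higher (invented numbers).
paradigms = [(10, 1e2), (25, 1e3), (40, 1e4)]

for t in range(0, 55, 5):
    total = sum(logistic(t, midpoint, ceiling) for midpoint, ceiling in paradigms)
    print(f"t={t:2d}  combined capacity = {total:10.1f}")

# Each curve flattens out (Davies's point that runaway growth never lasts),
# yet the sum keeps rising by roughly a constant factor per interval
# (Kurzweil's claim that a new paradigm takes over as the old one saturates).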

Now Kurzweil has been named chancellor of a new Singularity University, coordinated by a partnership between NASA and Google.  The idea is simultaneously bizarre and compelling.  The institute is roughly modeled on the International Space University in Strasbourg, where the idea is to bring together Big Thinkers who can, by their interdisciplinary conversations and collaboration, tackle the impossible questions.  One wonders whether the main outcome will be real research or wannabe armchair metaphysical speculation – time will tell, of course.  NASA’s role seems to be simply that it has agreed to let the “university” rent space at its Ames Research Center facility at Moffett Field in California.  The money comes from Peter Diamandis (X Prize Foundation chair), Google co-founder Larry Page, Moses Znaimer (the media impresario), and tuition revenue (the nine-week program is charging $25,000, scholarships available).  With respect to the latter the odds seem promising – in only two days 600 potential students applied.

The conceptual issues surrounding talk of a Singularity go right to the heart of the humanistic disciplines, starting with the manner in which it complicates anew and at the outset what one means by the very term human.  The Kurzweil proposition forces the issue by postulating that the exponential rate of information growth and processing capacity will finally result in a transformational break.  When one considers the capacity of human beings to stay abreast of all human knowledge that characterized, say, the 13th century, when Europe’s largest library (housed at the Sorbonne) held only 1,338 volumes, and contrasts that with the difficulty one would encounter today in simply keeping up with, say, research on William Shakespeare or Abraham Lincoln, the age-old humanistic effort to induce practices of close reading and thoughtful contemplation can seem anachronistically naive.

One interesting approach for navigating these issues is suggested in a 2007 essay by Mikhail Epstein.  Epstein suggests that the main issue for the humanities lies less in the sheer quantity of information and its potentially infinite trajectory (where, as Kurzweil has implied, an ever-expanding computational mind finally brings order to the Universe) than in the already evident mismatch between the finite human mind and the accumulated informational inheritance of humanity.  Human beings live for a short period of time, and within the limited timeline of even a well-lived life, the amount of information one can absorb and put to good use will always be easily swamped by the accumulated knowledge of the centuries.  And this is a problem, moreover, that worsens with each generation.  Epstein argues that this results in an ongoing collective trauma, first explained by Marxist theory as inducing both vertigo and alienation, then by the existentialists as an inevitability of the human condition, and now by poststructuralists who (and Epstein concedes this is an oversimplification) take reality itself “as delusional, fabricated, or infinitely deferred” (19).  Epstein sees all this as evidencing the traumatizing incapacity of humans to comprehend in any detailed way their own collective history or thought.  This postmodern sensibility is revealed in such aesthetic traditions as Russian conceptualism, “which from the 1970s to the 1990s was occupied with cliches of totalitarian ideology,” and which “surfaced in the poetry and visual art of Russian postmodernism” in ways “insistently mechanical, distant, and insensitive” (21).  There and elsewhere, “the senses are overwhelmed with signs and images, but the intellect no longer admits and processes them” (22).

The problem to which Epstein calls attention – the growing gap between a given human and the total of humanity – is not necessarily solved by the now well-established traditions that have problematized the Enlightenment sense of a sovereign human.  In Epstein’s estimation, the now-pluralized sense of the human condition brought into being by multiculturalism has only accentuated the wider social trends to particularization and hyper-specialization:  the problem is that “individuals will continue to diversify and specialize:  they will narrow their scope until the words humans and humanity have almost nothing in common” (27).

The wider work on transhumanism and cyborg bodies reflects a longer tradition of engagement with the challenge posed by technological transformation and the possibilities it presents for physical reinvention.  At its best, and in contrast to the more culturally salient cyborg fantasies enacted by Star Trek and the Terminator movies, this work refuses the utopian insistence in some of the popular accounts that technology will fully eradicate disease, environmental risk, war, and death itself.  This can be accomplished by a range of strategies, one of which is to call attention to the essentially religious impulses in the work, all in line with long-standing traditions of intellectual utopianism that imagine wholesale transformation as an object to be greatly desired.  James Carey used to refer to America’s “secular religiosity,” and in doing so followed Lewis Mumford’s critique of the nation’s “mechano-idolatry” (qtd. in Dinerstein, pg. 569).  Among the cautionary lessons of such historical contextualization is the reminder of how often thinkers like Kurzweil present their liberatory and also monstrous fantasies as inevitabilities simply to be managed in the name of human betterment.

SOURCES:  Michael Skapinker, “Humanity 2.0:  Downsides of the Upgrade,” Financial Times, 10 February 2009, pg. 11; Mikhail Epstein, “Between Humanity and Human Beings:  Information Trauma and the Evolution of the Species,” Common Knowledge 13.1 (2007), pgs. 18-32; Paul Davies, “When Computers Take Over:  What If the Current Exponential Increase in Information-Processing Power Could Continue Unabated,” Nature 440 (23 March 2006); Brian O’Keefe, “Check One:  __ The Smartest, or __ The Nuttiest Futurist on Earth,” Fortune, 14 May 2007, pgs. 60-69; Myra Seaman, “Becoming More (Than) Human:  Affective Posthumanisms, Past and Future,” Journal of Narrative Theory 37.2 (Summer 2007), pgs. 246-275; Joel Dinerstein, “Technology and Its Discontents:  On the Verge of the Posthuman,” American Quarterly (2006), pgs. 569-595.

Remembering Harold Pinter

Several of the obituaries for Harold Pinter, the Nobel Prize-winning playwright who died on Christmas Eve, see the puzzle of his life as centered on the question of how so happy a person could remain so consistently angry.  The sense of anger, or perhaps sullenness is the better word, arises mainly from the diffidence of his theatrical persona, the independence of his best characters even, as it were, from himself, and of course his increasingly assertive left-wing politics.  The image works, despite its limitations, because as he suffered in recent years from a cancer that left him gaunt he remained active and in public view, becoming something of a spectral figure.  And of course many who were not fans of his theatrical work (from the hugely controversial Birthday Party, to the critically acclaimed Caretaker, and then further forays into drama and film) mainly knew him through his forceful opposition to Bush and Blair, their Iraq policies, and the larger entanglements of American empire.

But Pinter, and this is true I think of all deeply intellectual figures, cannot be reduced to the terms provocateur or leftist.  In this case, to be sure, simple reductions are wholly inadequate to the task given his very methods of work:  one of his most abiding theatrical legacies is his insistence that dramatic characters are inevitably impenetrable – they owe us no “back story,” nor are their utterances ever finally comprehensible, any more than are our interactions in the real world of performed conversation.  And so Pinter set characters loose whom even he could not predict or control, an exercise that often meant his productions were themselves angering as audiences struggled to make sense of the unfolding stories.  As the Economist put it, “his characters rose up randomly… and then began to play taunting games with him.  They resisted him, went their own way.  There was no true or false in them.  No certainty, no verifiable past…  Accordingly, in his plays, questions went unanswered.  Remarks were not risen to.”

So what does all this say about the ends of communication?  For Pinter they are not connected to metaphysical reflection or understanding (this was Beckett’s domain; it is somehow fitting that Pinter’s last performance was in Beckett’s Krapp’s Last Tape, played from a wheelchair), but to simple self-defense, a cover for the emptiness underneath (Pinter: “cover for nakedness”), a response to loneliness where silence often does just as well as words.  And so this is both a dramatic device (the trait that makes a play Pinteresque) and a potentially angering paradox:  “Despite the contentment of his life he felt exposed to all the winds, naked and shelterless.  Only lies would protect him, and as a writer he refused to lie.  That was politicians’ work, criminal Bush or supine Blair, or the work of his critics” (Economist).  Meanwhile, the audience steps into Pinter’s worlds as if into a subway conversation; as Cox puts it, “The strangers don’t give you any idea of their backgrounds, and it’s up to the eavesdropper to decide what their relationships are, who’s telling the truth, and what they’re talking about.”

The boundaries that lie between speaking and silence are policed by timing, and Pinter once said he learned the value of a good pause from watching Jack Benny perform at the Palladium in the early 1950s.  One eulogist recalls the “legendary note” Pinter once sent to the actor Michael Hordern:  “Michael, I wrote dot, dot, dot, and you’re giving me dot, dot.”  As Siegel notes:  “It made perfect sense to Hordern.”  The shifting boundaries of communication, which in turn provide traces of the shifting relations of power in a relationship, can devolve into cruelty or competition where both players vie for one-up status even as all the rest disintegrates around them.  As his biographer, Michael Billington, put it, “Pinter has always been obsessed with the way we use language to mask primal urges.  The difference in the later plays is not simply that they move into the political arena, but they counterpoint the smokescreen of language with shocking and disturbing images of torture, punishment, and death.”  At the same time, and this is because Pinter was himself an actor and knew how to write for actors, the texts always seemed vastly simpler on paper than in performance – not because simple language suggests symbolic meaning (Pinter always resisted readings of his work that found symbolic power in this or that gesture) but because the dance of pauses and stutters and speaking ends up enacting scenes of apparently endless complexity.

For scholars of communication who attend to his work, then, Pinter poses interesting puzzles, and even at their most cryptic his plays bump up against the possibilities and limits of language.  One such riddle, illuminated in an essay by Dirk Visser, is that while most Pinter critics see his plays as revealing the failures of communication, Pinter himself refused to endorse such a reading, which he said misapprehended his efforts.  As one moves through his pieces, the realization slowly emerges (or, in some cases, arrives with the first line) that language is not finally representational of reality, nor even instrumental (where speakers say certain things to achieve certain outcomes).  Pinter helps one see how language can both stabilize and unmoor meaning, even in the same instant (this is the subject of an interesting analysis of Pinter’s drama written by Marc Silverstein), and his work both reflects and straddles the transition from modernism to postmodernism he was helping to write into existence (a point elaborated by Varun Begley).

His politics were similarly complicated, I think, a view that runs contrary to the propagandists who simply read him as a leftist traitor, and a fascist at that.  His attacks on Bush and Blair were often paired in the press with his defense of Milosevic, as if the two together implied a sort of left-wing fascism in which established liberal power is always wrong.  But his intervention in the Milosevic trial was not to defend the war criminal but to argue for a fair and defensible due process, and this insistence on the truth of a thing was at the heart of his compelling Nobel address.  Critics saw his hyperbole as itself a laughable performative contradiction (here he is, talking about the truth, while hopelessly exaggerating himself).  I saw a long interview done with Charlie Rose, replayed at Pinter’s death, where Rose’s impulse was to save Pinter from this contradiction, and from himself (paraphrasing Rose:  “Surely your criticism is not of all the people in America and Britain, but only made against particular leaders.”  “Surely you do not want to oversimplify things.”).  Pinter agreed he was not accusing everyone of war crimes but also refused to offer broader absolution, since his criticism was of a culture that allowed and enabled lies as much as of leaders who perpetuated them without consequence.  Bantering with Rose, that is to say, he refused to take the bait, and the intentional contradictions persisted.  His Nobel speech (which was videotaped for delivery because he could not travel to Stockholm and is thus available for viewing online) starts with this compelling paragraph:

In 1958 I wrote the following:  “There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false.  A thing is not necessarily either true or false; it can be both true and false.”  I believe that these assertions still make sense and do still apply to the exploration of reality through art.  So as a writer I stand by them but as a citizen I cannot.  As a citizen I must ask:  What is true?  What is false?

What was so angering for many was Pinter’s suggestion that the American leadership (and Blair too) had committed war crimes that had first to be recognized and tallied, with the perpetrators then held to account:

The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile.  The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven.  Hundreds of thousands of deaths took place throughout these countries.  Did they take place?  And are they in all cases attributable to US foreign policy?  The answer is yes they did take place and they are attributable to American foreign policy.  But you wouldn’t know it.  It never happened.  Nothing ever happened.  Even while it was happening it wasn’t happening.  It didn’t matter.  It was of no interest.  The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them.  You have to hand it to America.  It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good.  It’s a brilliant, even witty, highly successful act of hypnosis.

The argument is offensive to many (when the Nobel was announced, the conservative critic Roger Kimball said it was “not only ridiculous but repellent”), though for a playwright most attentive to the obscuring mask and the underlying, sometimes savage operations of power it conceals, it is all of a piece.  McNulty:  “But for all his vehemence and posturing, Pinter was too gifted with words and too astute a critic to be dismissed as an ideological crank.  He was also too deft a psychologist, understanding what the British psychoanalyst D. W. Winnicott meant when he wrote that ‘being weak is as aggressive as the attack of the strong on the weak’ and that the repressive denial of personal aggressiveness is perhaps even more dangerous than ranting and raving.”

As the tributes poured in, the tension between simultaneous arrogance (a writer who refuses to lie) and humility (a man who felt exposed to all the winds, naked and shelterless) arose again and again.  The London theatre critic John Peter got at this when he noted in passing how Pinter “doesn’t like being asked how he is.”  And then, in back-to-back sentences:  “A big man, with a big heart, and one who had the rare virtue of being able to laugh at himself.  Harold could be difficult, oh yes.”  David Wheeler (at the ART in Cambridge, Massachusetts):  “What I enjoyed [of my personal meeting with him] was the humility of it, and his refusal to accept the adulation of us mere mortals.”  Michael Billington:  “Pinter’s politics were driven by a deep-seated moral disgust… But Harold’s anger was balanced by a rare appetite for life and an exceptional generosity to those he trusted.”  Ireland’s Sunday Independent:  “Pinter was awkward and cussed… It was the cussedness of massive intellect and a profound sense of outrage.”

Others were more unequivocal.  David Hare:  “Yesterday when you talked about Britain’s greatest living playwright, everyone knew who you meant.  Today they don’t.  That’s all I can say.”  Joe Penhall:  Pinter was “my alpha and beta…  I will miss him and mourn him like there’s no tomorrow.”  Frank Gillen (editor of the Pinter Review):  “He created a body of work that will be performed as long as there is theater.”  Sir Michael Gambon:  “He was our God, Harold Pinter, for actors.”

Pinter’s self-selected eulogy conveys, I think, the complication – a passage from No Man’s Land – “And so I say to you, tender the dead as you would yourself be tendered, now, in what you would describe as your life.”  Gentle.  Charitable.  But also a little mocking.  A little difficult.  And finally, inconclusive.

SOURCES:  Beyond Pinter’s own voluminous work, of course – Marc Silverstein, Harold Pinter and the Language of Cultural Power (Bucknell UP, 1993); Varun Begley, Harold Pinter and the Twilight of Modernism (U Toronto P, 2005); “Harold Pinter,” Economist, 3 January 2009, pg. 69; Ed Siegel, “Harold Pinter, Dramatist of Life’s Menace, Dies,” Boston Globe, 26 December 2008, pg. A1; John Peter, “Pinter:  A Difficult But (Pause) Lovely Man Who Knew How to Apologise,” Sunday Times (London), 28 December 2008, pgs. 2-3; Gordon Cox and Timothy Gray, “Harold Pinter, 1930-2008,” Daily Variety, 29 December 2008, pg. 2; Charles McNulty, “Stilled Voices, Sardonic, Sexy:  Harold Pinter Conveyed a World of Perplexing Menace with a Vocabulary All His Own,” Los Angeles Times, 27 December 2008, pg. E1; Dirk Visser, “Communicating Torture: The Dramatic Language of Harold Pinter,” Neophilologus 80 (1996): 327-340; Matt Schudel, “Harold Pinter, 78,” Washington Post, 26 December 2008, pg. B5; Michael Billington, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 15; Esther Addley, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 14; Frank Gillen, “Farewell to an Artist, Friend,” St. Petersburg Times (Florida), 4 January 2009, pg. 4E; “Unflagging in His Principles and Unrivalled in His Genius,” Sunday Independent (Ireland), 28 December 2008; Dominic Dromgoole, “In the Shadow of a Giant,” Sunday Times (London), 28 December 2008, pgs. 1-2; Mel Gussow and Ben Brantley, “Harold Pinter, Whose Silences Redefined Drama, Dies at 78,” New York Times, 26 December 2008, pg. A1.

Counting the humanities

Last week the American Academy of Arts and Sciences released a long-anticipated prototype of its Humanities Indicators project.  The initiative – organized a decade ago by the American Council of Learned Societies, the National Endowment for the Humanities, and the National Humanities Alliance, and funded by the Hewlett and Mellon Foundations – responds to the accumulating sense that (and I guess this is ironic) the humanities haven’t paid enough attention to quantifying their impact and history.  As Roger Geiger notes, “gathering statistics on the humanities might appear to be an unhumanistic way to gain understanding of its current state of affairs.”  But in recognition of the value of a fuller accounting, the HI project was proposed as a counterpart to the Science and Engineering Indicators (done biennially by the National Science Board), which have helped bring into focus the now widely recognized production crisis in the so-called STEM disciplines.

The Chronicle of Higher Education summarized the interesting findings this way (noting that these were their extrapolations; the Indicators simply present data without a narrative overlay apart from some attached essays):

In recent years, women have pulled even with men in terms of the number of graduate humanities degrees they earn but still lag at the tenure-track job level.  The absolute number of undergraduate humanities degrees granted annually, which hit bottom in the mid-1980s, has been climbing again.  But so have degrees in all fields, so the humanities’ share of all degrees granted in 2004 was a little less than half of what it was in the late 1960s.

This published effort is just a first step, and the reported data mainly repackage figures gleaned from other sources (such as the Department of Education and the U.S. Bureau of Labor Statistics).  Information relating to community colleges is sparse for now.  Considerably more original data were generated by a 2007-2008 survey and will be added to the website in coming months.

The information contained in the tables and charts confirms trends long suspected and more anecdotally reported at the associational level:  the share of credit hours, majors, and faculty hires connected to the humanistic disciplines has fallen dramatically.  The percentage of faculty hired into tenure lines, which dropped most significantly in the late 1980s and 1990s, is still dropping, though more modestly, today.  Perhaps most telling, if a culture can be said to invest in what it values, is the statistic that in 2006, “spending on humanities research added up to less than half a percent of the total devoted to science and engineering research” (Howard).  As Brinkley notes, in 2007, “NEH funding… was approximately $138.3 million – 0.5 percent of NIH funding and 3 percent of NSF… [And] when adjusted for inflation, the NEH budget today is roughly a third of what it was thirty years ago.”  Even worse:  “[T]his dismal picture exaggerates the level of support for humanistic research, which is only a little over 13% of the NEH program budget, or about $15.9 million.  The rest of the NEH budget goes to a wide range of worthy activities.  The largest single outlay is operating grants for state humanities councils, which disburse their modest funds mostly for public programs and support of local institutions.”  And from private foundations, “only 2.1% of foundation giving in 2002 went to humanities activities (most of it to nonacademic activities), a 16% relative decline since 1992.”  Meanwhile, university presses are in trouble.  Libraries are struggling to sustain holdings growth.

Other information suggests interesting questions.  For instance:  why did the national production of humanities graduates climb so sharply in the 1960s (doubling between 1961 and 1966 alone)?  Geiger argues the bubble was a product of circa-1960s disillusionment with the corporate world, energy in the humanistic disciplines, the fact that a humanities degree often provided employment entree for women (especially to careers in education), and a booming economy that made jobs plentiful regardless of one’s academic training.  After 1972, Geiger argues, all these trends flipped:  the disciplines became embroiled in theoretical disputes and thus less intellectually compelling for new students (some were attracted by Big Theory, but arguably more were antagonized), universities themselves became the target of disillusion, business schools expanded fast and became a more urgent source of competition, and so on.  Today, although enrollments are booming across the board in American universities, the humanities remain stable at roughly 8% of B.A. degrees granted, which may mean the collapse has reached bottom.

One interesting suggestion is posed by David Laurence, who reads the Indicators as proving that the nation can be said to have produced a “humanities workforce,” which in turn “makes more readily apparent how the functioning of key cultural institutions and significant sectors of the national economy depends on the continued development and reproduction of humanistic talent and expertise.”  This infrastructure includes (as listed by Laurence) schools and teachers, libraries, clergy, writers, editors, museums, arts institutions, theater and music, publishing, and entertainment and news (where the latter involve the production of books, magazines, films, TV, radio, and Internet content).  And this gives rise to some potential confidence:  humanities programs continue to attract brilliant students, good scholarship is still produced, and the “‘rising generation’ of humanities scholars is eager to engage directly with publics and communities” (Ellison), implying that the public humanities may grow further.  An outreach focus for humanists is a double-edged sword, of course, but it might enhance the poor standing that university humanities programs have, for example, with state funding councils.

SOURCES:  Jennifer Howard, “First National Picture of Trends in the Humanities is Unveiled,” Chronicle of Higher Education, 16 January 2009, pg. A8; Jennifer Howard, “Early Findings From Humanities-Indicators Project are Unveiled at Montreal Meeting,” Chronicle of Higher Education, 18 May 2007, pg. A12; Essays attached to the AAAS Humanities Indicators website, including Roger Geiger, “Taking the Pulse of the Humanities: Higher Education in the Humanities Indicators Project,” David Laurence, “In Progress: The Idea of a Humanities Workforce,” Alan Brinkley, “The Landscape of Humanities Research and Funding,” and Julie Ellison, “This American Life:  How Are the Humanities Public?”

When social science is painful

The latest issue of the Annals of the American Academy of Political and Social Science (#621, January 2009) is wholly focused on the 1965 report by Daniel Patrick Moynihan on the status of black families, “the most famous piece of social scientific analysis never published” (Massey and Sampson, pg. 6).  The report arose out of Moynihan’s experience in the Kennedy and Johnson administrations working on poverty policy; his small group of underlings included Ralph Nader, working in his first Washington job.  Inspired by Stanley Elkins’ work on slavery (his book Slavery argued that the institution set in motion a still-continuing tendency to black economic and social dependency), Moynihan’s group examined the ways in which welfare policy was, as he saw it, perpetuating single-parent households led mainly by women, at the expense of social stability and racial progress.  [In what follows I am relying almost totally on the full set of essays appearing in the January 2009 AAPSS, and the pagination references that follow are to those articles.]

Moynihan was writing in the immediate aftermath of passage of the 1964 Civil Rights Act, and a principal theme of the report is that the eradication of legal segregation would not be enough to assure racial equality given larger structural forces at work.  Pressures on the black family had produced a state of crisis, a “tangle of pathology” that was reinforcing patterns of African-American poverty, he wrote.  Moynihan’s larger purpose was to recommend massive federal interventions, a goal subverted, unfortunately, by the report’s rhetorical overreaching (e.g., matriarchy in black families was said to prevent black men from fulfilling “the very essence of the male animal from the bantam rooster to the four star general… to strut”).  The solution, in his view, was to be found in a major federal jobs program for African American men.

The report was leaked to the press and was, by and large, immediately condemned, first because it seemed to provide aid and comfort to racists in its emphasis on out-of-wedlock births as a demographic pathology, and second because it seemed to many readers a classic case of “blaming the victim.”  In fact, the term “blaming the victim” may have its genesis in William Ryan’s use of the phrase to critique Moynihan in the Nation.  I think it likely that the cultural salience of these critiques was later reinforced by a memo Moynihan wrote to Richard Nixon advocating the idea that “the issue of race could benefit from a period of ‘benign neglect,’” a locution he came to regret since that one soundbite came to dominate the actual point of the memo, better encapsulated in this sentence:  “We need a period in which Negro progress continues and racial rhetoric fades” (contrary to the impression given by the benign neglect comment, he was actually trying to be critical of the hot and racially charged rhetoric coming from Vice President Agnew).  Moynihan’s report proved divisive in the African American community, endorsed on release by Roy Wilkins and Martin Luther King, Jr., but condemned by James Farmer.  By the time the report itself was more widely read its reception had been distorted by the press frame, and a counter-tradition of research, celebrating the distinctiveness of black community formation, was well underway.

Read by today’s lights, the Moynihan report has in some respects been confirmed, and its critics have also been partly vindicated.  The essays in this special issue offer many defenses.  Douglas Massey (the Princeton sociologist) and Robert Sampson (chair of sociology at Harvard), writing in the introduction (pgs. 7-8), defend the report against the accusation of sexism:

Although references to matriarchy, pathological families, and strutting roosters are jarring to the contemporary ear, we must remember the times and context.  Moynihan was writing in the prefeminist era and producing an internal memo whose purpose was to attract attention to a critical national issue.  While his language is certainly sexist by today’s standards, it was nonetheless successful in getting the attention of one particular male chauvinist, President Johnson, who drew heavily on the Moynihan Report for his celebrated speech at Howard University on June 4.

Ironically, though, the negative reactions to the leaked report (reactions skewed by the fact that the report itself was not publicly circulated, only critical synopses of it) led Johnson himself to disavow it, and no major jobs program for black men was forthcoming as part of Great Society legislative action.  Moynihan left government soon afterward and found the national coverage, much of which attacked him as a bigot, scarring and unwarranted given the overall argumentative arc of the report.  Only when serious riots erupted again in 1968 did jobs get back on the agenda, but the watered-down affirmative action programs that resulted failed to transform the economic scene for racial minorities while proving a lightning rod for conservative opponents (Massey and Sampson, 10).  The main policy change relating to black men since then has been sharp increases in rates of incarceration, not rises in employment or economic stability, a phenomenon that is the focus of an essay by Bruce Western (Harvard) and Christopher Wildeman (University of Michigan).

Several of the contributors to the special issue mainly write to insist that Moynihan has been vindicated by history.  His simple thesis – that pressures tending to disemploy the men of a subgroup will in turn fragment families and produce higher incidences of out-of-wedlock birth and divorce, mainly at the expense of women and children – is explicitly defended as having been confirmed by the newest data.  James Q. Wilson writes that the criticism the report received at the time “reflects either an unwillingness to read the report or an unwillingness to think about it in a serious way” (29).  Harry Holzer, an Urban Institute senior fellow, argues that the trends in black male unemployment have only intensified since the 1960s, thereby reaffirming the prescience of Moynihan’s position and strengthening the case for a dramatic federal response (for instance, Holzer argues that without larger educational investments, bleak perceptions of work opportunities will themselves become barriers to cultural transformation).  The premise of the essay by Ron Haskins (of the Brookings Institution) is announced by its title:  “Moynihan Was Right:  Now What?” (281-314).

Others argue that the Moynihan claims, which relied on the assumption that only traditional family arrangements can suitably anchor culture, ignore the vitality of alternative family forms that have become more common in the forty years since.  Frank Furstenberg notes that “Moynihan failed to see that the changes taking place in low-income black families were also happening, albeit at a slower pace, among lower-income families more generally” (95).  For instance, rates of single parenting among lower-income blacks have dropped while increasing among lower-income whites.  Linda Burton (Duke) and Belinda Tucker (UCLA) reiterate the criticism that the behavior of young women of color should not be pathologized, but is better understood as a set of rational responses to the conditions of cultural uncertainty that pervade poorer communities (132-148):  “Unlike what the Moynihan Report suggested, we do not see low-income African American women’s trends in marriage and romantic unions as pathologically out of line with the growing numbers of unmarried women and single mothers across all groups in contemporary American culture.  We are hopeful that the uncertainty that is the foundation of romantic relationships today will reinforce the adaptive skills that have sustained African American women and their families across time” (144).  Kathryn Edin (Harvard) and her coauthors criticize Moynihan’s work for diverting research away from actual attention to the conditions of black fatherhood, which in turn has meant that so-called “hit-and-run” fathers could be criticized in ways far out of proportion to their actual incidence in urban populations (149-177).

The lessons drawn by the AAPSS commentators from all this for the practice of academic research are interesting.  One, drawn by Massey, relates to the “chilling effect on social science over the next two decades [caused by the Moynihan report and its reception in the media].  Sociologists avoided studying controversial issues related to race, culture, and intelligence, and those who insisted on investigating such unpopular notions generally encountered resistance and ostracism” (qtd. from a 1995 review in Massey and Sampson, 12).  Because of this, and because of the counter-tendency among liberal and progressive scholars to celebrate single parenting and applaud the resilience of children raised in single-parent households, conservatives were given an ideological opening to beat the drum of media reports about welfare fraud, drug usage rates, and violence, and to pathologize black men – an outcome Massey and Sampson argue led to a conservative rhetoric of “moralistic hectoring and cheap sermonizing to individuals (‘Just say no!’).”  Not until William Julius Wilson’s The Truly Disadvantaged (1987) did the scholarly tide shift back to a publicly articulated case for social interventions more in tune with Moynihan’s original proposals – writing in the symposium, Wilson agrees with that judgment and traces what he argues has been a social science abandonment of structural explanations for the emergence of poverty cultures.  The good news is arguably that “social scientists have never been in such a good position to document and analyze various elements in the ‘tangle of pathology’ he hypothesized” (Massey and Sampson, pg. 19).

The history of the report also calls attention to the limits of government action, a question with which Moynihan is said to have struggled for his entire career in public service.  Even accepting the critiques of family disintegration leaves one to ask what role the government might play in stabilizing family formations, a question now controversial on many fronts.  James Q. Wilson notes that welfare reform is more likely to shift patterns of work than patterns of family, since, e.g.,  bureaucrats can more reasonably ask welfare recipients to apply for a job than for a marriage license (32-33).  Moynihan’s answer was that the government’s best chances were to provide indirect inducements to family formation, mainly in the form of income guarantees (of the sort finally enacted in the Earned Income Tax Credit).  But asked at the end of his career about the role of government, Moynihan replied:  “If you think a government program can restore marriage, you know more about government than I do” (qtd. in Wilson, 33).

Moynihan was an intensely interesting intellectual who thrived, despite his peculiarities, in the United States Senate (four terms from New York before retiring and blessing Hillary Clinton’s run for his seat), as he had thrived earlier serving as Nixon’s ambassador to India and Ford’s representative at the United Nations.  At his death in 2003, a tribute in Time magazine said that “Moynihan stood out because of his insistence on intellectual honesty and his unwillingness to walk away from a looming debate, no matter how messy it promised to be.  Moynihan offered challenging, groundbreaking – sometimes even successful – solutions to perennial public policy dilemmas, including welfare and racism.  This is the sort of intellectual stubbornness that rarely makes an appearance in Washington today” (Jessica Reaves, Time, March 27, 2003).  His willingness to defend his views even when deeply unpopular gave him a thick skin and the discipline to write big books during Senate recesses while his colleagues were fundraising.

Moynihan’s intellectualism often put him at odds with Democratic orthodoxy, and maybe on the wrong side of the issues – he opposed the Clinton effort to produce a national health insurance system, publicly opposed partial-birth abortion (“too close to infanticide”), and was famously complicit in pushing the American party line at the United Nations, a role much criticized as enabling the slaughter of perhaps 200,000 victims killed in the aftermath of Indonesia’s takeover of East Timor.  But he also held a range of positions that reassured his mainly liberal and working-class base:  he opposed the death penalty, the Defense of Marriage Act, and NAFTA, and he was a famous champion of reducing the government’s proclivity to classify everything as top secret.

But Daniel Patrick Moynihan will be forever linked to his first and most (in)famous foray into the nation’s conversation on race, which simultaneously revealed the possibilities for thoughtful social science to shape public policy and the risks of framing such research in language designed to make it dramatic and attention-getting amid a glutted sea of white papers and task force reports that typically come and go without any serious notice.

Claude Levi-Strauss at 100

On Friday, November 28, Claude Levi-Strauss turned 100, an event that set loose a series of worldwide commemorations.  As one might expect, an intellectual of such enormous influence provoked competing reactions.  In London, the Guardian dismissed Levi-Strauss (“the intricacies of the structural anthropology he propounded now seem dated… [and] he has become the celebrated object of a cult”) while the Independent celebrated him (“his work, after going out of fashion several times, is more alive than ever”), both judgments issued on the same day.  French President Nicolas Sarkozy paid a personal evening visit to the Levi-Strauss apartments, and the museum he inspired in Paris, the Musee du Quai Branly, gave away free admission for a day in his honor (that day 100 intellectuals gave short addresses at the museum or read excerpts from his writings).  ARTE, the French-German cultural TV channel, dedicated the day to Levi-Strauss, playing documentaries and interviews and films centered on his lifework, and the New York Times reported that “centenary celebrations were being held in at least 25 countries.”

Levi-Strauss has not, for obvious reasons, made many public appearances of late.  His last was at the opening of the Quai Branly in 2006; not only did he inspire the museum intellectually, but many of the exhibit objects were donated by him, the accumulation of his own worldwide life of travels.  In a 2005 interview with Le Monde, he expressed some pessimism about the planet:  “The world I knew and loved had 2.5 billion people in it.  The terrifying prospect of a population of 9 billion has plunged the human race into a system of self-poisoning.”  In my own field of communication studies, I am not aware that he is widely read or remembered at all, even in seminars on mythology and narrative (two fields in which he made significant contributions), probably an unfortunate byproduct of Jacques Derrida’s sharp attack in two essays that are widely read by rhetorical scholars (“Structure, Sign and Play in the Discourse of the Human Sciences,” in Writing and Difference, Routledge, 1978, and “The Violence of the Letter:  From Levi-Strauss to Rousseau,” in Of Grammatology, Johns Hopkins UP, 1976).

For all I know Levi-Strauss remains must-reading in anthropology, the discipline he did so much to shape as an intellectual giant of the twentieth century.  But his wider absence from the larger humanities (by which I mean simply the extent to which he is read or cited across the disciplines) is, I think, unfortunate.  No intellectual of his longevity and productivity will leave a legacy as pure as the driven snow.  His campaign against admitting women to the Academie Francaise (he argued for what he saw as long tradition) was wrong and rightly alienating.  Yet his attempt to systematize the universal laws of mythology, which took the form of what was for some an off-putting four-volume work, remains a brilliant and densely rich analysis of the underlying logics of mythological meaning-making.

But the trajectory of structuralism, and in turn poststructuralism and contemporary French social thought (including the research tradition shaped by Jacques Lacan, who founded his account of the Symbolic on Levi-Strauss’ work on kinship and the gift), cannot be understood without engaging his work and his exchanges with Marxist dialectics, Malinowski, Roland Barthes, Jean-Paul Sartre, Paul Ricoeur, and many others who respected his work even when they profoundly disagreed with it.  Lacan’s first 1964 seminar on “The Four Fundamental Concepts of Psychoanalysis” virtually begins by raising a Levi-Strauss-inspired question (Lacan wonders whether the idea of the pensée sauvage is itself capacious enough to account for the unconscious as such).  Today it is Foucault who is fondly remembered for pushing back against Sartre’s temporally-based dialectical theory, but at the time Levi-Strauss played as significant a role (and his essays, which take on Sartre in part by deconstructing the binary distinction between diachronic and synchronic time, remain models of intellectual engagement).

Levi-Strauss has been a key advocate for a number of important ideas that have now become accepted as among the conventional wisdoms of social theory, and that absent his articulate forcefulness might still have to be fought for today:  the idea that Saussure and Jakobson’s work on language should be brought to bear on questions relating to social structure, the thought that comprehending the relationship of ideas within a culture is more important to intercultural understanding than anthropological tourism, the sense that cultural difference cannot be reduced to the caricature that modern peoples are somehow smarter or wiser than ancient ones or that modern investigators should inevitably disparage the “primitive,” the insight that the relationship between things can matter more than the thing-in-itself, and many more.

But the case for reading Levi-Strauss rests on grounds that go beyond his interesting biography (including his wartime exile from the Nazis at the New School for Social Research in New York and his public longevity as a national intellectual upon his return to France), his historical role in old philosophical disputes, or even the sheer eloquence of his writing (Tristes tropiques, written in 1955, remains a lovely piece of work and a cleverly structured narrative argument).  It is, I think, a mistake to dismiss Levi-Strauss’ work as presuming to offer a science of myth – the best point of entry here is the set of lectures he delivered in English for the Canadian Broadcasting Corporation in the late 1970s (published as Myth and Meaning in 1978), where his overview reveals, as if it were necessary, the layers of ambiguity and interpretation that always protected Levi-Strauss’ work from easy reductionism.

And the exchanges with Derrida and Sartre merit a return as well.  There is an impulse, insidious in my view, to judge Derrida’s claims as a definitive refutation when they signal a larger effort to push the logic of structuralism and modernism to its limits.  The post in poststructuralism is not an erasure or even a transcendence but a thinking-through-the-implications-of maneuver that lays bare both the strengths and limits of the tradition begun by Saussure.  Levi-Strauss developed a still-powerful account of how linguistic binaries structure human action, but he was also deeply self-reflective as he interrogated the damage done to anthropological theory by its own reversion to binary logics (such as the elevation of literacy over orality, or advanced over primitive societies).  Paul Ricoeur, and Derrida himself, saw the debate with Levi-Strauss as a definitive refutation (Ricoeur, writing in The Conflict of Interpretations, set Derrida’s “school of suspicion” against Levi-Strauss’ “school of reminiscence”).  But the insights generated by principles that Derrida (and Levi-Strauss) rightly understood as provisional and even contradictory remain powerful, perhaps even more so at a time when poststructuralist logics seem to be running their course.

None of this denies the real objections raised against Levi-Strauss’ version of structuralism – its methodological conservatism, or its tendency (offered in the name of scholarly description) to valorize or render invisible power arrangements in which one part of a binary obliterates or represses its opposite.  But Derridean thought is enriched, not subverted, by putting it back into conversation with Levi-Strauss.  To take just one example, Levi-Strauss’ work on myth usefully presages Derrida’s own insights on the limits of inferring a “final” or “original” meaning.  The elements of myths circulate within the constraints of social structure to create endless transformations and possibilities of meaning, best understood not through the logics of reference or mimesis but through logics of context and relationship.  And the case Levi-Strauss articulated against phenomenology still holds up pretty well in the context of its reemergence in some quarters (in communication studies, phenomenological approaches are increasingly advocated as a way forward in argumentation theory and cinema studies).  The first volume of Structural Anthropology remains one of the most important manifestos for structuralism.

From the vantage point of communication, one of the intriguing dimensions of Levi-Strauss’ work is his claim that modern societies are plagued by an excess of communication.  When first articulated, his concern related to the risk that too much cross-cultural exchange would obliterate differences – a worry then current in the work of scholars like Herbert Schiller and in the circa-1970s view that the allures of America’s entertainment culture were producing a one-way destruction of other societies.  But Levi-Strauss means something more too, and his argument is made intriguing in the light of his lifelong commitment to the idea that the deep grammars of cultural mythologies are universal.  For it is the interplay of universally shared experience and local variability that expresses the real genius of the human condition, and the twin threats of global groupthink and overcrowding are still not quite fully apprehended, even within the terms of the poststructuralist conversations he did so much to shape.

Michel Foucault, writing in The Order of Things, says of Levi-Strauss that his work is motivated “by a perpetual principle of anxiety, of setting in question, of criticism and contestation of everything that could seem, in other respects, as taken for granted.”  Foucault’s sentiment is complicated and not intended, as I read it, as a simple compliment.  But it points to an aspect of a century-long life of work that should continue to attract interest.

SOURCES:  “In praise of Claude Levi-Strauss,” (London) Guardian, 29 November 2008, pg. 44; John Lichfield, “Grand chieftain of anthropology lives to see his centenary,” (London) Independent, 29 November 2008, pg. 38; Steven Erlanger, “100th birthday tributes pour in for Levi-Strauss,” New York Times, 29 November 2008, pg. C1; Albert Doja, “The advent of heroic anthropology in the history of ideas,” Journal of the History of Ideas (2005): 633-650; Lena Petrovic, “Remembering and disremembering: Derrida’s reading of Levi-Strauss,” Facta Universitatis 3.1 (2004): 87-96.

William Eggleston invented color

The Whitney in New York has just opened a major retrospective of William Eggleston’s long career as an innovator in photography (William Eggleston:  Democratic Camera, Photographs and Video, 1961-2008), which perhaps brings full circle a journey that has been mainly centered in the American South and the Mississippi Delta (Memphis most of all) but that, beginning with a 1976 exhibit at the Museum of Modern Art (MoMA), has had galvanizing force for the wider arts.

Although MoMA had exhibited color photography once before and had shown photos in its galleries as far back as 1932, its decision to showcase Eggleston and his color-saturated pictures in 1976 was exceptionally controversial.  At the time the New York Times said it was “the most hated show of the year.”  “Critics didn’t just dislike it; they were outraged.  Much the way viewers were aghast when Manet exhibited Olympia, a portrait of a prostitute, many in the art community couldn’t figure out why Eggleston was shooting in color” (Belcove).  Eggleston’s subjects can seem totally mundane, and his public refusal to illuminate how his main works are staged proved infuriating (in fact Eggleston has long insisted that he never poses his subjects, arguing, astonishingly, that these are in every case single-shot images and that either he gets the shot or moves on to the next without regret).  Prior to Eggleston, art photography was most often black-and-white.  Thus, for students of the art and practice of photography, and given his enormous visual influence, one can say without complete hyperbole that William Eggleston invented color.

Well, maybe that is a little hyperbolic.  After all, those seeking the origins of color might better retreat to the “Cambrian Explosion” 543 million years ago, when the diversification of species was sparked by the evolutionary development of vision; in that time, “color first arose to help determine who ate dinner and who ended up on the plate” (Finlay 389).  Or one might look to the late Cretaceous period – prior to that, “plants did not produce flowers and colored leaves.”  Further elaborating this perspective, Finlay (391) writes that:

As primates gained superior color vision from the Paleocene to the Oligocene (65 to 38 million years ago), the world for the first time blossomed into a range of hues.  At the same time, other creatures and plants also evolved and settled into ecological niches.  Flowering plants (angiosperms) radiated, developing colored buds and fruits; vivid insects and birds colonized the plants, attracted by their tints and serving to disperse their pollen and seeds.  Plants, insects, birds, and primates evolved in tandem, with color playing a crucial role in the survival and proliferation of each.  The heart of these developments lay in upland tropical Africa, where lack of cloud cover and therefore greater luminance resulted in selective evolutionary pressure for intense coloration.

It states the obvious, but I’ll say it anyway:  colors, along with the human capacity to recognize and distinguish among them, transform human experience.  Part of the reason Aristotle so famously preferred drawing to color is that the latter can too easily overwhelm one’s critical capacities (for him this was evidenced by the fact that a viewer in the presence of rich color has to step back, color blurring at close range, and in the process the necessary distancing will inevitably divert audiences from attending to the artistic details present in good drawing).  Plato had disdained color too, thinking it merely an ornamental, ephemeral, and surface distraction, a view oddly recalled later by Augustine, who warned against the threat posed by the “queen of colors,” who “works by a seductive and dangerous sweetness to season the life of those who blindly love the world” (qtd. in Finlay, 400).  It was only in the 12th century that Christians came fully around to color, at about the time stained glass technology was undergoing rapid refinement; suddenly colored light was seen as evoking the Divine and True Light of God.

But for centuries color was dismissed as feminine and theoretically disparaged since it “is impossible to grasp and evanescent in thought; it transcends language, clouds the intellect, and evades categorization” (Finlay, 401).  Color was thus seen as radically irrational by the thinking and professing classes – Cato the Elder said that colores floridi (florid colors) were foreign to republican virtue – all of this an interesting contrast to the Egyptian kings who saturated their tombs with gorgeous coloration and to the Greeks who ignored Aristotle’s warnings and painted their Parthenon bright blue and their heroic marble sculptures right down to the red pupils we would today prefer to digitize out since they apparently evoke the idea of Satanic possession.

The history of color is regularly bifurcated by scholars into work emphasizing chromophilia (the love of color) and chromophobia, which by contrast has often reflected an elite view that color is garish and low class.  Wittgenstein concluded that the radically subjective response to color could never be specified in a manner adequate to philosophy:  “there is merely an inability to bring the concepts into some kind of order.  We stand there like the ox in front of the newly-painted stall door” (qtd. in Finlay, 383).

In the context of early film production and the industry’s emerging use of color and then Technicolor, colors were seen by some as a “threat to classical standards of legibility and coherence,” necessitating close control:

For instance, filmmakers monitored compositions for unwanted color contrasts, sometimes termed visual magnets, that might vie for attention with the narratively salient details of a scene.  Within a few years the body of conventions for regulating color’s function as a spatial cue had been widely adopted.  The most general guideline was that background information should be carried by cool colors of low saturation, leaving warm, saturated hues for the foreground.  Narrative interest should coincide with the point of greatest color contrast. (Higgins)

The ongoing power of such conventions has recently led Brian Price, a film scholar at Oklahoma State University, to argue that the imposition of saturated and abstracted color in recent films by Claire Denis and Hou Hsiao-Hsien exemplifies a resistive threat to globalized filmmaking and its industrial grip on the world’s imagination.

A paradox in Eggleston’s work is that although his subjects – Elvis’ Graceland, southern strip malls, the run-down architecture produced as often by the simple ravages of time and nature as by neglect – are dated and immediately evocative of a completely different though not wholly lost and variously tempoed time, his photographs seem timeless.  Like the man himself, described by one journalist as “out of place and out of time,” Eggleston captures elements of modern life that persist, and his attention to the formal properties of color and framing makes his work arresting even for those uninterested in or unimpressed by the odd assemblages of southern culture that constitute his most interesting subjects.  This paradox, in turn, can produce a sense in the viewer of vague dread, as if the contradictions inhabited by the idea of serendipitous composition reveal dangers of which we are customarily unaware.  At the same time, because Eggleston has never seemed interested in documentary reportage and has instead defaulted to literal photographs that accentuate the commonplace, he “belongs to that rare and disappearing breed, the instinctive artist who seems to see into and beyond what we refer to as the ‘everyday’” (O’Hagan).

Technically speaking, Eggleston beat others to the punch because his personal wealth enabled him to produce very high-quality and expensive prints of his best work; another benefit of this wealth may be that, as Juergen Teller has put it, “he has never had the pressure of being commercial.”  The dye-transfer print process he has used since the 1960’s (Eggleston resists the shift to the digital camera and image manipulation, simply noting that it is an instrument he does not know how to play) was borrowed from high-end advertising.  And although rejected early on and in some quarters – the conservative art critic Hilton Kramer notoriously described his 1976 New York exhibit as “perfectly banal” – he has been honored late in life as a prophet in his own time, with a lifetime achievement award from the Institute of Contemporary Photography, another from Getty, and honors from the National Arts Club and others too numerous to mention.  Eggleston seems immune to the critiques, whether hostile or friendly, a fact reflected in the details of his mercurial and sometimes weird personal life but also in his refusal to talk talk talk about his work:  “A picture is what it is, and I’ve never noticed that it helps to talk about them, or answer specific questions about them, much less volunteer information in words.  It wouldn’t make any sense to explain them.  Kind of diminishes them.”

The distinctive Eggleston aesthetic has influenced David Lynch (readily evident in his Blue Velvet), Gus Van Sant (e.g., Elephant, an explicit homage), Sofia Coppola (The Virgin Suicides; “it was the beauty of banal details that was inspirational”), the band Primal Scream (his “Troubled Waters” forms the cover art for Give Out But Don’t Give Up), and many others.  David Byrne is a friend and Eudora Welty was a fan.  Curiously, despite his influence on avant-garde cinema and his own efforts at videography, Eggleston professes faint interest in film, although he is said to like Hitchcock.

Finlay has noted that “Brilliant color was rare in the premodern world.  An individual watching color television, strolling through a supermarket, or examining a box of crayons sees a larger number of bright, saturated hues in a few moments than did most persons in a traditional society in a lifetime” (398).  What was true of premodernity was also true of photography wings in the world’s major art museums.  Until William Eggleston.

SOURCES:  Holland Cotter, “Old South Meets New, in Living Color,” New York Times, 6 November 2008; Sean O’Hagan, “Out of the Ordinary,” The (London) Observer, 25 July 2004; Rebecca Bengal, “Southern Gothic: William Eggleston is Even More Colorful than His Groundbreaking Photographs,” New York Magazine, 2 November 2008; Julie Belcove, “William Eggleston,” W Magazine, November 2008; Scott Higgins, “Color Accents and Spatial Itineraries,” Velvet Light Trap, no. 62 (Fall 2008): 68-70; Brian Price, “Color, the Formless, and Cinematic Eros,” Framework 47.1 (Spring 2006): 22-35; Jacqueline Lichtenstein, The Eloquence of Color:  Rhetoric and Painting in the French Classical Age, trans. Emily McVarish (Berkeley:  University of California Press, 1993); Robert Finlay, “Weaving the Rainbow:  Visions of Color in World History,” Journal of World History 18.4 (2007): 383-431; Christopher Phillips, “The Judgment Seat of Photography,” October 22 (October 1982): 27-63.

The other William Ayers

Driving to work yesterday I heard one of Atlanta’s conservative talk radio hosts announce with a mixture of pride and wistfulness that, as a concession to Barack Obama’s victory, he had thrown out all his “research” on William Ayers, whose violent past he had been preaching about for months.  Now that Obama has been chosen by the voters to lead the nation, the talk show host noted, it was time to move past Ayers and Jeremiah Wright and on to larger topics.  At the same time, though, while Sarah Palin has been insisting that the association (however modest) still matters, Ayers himself has emerged into the public spotlight, having given some recent interviews (he was on Good Morning America the other morning) and published some op-ed pieces.

As the election unfolded, only passing notice was typically given to the other/older William Ayers, the University of Illinois (Chicago) professor of education.  Now that November 4th has passed, and accepting for the moment the impulse to bracket his past to better understand his influence today as an advocate for educational reform, I’ve been reading some of his work on social justice pedagogy.  It was this work, actually, that led him to cross paths with Obama, since their mutual interest in school reform led both to agree to serve on the same Chicago board of directors, an association that obviously led Obama’s critics to question the wisdom of his political and intellectual alliances.

Ayers has a way of getting right to the point, a trait much on display in the recent interviews but one that also makes him an interesting writer.  One book review he authored starts:  “Drawing on traditional methods and straightforward approaches… Vonovskis fails to add anything new to the story of the origins of Head Start despite constant and irritating assertions to the contrary.”  And an essay co-authored with Michael Klonsky begins, “Each day, children in Chicago are cheated out of a challenging, meaningful, or even an adequate education…  Despite the well-publicized crime rate in Chicago’s poor neighborhoods, the greatest robbery is not in the streets, but in the schools.”  But Ayers’ purpose is not just attention-grabbing or op-ed-style hyperbole, for he quickly moves to back up such provocative claims by presenting truly appalling data about urban education.  The Chicago research, which appeared in 1994, noted, for instance, that as of that year “reading scores in almost half of Chicago’s schools are in the lowest 1% of the nation.”

Ayers’ work in Chicago does partly mirror the logic of his anti-war activism, which was animated by the view that one must meet criminal negligence with a proportionally urgent response (this was the argument he made on GMA in justifying his participation in anti-Vietnam War insurgency; his view was that what he saw as America’s murderous policies in Southeast Asia were so monstrous that they demanded even the use of violent opposition).  In the context of education reform, this has led to the mobilization of what might best be considered a social movement, organized to provide tangible opposition to schooling bureaucracies.  And this, in turn, leads to a wide-scale systemic perspective that attends as much to the macro-allocation (or misallocation) of educational funds as to the local dynamics of this or that classroom.  Schools in Illinois, as elsewhere, are funded by property taxes, and because urban property values tend to be lower, urban schools generate less revenue than is available in the richer suburbs.  In 1992, Illinois voters narrowly rejected a statewide constitutional amendment to provide funding equalization (a constitutional amendment requires 60% support, while this one received 56%).

The passions elicited by the issue of educating children run deep.  Ayers recounts the firestorm provoked when, in 1988, then-governor of Illinois James Thompson resisted higher funding for Chicago schools – he didn’t want to throw more money into a “black hole.”  When one of Chicago’s representatives in the state legislature accused Thompson of having made a racist comment, pundits accused the legislator of playing the race card.  But such back-and-forths are not surprising given the complex racial politics that have characterized the city’s political history, not to mention the long period of conflict between the city and its teachers union that led to a regular cycle of walkouts in the 1980’s and ‘90’s.  One can gather some sense of Ayers’ fuller indictment in the following passage, also written in the mid-1990’s:

Returning to Chicago [from a discussion of schooling in South Africa], a similarly persuasive argument can be made that the failure of some schools and some children is not due to a failure of the system.  That is, if one suspects for a moment the rhetoric of democratic participation, fairness, and justice, and acknowledges (even tentatively) that our society, too, is one of privilege and oppression, inequality, class divisions, and racial and gender stratifications, then one might view the schools as a whole as doing an adequate job both of sorting youngsters for various roles in society and convincing them that they deserve their privileges and their failures.  Sorting students may be the single, brutal accomplishment of U.S. schools, even if it runs counter to the ideal of education as a process that opens possibilities, provides opportunities to challenge and change fate, and empowers people to control their own lives.  The wholesale swindle in Chicago, then, is neither incidental nor accidental; rather, it is an expression of the smooth functioning of the system.

The movement that emerged as a reaction to the frustrating situation in Chicago was in large measure centered on the idea of accountability, a rhetorical rubric that can accommodate both conservatives (who might prefer to emphasize how schools fail to respond to or engage the interests of parents) and liberals (who might prefer to emphasize the need for greater investments, paired with oversight better able to hold bureaucracies to account).  Emerging as it did under the leadership of Mayor Harold Washington, the mobilization of parents and educational reformers brought (Ayers and Klonsky argue) African-American parents to the forefront, along with the traditional themes of civil rights organizing (grassroots activity, decentralization, desegregation, community empowerment).  The movement was also assisted by the then-recent emergence of academic research that provided concrete data capable of calling attention to the true problems.  Early on, Mayor Washington was able to bring together mainly minority parents and white business leaders, all of whom shared concerns about poor schooling, but that coalition fragmented when the funding issue percolated to the top of the reform agenda (community leaders favored more equitable tax policies and greater funding, while many in the business community were opposed).

Starting with the local is an ongoing theme in Ayers’ work, and in an essay he wrote in 1988 it becomes an explicit focus of his account of his own past.  Ayers wrote:  “My experience with making change leaves me unimpressed with theories of change, big theories and little theories alike.  Big theories are often compelling because of their bold self-assurance and their tidy certainty…, [but] too often the self-activity of people is lost in a kind of determinism…  Small theories of change promise a different kind of certainty, but they fail as often for missing the larger context…”  Such a view, in turn, has shaped Ayers’ subsequent work on education as social justice, in which he repeatedly insists he is seeking not airy abstraction but on-the-ground changes for children.

Ayers departs from social justice accounts of education that see schooling primarily as a mechanism for improving students’ economic and social prospects.  For Ayers such an approach reflects a certain naivete, since it rests on a basic endorsement of the overall forces and institutions that shape society and often constrain progress even for the well educated (the emphasis in such an approach can rest too fully on equipping under-educated students for society, without enabling changes in the political and social system that would make the resulting educated citizens more welcome).  Ayers thus argues that social justice education has to be politically empowering even as basic life skills are inculcated, so that schools might be imagined as fostering real political agency.

The challenge, of course, is that education is complicated and the dynamics of successful teaching cannot be reduced to axiomatic rules teachable in college education classrooms.  In Teaching Toward Freedom, his 2004 book, Ayers (channelling Walt Whitman) cites the following as offering a more hopeful (and explicitly poetic) view of the well-formed citizen:

Love the earth and the sun and the animals, despise riches, give alms to everyone that asks, stand up for the stupid and the crazy, devote your income and labor to others, hate tyrants, argue not concerning God, have patience and indulgence toward the people, take off your hat to nothing known or unknown or to any man or number of men, go freely with powerful uneducated persons and with the young and with the mothers of families, re-examine all you have been told at school or church or in any book, dismiss whatever insults your soul, and your very flesh shall be a great poem.

SOURCES:  William Ayers, “The Republican’s Favorite Whipping Boy, Former Student Radical William Ayers Tells What it Was Like to Be Painted as a Symbol of Evil by McCain and Palin,” Montreal Gazette, 8 November 2008, B7; Colin Moynihan, “Ex-Radical Talks of Education and Justice, Not Obama,” New York Times, 27 October 2008, A22; William Ayers and Michael Klonsky, “Navigating a Restless Sea:  The Continuing Struggle to Achieve a Decent Education for African American Youngsters in Chicago,” Journal of Negro Education 63.1 (1994): 5-18; Ayers, “The Shifting Ground of Curriculum Thought and Everyday Practice,” Theory into Practice 31.3 (Summer 1992): 259-263; Ayers, “Problems and Possibilities of Radical Reform:  A Teacher Educator Reflects on Making Change,” Peabody Journal of Education 65.2 (Winter 1988): 35-50; Emery Hyslop-Margison, “Teaching for Social Justice,” Journal of Moral Education 34.2 (June 2005): 251-256; John Pulley, “Former Radicals, Now Professors, Draw Ire of Alumni at Two Universities,” Chronicle of Higher Education, 16 November 2001, A32.
