Category Archives: History

Interpreting ossuary boxes

Roughly seven years ago the discovery of a 2,000-year-old bone box (or ossuary) engraved with the words “James, son of Joseph, brother of Jesus” was announced, setting in motion a media, scholarly, and now judicial frenzy.  There is not much doubt that the 20-inch-long box is about the right age to be from the period when Jesus lived; the controversy has to do with whether the inscription was added later.  The editor of the Biblical Archaeology Review (BAR first headlined the find in 2002 in an essay written by the Sorbonne scholar André Lemaire) has written a book defending the authenticity of the find, which he says makes this one of the greatest archeological finds of all time, since it would be the only contemporaneous evidence that Jesus lived and that the New Testament naming of his (step-)father and brother is accurate.  By contrast, Nina Burleigh has a new book out (Unholy Business:  A True Tale of Faith, Greed and Forgery in the Holy Land, HarperCollins) arguing the whole thing is, as the title implies, a gigantic hoax.

The antiquities collector who sprang the find on the world is Oded Golan, who says he bought the box from an Arab antiquities dealer whose name he cannot remember.  An investigation was subsequently undertaken by the Israel Antiquities Authority, which pronounced the inscriptions a fraud (its Final Report is available on its main website); soon thereafter Golan and three others were arrested and, for almost four years now, have been on trial for taking valuable historical artifacts and adding fake lettering in a scheme to make them massively more valuable.  Golan denies the charges.

The case is obviously complicated, and pretty interesting.  Golan is also accused of faking a tablet he claims came from the first Solomon Temple.  The ossuary, if confirmed, might rock the world of Christian scholarship (more on that in a moment); the Jehoash tablet, if confirmed, might rock the world of Judaism by proving the existence of Solomon’s Temple on the historically contested Temple Mount, site of the Al Aqsa mosque.

A lot of the skepticism derives from the fact that the finds just seem too good to be true.  The tablet contains sixteen full lines of text, when similar finds from the right period are lucky to include a smattering of textual fragments.  Burleigh notes that when the authorities searched Golan’s house, they found little baggies of ancient dirt and charcoal, along with the carving tools one would use to fake the age of an object.  During one search, as the Toronto Star reported it, “the James Ossuary was found sitting atop a disused toilet, an odd place, police felt, for a box purported to have once contained the DNA of Jesus’ family.”

The Israel Antiquities Authority sees the case as open and shut.  While some have argued that scientifically valid tests of the stone patina verify the authenticity of the engraved lettering, the panel of experts convened by the IAA judged the inscription a fraud.  Their argument was based in part on a finding that the inscription cut through the old patina (implying it was of recent origin).  Parts of the inscription, they argued, were recently baked on; in the more recently applied patina covering the portion of the inscription that seems to connect the box to someone named Jesus, they found trace elements that wouldn’t have existed in ancient Jerusalem but are found today in chemically treated tap water.

But under intensive questioning at trial, the case has weakened – one expert from Germany said the IAA had contaminated the key evidence, and another (Ada Yardeni) said she would leave her profession if the ossuary turned out to be a fake.  Opponents of the IAA conclusions argue that its objectivity cannot be trusted given the IAA’s strong opposition to artifacts brought to light via the commercial antiquities trade.  The testimony has been so conflicted that two months ago the judge actually suggested the prosecution drop the charges against Golan; he said it seemed unlikely to him that a conviction could be achieved (which in turn led Hershel Shanks, the BAR editor, to issue a report that the find had been “vindicated” – writing this month that the “forgery case collapses”).  Burleigh is frustrated because a possible key witness is an Egyptian who says he used to forge for Golan; but Egypt won’t extradite the man, he doesn’t seem interested in testifying, and so his story likely won’t be heard.  Defenders of the box’s authenticity argue Burleigh is just trying to sell her book, whose thesis blows up if the find proves genuine (and so, they insinuate, she’ll say anything to discredit it).

The whole thing got even wilder earlier this year when a documentary film produced by James Cameron (yes, the Titanic guy) was released.  Directed by Simcha Jacobovici, The Lost Tomb of Jesus has by now screened around the world (Jacobovici has also co-authored a book on the subject called The Jesus Tomb, and the documentary aired under the title The Jesus Family Tomb on the Discovery Channel).  It argues that the James ossuary and others found nearby establish (at a high level, they say, of statistical probability) that what had been found was the final burial ground of Jesus’ family.  The statistical part is interesting – the expert quoted in the film did calculations given a series of contingencies laid out by the film’s director.  The statistician is credible (Andrey Feuerverger, of the University of Toronto), and the calculations have been judged serious and methodologically sophisticated in a peer-reviewed forum in a leading statistics journal, but the original parameters are highly disputed (especially given how common the names Mary, Jesus, Joseph, and James were back then).
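To see why those disputed parameters matter so much, here is a minimal sketch of the kind of calculation at issue – emphatically not Feuerverger’s actual model, and every name frequency and tomb count below is a hypothetical placeholder rather than a historical estimate:

```python
# Hypothetical illustration: how assumed name frequencies drive the
# expected number of tombs matching a name cluster purely by chance.

def expected_matching_tombs(name_freqs, n_tombs):
    """Expected tombs carrying the whole name cluster by chance,
    assuming each name occurs independently at its assumed rate."""
    joint = 1.0
    for freq in name_freqs.values():
        joint *= freq
    return joint * n_tombs

# Two hypothetical sets of first-century name frequencies.
if_common = {"Jesus": 0.09, "Joseph": 0.14, "Mary": 0.25, "Judah": 0.10}
if_rarer = {"Jesus": 0.04, "Joseph": 0.09, "Mary": 0.20, "Judah": 0.05}

for label, freqs in [("common names", if_common), ("rarer names", if_rarer)]:
    # Assume roughly 1,000 excavated family tombs (illustrative only).
    print(label, "->", round(expected_matching_tombs(freqs, 1000), 3), "tombs")
```

Under the first set of assumptions a chance match is quite plausible; under the second it becomes an order of magnitude less likely.  Shift the inputs and the conclusion shifts with them – which is exactly why critics focus on the contingencies rather than the arithmetic.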

Stephen Pfann, from the University of the Holy Land, isn’t buying it:  “What database serves as the basis for establishing the probability of this claim?  There are no surviving genealogies or records of family names in Judea and Galilee to make any statement concerning frequency of various personal names in families there.”  Joe Zias, former curator of archeology at the Rockefeller Museum in Jerusalem, quoted in a March 2007 Newsweek article, was even blunter: “Simcha has no credibility whatsoever.  He’s pimping off the Bible…  Projects like these make a mockery of the archeological profession.”

Smart people got involved in the film (among them Princeton’s James Charlesworth and the University of North Carolina [Charlotte]’s James Tabor), but the film still reaches pretty far.  Based on a fourth ossuary from the same tomb (which some now aim to turn into a mega-tourist site), the filmmakers (here quoting a summary by David Horovitz in the Jerusalem Post):

 …point to Ossuary 701… inscribed “Mariamne,” who they say is identified as Mary Magdalene in the 4th-century text The Acts of Philip.  And since Mary Magdalene is in the Jesus family tomb, and ultra-modern testing has established, astoundingly, that her bone-box and Jesus’ contained DNA of non-blood relatives, she must have been Jesus’ partner, they reason.  And since there’s a “Judah son of Jesus” in the tomb too (Ossuary 702) they dare to suggest he was most likely their son.

Why, it’s the Da Vinci Code, all over again!  Burleigh half jokingly predicts we’ll soon see Solomon’s crown and Abraham’s sandals appearing on the antiquities market.

The case, beyond its intrinsic interest, has implications for how knowledge is created, distorted, and popularized.  Some believers eager for evidence confirming their faith prove gullible to media mythmakers who popularize (and sometimes grotesquely distort) the scientific basis for their claims.  And the scientists get hauled into courts, where the standards of evidence vary dramatically from the tests of the laboratory or the peer-review publication process.  Two sides get ginned up, science goes on trial, and (as Burleigh puts it) “the subjective underbelly of the science is… exposed…, big time” (qtd. by Laidlaw, Toronto Star, 11/4/08).  In cases of ambiguity, either fraud is perpetrated or doubt is cast on potentially astonishing discoveries.  The debate rages on forever, creating cottage industries of scholarly blood feud.  It is this very cycle that accounts for the fact that Holy Family tombs have now been “authenticated” (as the Newsweek report put it) beneath the Dormition Abbey in Jerusalem and at another site in Ephesus (the Catholic Church says Mary was buried in both places), on the rock on which the Church of the Holy Sepulchre was erected in Jerusalem (Constantine said that was where Jesus was laid to rest), and in a tomb in Safed (where last year Tabor said he found a Jesus tomb).

Stay tuned.  The Golan trial gets going again later this month.

SOURCES:  “’Jesus box’ may not be a fake after all,” Daily Mail (London), 30 October 2008, pg. 11; Stuart Laidlaw, “Forgery of antiquities is big business,” Toronto Star, 4 November 2008, pg. L01; David Horovitz, “Giving ‘Jesus’ the silent treatment,” Jerusalem Post, 2 March 2007, pg. 24; Nina Burleigh, “Faith and fraud,” Los Angeles Times, 29 November 2008, pg. A21; “Forgery case collapses,” Biblical Archaeology Review, January/February 2009, pgs. 12-13; Lisa Miller and Joanna Chen, “Raiders of the lost tomb,” Newsweek, 5 March 2007, pg. 60; Nicole Gaouette, “What ‘Jesus hoax’ could mean for Mideast antiquities,” Christian Science Monitor, 19 June 2003, pg. 7.

Publishing the papers of the U.S. founders

More than a half century ago, Congress committed to producing definitive editions of the papers of the American founders – Alexander Hamilton, John Adams, George Washington, James Madison, Thomas Jefferson, and Benjamin Franklin in particular.  The first volume (volume one of the Jefferson papers, as it happened) was published in 1950, while Harry Truman was president.  Since then only the Hamilton papers have been completed.  As Senator Patrick Leahy (D-VT) put it in congressional hearings held last February:

According to the National Historical Publications and Records Commission [NHPRC], the papers of Thomas Jefferson will not be completed until 2025, the Washington papers in 2023, the papers of Franklin and Madison in 2030, and the Adams papers in 2050.  That is a hundred years after the projects began.  We spent nearly $30 million in Federal taxpayer dollars on these projects, and it is estimated another $60 million in combined public and private money is going in here.  One volume of the Hamilton papers costs $180.  The price for the complete 26-volume set of the papers is around $2,600.  So… only a few libraries [have] one volume of the papers, and only six percent [have] more than one volume.

The challenge, of course, is that everyone wants these collections, which have often been described as American Scripture, to be academically accurate, definitively comprehensive, and available yesterday.  But the imperatives of accuracy and speed work at cross purposes.  Some sense of why it takes so long to pull together and confirm the impossibly numerous details was conveyed in a story told by the historian David McCullough, who testified at the hearings.  McCullough, now at work on a Jefferson project, wanted to know the exact contents of the eighty or so crates Jefferson shipped back to Virginia while he was doing diplomatic work in France, information he rightly felt might convey some sense of Jefferson’s thinking.  The answer was to be found in volume 18 of the published papers, “the whole sum total in a footnote that runs nearly six pages in small type.”  McCullough has proposed that the national investment in the work of editing be doubled, so that the papers can be published more speedily but at no loss of historical quality.

The complications of doing this work are legion.  The papers of contemporary presidents are routinely collected and published soon after administrations end, but it wasn’t until 1934 and the founding of the National Historical Publications Commission, the precursor to today’s NHPRC, that a serious effort was made to comprehensively collect 18th-century documentation, often scattered in private collections.  Although 216 volumes have now been published and praised, frustration with the anticipated 2049 completion date has resulted in a drumbeat of criticism.  Private funding has been mobilized (the Pew Charitable Trusts was the main original funder and has been persistent in directing funds over the years, including a failed 2000 challenge grant of $10 million – more on that soon), and the pace of publication is accelerating, but the final deadlines remain far off.

Rebecca Rimel, president of Pew, argues that there has been too little accountability for funds already spent – “there has never been a full accounting of the Founding Fathers Project.  There has been a lack of performance metrics” able to measure progress over time, she argues (11).  Pew has a special reason for frustration:  it made the funding it coordinated contingent on production of such information, and it says the information has never been forthcoming.  The criticism was reiterated in a more particular way by Deanna Marcum of the Library of Congress, who expressed the concern that the university projects are spending too much of the funding to float graduate student stipends and support connected graduate programs, sometimes at the expense of faster methods of completion (37).  Stanley Katz has responded to this critique by noting that the expenditures of the projects are held tightly accountable to the reporting processes at NHPRC and NEH, in ways no different from any other funded project supported by those agencies.

The scholarly challenges of doing this work are also enormous.  To assure that consistently high standards of annotation are used in all the collections, very complex protocols of verification and citation are in place.  When one hears that a given project may “only” be producing one or two new volumes a year, it is easy to forget that each of these volumes may run to 800 pages with a large number of small-print footnotes, and the Washington papers alone run to 27,347 pages.  Ralph Ketcham, an emeritus historian at Syracuse University who has spent his entire career on these projects (first working on Franklin and now on Madison), noted that the longevity of many of the Founders poses additional challenges – “It’s not surprising,” he noted, “that Alexander Hamilton’s papers are the only ones that have been completed.  The chief editor of the Hamilton papers, Harold Syrett, emphasized long ago that he thought he might dedicate his volumes to Aaron Burr, who made completion of the task possible” (14).  (Burr, of course, killed Hamilton in their 1804 duel, cutting the paper trail short.)  Sometimes this longevity results in vast collections of material – if the microfilm of the Adams collection were stretched out (it includes the presidential papers but also the materials produced by Henry and Charles Francis Adams), it would extend more than five miles (McCullough, pg. 20).  The actual papers, when not in the custodial care of the Library of Congress, have to be transcribed and proofread on-site at collections often unwilling to let them physically travel.  To take just one example, the Jefferson papers are geographically dispersed across more than 100 repositories worldwide (Katz, pg. 18).

Fundraising has always been a challenge despite recent Congressional support.  The projects were intended from the outset to be funded privately, although public funds have also been allocated (the National Endowment for the Humanities started providing project grants in 1994).  Stanley Katz, a Princeton professor and former president of the Organization of American Historians, chairs the NHPRC’s fundraising operation, whose major purpose is to raise money for all the Founders projects so as to free the scholars at work on annotation from that burden; although the organization has raised millions, millions more are needed.  Although federal funding was restored after considerable lobbying, last year’s Bush budget proposal recommended zeroing out the NHPRC altogether.  And the story of the ultimately failed Pew matching grant, which imposed a probably impossible challenge, is also instructive:  Pew (this according to Katz, pg. 28) gave Founding Fathers Papers, Inc., nine months to come up with the requisite 3-to-1, $30 million match.  When they couldn’t raise that much money so quickly, the Pew match was withdrawn.  The model of creating so-called “wasting funds” (large endowments designed to spend down to zero with the completion of a project) makes sense (the strategy was used to complete the Woodrow Wilson papers and is a solution to the threats posed by funding uncertainty), and the Pew impulse to put tight timeframes on creating such funds also makes sense.  But overly optimistic, too-fast timetables can produce wasted effort and, in the end, funding failure.
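For readers unfamiliar with the mechanism, here is a minimal sketch of the “wasting fund” arithmetic – the figures are illustrative assumptions, not the actual Wilson-papers or Pew numbers:

```python
# Illustrative "wasting fund" arithmetic: an endowment sized to spend
# down to exactly zero when the editorial project is scheduled to end.

def annual_drawdown(endowment, annual_return, years):
    """Level annual payout that exhausts the endowment after `years`,
    assuming a constant investment return (the standard annuity formula)."""
    r = annual_return
    return endowment * r / (1 - (1 + r) ** -years)

# Hypothetical: a $30 million fund, 5% returns, 20 years to completion.
payout = annual_drawdown(30_000_000, 0.05, 20)
print(f"Sustainable drawdown: ${payout:,.0f} per year")  # about $2.4 million
```

The attraction is predictability:  the editors know their annual budget in advance, and the funder knows the fund dies with the project – which is precisely why tight timetables for raising such a fund are tempting, and why missing them is so costly.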

Katz has also warned against the temptation of thinking the projects can simply be scaled up to speed publication:  “These are rather extraordinary works of scholarship.  This is a craft skill, this is not an industrial skill.  It can’t be scaled up in the way that industrial skills can” (12).  Progress has been expedited by splitting up projects so that different parts can be worked on simultaneously; this is the strategy now in use with the Jefferson and Madison papers.  But because this is already the case for most of the series in process, the marginal possibilities for accelerating production are likely not as great as one might imagine.

A common refrain is to call attention to the presumed absurdity of continuing the commitment to expensive hard-copy printing, when many imagine the papers could be scanned, put up on the Web, and annotated perhaps through the collaborative, Wikipedia-style work of a preselected group of scholars.  In fact, this is already well underway, though the new commitments add major new work for existing teams.  Allen Weinstein, the U.S. Archivist, has committed to online dissemination, and digital commitments go back all the way to 1988, when agreements were made with the Packard Humanities Institute.  Packard continues to plug away along with the University of Virginia Press (whose electronic imprint is called Rotunda).  The University of Virginia work has also received major support from the Mellon Foundation.  Rotunda, which is receiving no public funds for its work (31), has already posted the papers of Washington and Dolley Madison, with the Adams, Jefferson, Ratification, and James Madison papers slated for online publication by the end of 2009.

But that solution, as anyone who has struggled to put up a respectable website knows, is a lot more complicated than it may seem.  For one thing, unlike the recent NEH initiative to digitize American historical newspapers, which can be electronically scanned, the handwritten papers of the founders have to be keyed in and verified by hand, one document at a time, an exceptionally labor-intensive process.  For another, the publication arrangements made with major university presses make it a challenge to place unannotated material on a website, since doing so would seriously undercut the investments those presses have made in anticipation of returns upon publication.  And nationally sanctioned authoritative editions need to be handled with great care and with sensitivity to the fast-changing environments of digital presentation, so that money is not wasted on formats that will soon be judged unworthy of the material.  Still, the Library of Congress, which has proprietary control over many of the materials, has already begun significant digitization connected with its American Memory Project (e.g., all the Washington, Jefferson, and Madison papers are available online).  Its position is that it can do the job given more money.

And thus the brilliant, historically incomparable annotated editions of the Founding papers roll out, one expensive volume at a time, exactingly researched and perpetually in search of financial support, in the hope that their educational potential for scholars, citizens, and students will not be delayed for yet another half century.

SOURCE:  The Founding Fathers’ Papers:  Ensuring Public Access to Our National Treasures, Hearings before the Senate Judiciary Committee, S. Hrg. 110-334 (Serial No. J-110-72), 7 February 2008.

When social science is painful

The latest issue of the Annals of the American Academy of Political and Social Science (#621, January 2009) is wholly devoted to the report Daniel Patrick Moynihan authored in 1965 on the status of black families, “the most famous piece of social scientific analysis never published” (Massey and Sampson, pg. 6).  The report arose out of Moynihan’s experience in the Kennedy and Johnson administrations working on poverty policy; his small group of underlings included Ralph Nader, working in his first Washington job.  Inspired by Stanley Elkins’ work on slavery (his book Slavery argued that the institution set in motion a still-continuing tendency toward black economic and social dependency), Moynihan’s group examined the ways in which welfare policy was, as he saw it, perpetuating single-parent households headed mainly by women, at the expense of social stability and racial progress.  [In what follows I am relying almost totally on the full set of essays appearing in the January 2009 AAPSS issue, and the pagination references that follow are to those articles.]

Moynihan was writing in the immediate aftermath of the passage of the 1964 Civil Rights Act, and a principal theme of the report is that the eradication of legal segregation would not be enough to assure racial equality, given larger structural forces at work.  Pressures on the black family had produced a state of crisis, a “tangle of pathology” that was reinforcing patterns of African-American poverty, he wrote.  Moynihan’s larger purpose was to recommend massive federal interventions, a goal subverted, unfortunately, by the report’s rhetorical overreaching (e.g., matriarchy in black families was said to prevent black men from fulfilling “the very essence of the male animal from the bantam rooster to the four star general… to strut”).  The solution, in his view, was to be found in a major federal jobs program for African American men.

The report was leaked to the press and was, by and large, immediately condemned, first because it seemed to provide aid and comfort to racists in its emphasis on out-of-wedlock births as a demographic pathology, and second because it seemed to many readers a classic case of “blaming the victim.”  In fact, the term “blaming the victim” may have its genesis in William Ryan’s use of the phrase to critique Moynihan in the Nation.  I think it likely that the cultural salience of these critiques was later reinforced by a memo Moynihan wrote to Richard Nixon advocating the idea that “the issue of race could benefit from a period of ‘benign neglect,’” a locution he came to regret, since that one soundbite came to dominate the actual point of the memo, better encapsulated in this line:  “We need a period in which Negro progress continues and racial rhetoric fades” (contrary to the impression given by the benign-neglect comment, he was actually trying to be critical of the hot and racially charged rhetoric coming from Vice President Agnew).  Moynihan’s report proved divisive in the African American community, endorsed on release by Roy Wilkins and Martin Luther King, Jr., but condemned by James Farmer.  By the time the report itself was more widely read, its reception had been distorted by the press frame, and a counter-tradition of research, celebrating the distinctiveness of black community formation, was well underway.

Read by today’s lights, the Moynihan report has in some respects been confirmed, and its critics partly vindicated.  The essays in this special issue offer many defenses.  Douglas Massey (the Princeton sociologist) and Robert Sampson (chair of sociology at Harvard), writing in the introduction (pgs. 7-8), defend the report against the accusation of sexism:

Although references to matriarchy, pathological families, and strutting roosters are jarring to the contemporary ear, we must remember the times and context.  Moynihan was writing in the prefeminist era and producing an internal memo whose purpose was to attract attention to a critical national issue.  While his language is certainly sexist by today’s standards, it was nonetheless successful in getting the attention of one particular male chauvinist, President Johnson, who drew heavily on the Moynihan Report for his celebrated speech at Howard University on June 4.

Ironically, though, the negative reactions to the leaked report (reactions that festered because the report itself was not publicly circulated, only critical synopses of it) led Johnson himself to disavow it, and no major jobs program for black men was forthcoming as part of the Great Society legislative action.  Moynihan left government soon afterward and found the national coverage, much of which attacked him as a bigot, scarring and unwarranted given the overall argumentative arc of the report.  Only when serious riots erupted again in 1968 did jobs get back on the agenda, but the watered-down affirmative action programs that resulted failed to transform the economic scene for racial minorities while proving a galvanizing lightning rod for conservative opponents (Massey and Sampson, 10).  The main policy change relating to black men since then has been sharp increases in rates of incarceration, not rises in employment or economic stability, a phenomenon that is the focus of an essay by Bruce Western (Harvard) and Christopher Wildeman (University of Michigan).

Several of the contributors to the special issue write mainly to insist that Moynihan has been vindicated by history.  His simple thesis – that pressures tending to disemploy men in a subgroup will in turn fragment families and produce higher incidences of out-of-wedlock birth and divorce, mainly at the expense of women and children – is explicitly defended as having been vindicated by the newest data.  James Q. Wilson writes that the criticism the report received at the time “reflects either an unwillingness to read the report or an unwillingness to think about it in a serious way” (29).  Harry Holzer, an Urban Institute senior fellow, argues that the trends in black male unemployment have only intensified since the 1960s, thereby reaffirming the prescience of Moynihan’s position and strengthening the case for a dramatic federal response (for instance, Holzer argues that without larger educational investments, destructive perceptions of work opportunities will harden into barriers to cultural transformation).  The thesis of the essay by Ron Haskins (of the Brookings Institution) is announced by its title:  “Moynihan Was Right:  Now What?” (281-314).

Others argue that the Moynihan claims, which relied on the assumption that only traditional family arrangements can suitably anchor a culture, ignore the vitality of alternative family forms that have become more common in the forty years since.  Frank Furstenberg notes that “Moynihan failed to see that the changes taking place in low-income black families were also happening, albeit at a slower pace, among lower-income families more generally” (95).  For instance, rates of single parenting among lower-income blacks have dropped while increasing among lower-income whites.  Linda Burton (Duke) and Belinda Tucker (UCLA) reiterate the criticism that the behavior of young women of color should not be pathologized but is better understood as a set of rational responses to the conditions of cultural uncertainty that pervade poorer communities (132-148):  “Unlike what the Moynihan Report suggested, we do not see low-income African American women’s trends in marriage and romantic unions as pathologically out of line with the growing numbers of unmarried women and single mothers across all groups in contemporary American culture.  We are hopeful that the uncertainty that is the foundation of romantic relationships today will reinforce the adaptive skills that have sustained African American women and their families across time” (144).  Kathryn Edin (Harvard) and colleagues criticize Moynihan’s work for diverting research away from actual attention to the conditions of black fatherhood, which in turn has meant that so-called “hit and run” fathers could be criticized in ways that run far out of proportion to their actual incidence in urban populations (149-177).

The lessons the AAPSS commentators draw from all this for the practice of academic research are interesting.  One, drawn by Massey, relates to the “chilling effect on social science over the next two decades [caused by the Moynihan report and its reception in the media].  Sociologists avoided studying controversial issues related to race, culture, and intelligence, and those who insisted on investigating such unpopular notions generally encountered resistance and ostracism” (qtd. from a 1995 review in Massey and Sampson, 12).  Because of this, and the counter-tendency among liberal/progressive scholars to celebrate single parenting and applaud the resilience of children raised in single-parent households, conservatives were given an ideological opening to beat the drum about welfare fraud, drug usage rates, and violence, and to pathologize black men – an outcome Massey and Sampson argue led to a conservative rhetoric of “moralistic hectoring and cheap sermonizing to individuals” (“Just say no!”).  Not until William Julius Wilson’s The Truly Disadvantaged (1987) did the scholarly tide shift back to a publicly articulated case for social interventions more in tune with Moynihan’s original proposals – writing in the symposium, Wilson agrees with that judgment and traces the history of what he argues has been a social-science abandonment of structural explanations for the emergence of poverty cultures.  The good news is arguably that “social scientists have never been in such a good position to document and analyze various elements in the ‘tangle of pathology’ he hypothesized” (Massey and Sampson, pg. 19).

The history of the report also calls attention to the limits of government action, a question with which Moynihan is said to have struggled for his entire career in public service.  Even accepting the critiques of family disintegration leaves one to ask what role the government might play in stabilizing family formations, a question now controversial on many fronts.  James Q. Wilson notes that welfare reform is more likely to shift patterns of work than patterns of family, since, e.g.,  bureaucrats can more reasonably ask welfare recipients to apply for a job than for a marriage license (32-33).  Moynihan’s answer was that the government’s best chances were to provide indirect inducements to family formation, mainly in the form of income guarantees (of the sort finally enacted in the Earned Income Tax Credit).  But asked at the end of his career about the role of government, Moynihan replied:  “If you think a government program can restore marriage, you know more about government than I do” (qtd. in Wilson, 33).

Moynihan was an intensely interesting intellectual who thrived, despite his peculiarities, in the United States Senate (four terms from New York before retiring and blessing Hillary Clinton’s run for his seat), as he had earlier while serving as Nixon’s ambassador to India and Ford’s representative at the United Nations.  At his death in 2003, a tribute in Time magazine said that “Moynihan stood out because of his insistence on intellectual honesty and his unwillingness to walk away from a looming debate, no matter how messy it promised to be.  Moynihan offered challenging, groundbreaking – sometimes even successful – solutions to perennial public policy dilemmas, including welfare and racism.  This is the sort of intellectual stubbornness that rarely makes an appearance in Washington today” (Jessica Reaves, Time, March 27, 2003).  His willingness to defend his views even when they were deeply unpopular gave him a thick skin and the discipline to write big books during Senate recesses while his colleagues were fundraising.

Moynihan’s intellectualism often put him at odds with Democratic orthodoxy, and sometimes on the wrong side of an issue – he opposed the Clinton efforts to produce a national health insurance system, publicly opposed partial-birth abortion (“too close to infanticide”), and was famously complicit in pushing the American party line at the United Nations, a role much criticized as enabling the slaughter of perhaps 200,000 victims killed in the aftermath of Indonesia’s takeover of East Timor.  But he also held a range of positions that reassured his mainly liberal and working-class base:  he opposed the death penalty, the Defense of Marriage Act, and NAFTA, and famously championed reducing the government’s proclivity to classify everything as top secret.

But Daniel Patrick Moynihan will be forever linked to his first and most (in)famous foray into the nation’s conversation on race, which simultaneously revealed the possibilities for thoughtful social science to shape public policy and the risks of framing such research in language designed to make it dramatic and attention-getting amid a glutted sea of white papers and task-force reports that typically come and go without any serious notice.

Neil Armstrong’s sublime silence

Over the holiday I had a chance to watch Ron Howard’s elegant documentary about the US-USSR race to the moon, a film that interviewed nearly all of the surviving astronauts who walked on the moon.  All, that is, but Neil Armstrong, the very first human being to set foot on the lunar surface.  If human beings are still around in 5,000 years, and barring a catastrophic erasure of human history, Neil Armstrong’s name will still be known; his serendipitous selection to be the first astronaut to step outside the lunar module, at 2:56 UTC on July 21, 1969, will still be celebrated as an astonishing feat of corporate (by which I simply mean massively collective) scientific enterprise; and the one line first spoken from the moon’s surface – “That’s one small step for [a] man, one giant leap for mankind” – will still be recited.  Since more than two-thirds of the world’s population had not yet been born in 1969, perhaps my thought is a naive one; I hope not.

Armstrong has been accused of being a recluse (the historian Douglas Brinkley famously described him as “our nation’s most bashful Galahad”), but that descriptor doesn’t quite work.  After all, Armstrong, now 78 years old, followed up his service to NASA by doing a USO tour with Bob Hope and then a 45-day “Giant Leap” tour that included stops in Soviet Russia.  For thirteen months he served as NASA’s Deputy Associate Administrator for Aeronautics, and then taught at the University of Cincinnati for eight years.  More recently he served as a technical consultant on two panels convened to report on space disasters (in the aftermath of the Apollo 13 and Challenger explosions; Armstrong vice-chaired the Rogers Commission investigating the latter).  He has spoken selectively at commemorative events, including a White House ceremony recalling the 25th anniversary of the moon walk, a ceremony marking the 50th anniversary of NASA just a couple of months ago, and the 2007 opening of a new engineering building named after him at Purdue University (his alma mater).

So, no, Neil Armstrong is not a recluse in the sense we typically ascribe to monks or the painfully shy.  He is willing to be interviewed (though he seems tough on his own performances, which may explain some of his selectivity in accepting offers – after a 60 Minutes profile in which he participated, he gave himself a C-minus).  He gives speeches.  He has been happy to offer commentary on public policy subjects relating to outer space.  But what he has refused to do is endlessly reflect on what he did that July day.  And I admire him for this – not because others who have been forthcoming and talkative about the experience are to be criticized (their stories are compelling, their histories are worth recalling, and Aldrin and Lovell and the others have been important ambassadors and salesmen for space exploration) – but because what Armstrong did, and the event in which he so memorably participated, would be diminished by more talk.

The recognition of this fact is the brilliance of the one line he so famously spoke, which remains a masterfully succinct response to a world-historical moment.  Speech was required – the first man to step on the moon had to say something, after all – but too much yammering would have undermined the collective majesty of the moment, and excessive talk after the fact would have done the same.  Can you imagine school children a thousand years from now watching hours upon hours of the alternative, Neil Armstrong in a hundred oral-history interviews?  Were you sweating?  Did you burp in your space helmet?  Were your space boots chafing?  As you jumped off the last step did you think you would be swallowed up?  Did you get verklempt?  How do people pee in space?  How did the event compare with taking your marriage vows?  To whom were you dedicating the experience?  Did you hear God’s voice?  If you were, in that moment, a tree, what kind of tree would you have been?

Ugh.  No thank you.  I don’t want to know the infinite and microscopic details, and I don’t think they matter one whit.  The deeply powerful impression created by watching that grainy black-and-white event on a small television, for me as a child three days short of my eighth birthday, remains indelible – pay attention!  watch this!  look out the window – do you see the moon? – those people on the television are actually up there – one small step…  It was late at night (close to 11:00 EDT in the United States) and I was getting tired and grumpy – why weren’t we going home yet? – but when the moment came, I and the estimated 450 million others watching live (some estimates range as high as one billion) sat completely absorbed by what we were seeing, and later held our breaths to see whether the landing vehicle would escape the moon’s gravity.

And Neil Armstrong, at some deeply personal level, understands all this in a way that may be best analogized to the disappearance of musicians and celebrities who leave the stage and never reappear.  In the television context, think of Johnny Carson or Lucille Ball, who knew they could only subvert the quality of their life’s work by agreeing to appear in “comeback specials” and all the rest.  (This is why DVDs with nonstop director’s commentary are so often, in my view, a terrible mistake – let the work make its own impression.)  And so Armstrong, since 1994 or so, has stopped signing autographs (he found out they were simply being sold for profit and decided he didn’t want to be involved, paradoxically of course only increasing their value).  He also hasn’t been arrested for shoplifting or been accused of harassment or even, so far as I know, been caught speeding, any of which would also have diminished his most publicly visible moment of achievement in the space program.

In the words of one writer, “Neil Armstrong achieved one of the greatest goals in human spaceflight but then did not go on to proselytize the faith…  For True Believers in the Cause, this is apostasy, and they resent him for it.”  Thomas Mallon, writing in the New Yorker, seemed to criticize Armstrong (the implicit assumption being that he’s too litigious) because he sued his barber – it turns out the man was cutting Armstrong’s hair and then selling it online.  I think Armstrong was right:  the hair thing was cheap and exploitive and diminished the work.

When Armstrong agreed to participate in the writing of a biography, which appeared in 2006 (James Hansen, First Man:  The Life of Neil A. Armstrong, Simon and Schuster), there was a lot of speculation that at last its subject was prepared to go onto the couch, if only to debunk the stories implying there was something creepy about his reluctance to talk all the time to reporters.  In reading the book I am struck by the good choice Armstrong made in settling on a collaborator – Hansen’s book is saturated with information (almost four hundred pages before we even get to Apollo 11), but the information is crisply organized.  Hansen refuses the temptation to plant thoughts, speculate endlessly about feelings, and so on, and if he pressed Armstrong to undergo psychoanalysis, it doesn’t come across in the narrative.  Some have criticized the short final section (covering the years after the moon landing) as less interesting, and others have found fault with the fact that the book reveals Armstrong’s occasional interpersonal coldness and the toll his career took on his family life.  Only in reading that Armstrong didn’t take souvenirs on the mission for his two sons did I start to think this is too much information.  But I found myself wondering whether his notorious interpersonal coolness is also the reason he made a perfect astronaut – ice in the veins, cool under pressure, and all that.

Neil Armstrong is no Superman.  He was one of a thousand test pilots who might have served as the public face of the mammoth and expensive engineering triumph that achieved spaceflight, and had he come down with the flu it would probably be Buzz Aldrin we most remember today.  And so my point is not to celebrate the relative silence because it creates a mythology.  To the contrary, what I admire about Armstrong’s long refusal to be daily feted and interrogated about July 21 is that as he recedes, the work is allowed to dominate the scene.  In the eloquence of the one first sentence he spoke from the lunar surface, and in his silence on that experience since, the sublime accomplishment of this supreme national effort is best recollected.

Oh, and one other thing:  Armstrong donated the royalties from the biography to Purdue, to be used to build a space program archive there.

Perfect.

How free trade regimes collapse

Under circumstances of international economic duress, free trade is especially jeopardized:  democratically elected officials, even those committed in principle to unfettered commerce as the best available engine of economic growth, will yield to local demands for protection.  Desperate to preserve market share, governments will be tempted to raise tariffs that make imports more expensive and locally produced goods relatively cheaper, or they will be persuaded that economic exigency warrants temporary protections that will likely only induce retaliation elsewhere.  Thus are set in motion cycles of retaliatory protectionism like the one precipitated by the notorious Smoot-Hawley tariffs (the Tariff Act of 1930), now believed to have worsened the deep economic depression of the late 1920s and 1930s.

When Herbert Hoover signed the Smoot-Hawley Act in June 1930, he did so over the objections of Thomas Lamont, a J.P. Morgan partner and Hoover adviser (“I almost went down on my knees to beg Herbert Hoover to veto the asinine Hawley-Smoot Tariff”), and of a thousand economists who petitioned against the bill.  The legislation jumped duties on almost 900 American imports.  Decades later, debating the merits of NAFTA with Ross Perot on Larry King’s CNN show, then-Vice President Gore presented Perot with a framed picture of Smoot and Hawley.

Free trade regimes are regularly defended by economists, and the intellectual commitment remains strong despite work done over the last several decades showing that nascent markets require, or at least have inevitably benefited from, protective regulatory regimes, and that the countries today most adamant in their declared support for free trade (like the United States and Japan) provided intensive protection for their long-dominant manufacturing sectors.  But the arguments for and against protectionism are alive again thanks to the severity of the current economic downturn.

The uncertain signals Barack Obama sent during the campaign (he said we couldn’t hide from the world economy, but also that NAFTA should be renegotiated) have been reinforced by his early Cabinet picks.  As Clive Crook put it last week, “Mr. Obama’s US trade representative (his chief international negotiator) will be Ron Kirk, a former mayor of Dallas, a leading proponent of NAFTA and a long-time supporter of liberal trade.  His appointment disappoints the president’s supporters on the left of the party.  The new labour secretary has them applauding, however:  she is Hilda Solis, an ally of the unions, a leader in Congress of opposition to the Central American Free Trade Agreement and a forthright critic of orthodox liberal trade.”  Larry Summers, director-designate of the National Economic Council, is a big-time free trader; Bill Richardson, picked to be the next Commerce Secretary, kept calling for “fair” trade on the campaign trail.

The debate over the consequences of free trade, especially when broadened beyond the technical mathematical models and into the domain of distributive politics, gets complicated fast.  Even the strongest advocates of free trade agree that codifying it sets in motion significant sectoral dislocations that imperil communities and work at odds with local social-justice imperatives.  President Clinton tried to square this circle by arguing that trade needed to be promoted, but that such promotion also needed to include strengthened social safety nets to help those dislocated by the vagaries of global capital, a position that has become today’s trade realpolitik on both sides of the political aisle.  This position has enabled the ongoing negotiation of global trade instruments, both on a regional and bilateral basis (such as the agreement with Colombia negotiated and now advocated by President Bush) and within the ongoing framework of what started as the General Agreement on Tariffs and Trade (GATT) and has since evolved into the World Trade Organization legal regime.

The common expectation is that the global trade talks most recently stalled in the so-called Doha Round will be even more imperiled by the bad economy.  Clive Crook:  “With unemployment rising, wages under pressure and no firm countervailing push from the administration, protecting jobs (or claiming to, at any rate) is likely to be a higher priority than liberal trade.  The prospects for widening the opportunities for international commerce look grim.”  The Doha failure was disconcerting even to some economic progressives since global poverty was explicitly on the agenda – “while idealistic in its goal, [Doha] set out in 2001 to develop a new platform for global cooperation that would depart from traditional aid and development programs” (MacBain, 39-40).  The World Bank had estimated that a Doha agreement might have brought as many as 32 million persons out of extreme poverty.

Just this month, the World Bank projected that the total volume of global trade is likely to fall by 2% in 2009, the first actual drop since 1982 (the estimate may be conservative given December reports, based on year-to-date data through November, showing roughly twenty-percent drops in exports from Taiwan, Chile, and South Korea).  Several countries, including Russia and India, have already announced tariff increases, although the jumps so far haven’t posed a major threat because they merely reverse earlier cuts that had taken applied rates below the ceilings countries are bound to under international law (and so the new increases don’t violate WTO protocols).  But even this path to higher tariffs poses dangers:  “If all countries were to raise tariffs to the maximum allowed, the average global rate of duty would be doubled, according to Antoine Bouet and David Laborde of the International Food Policy Research Institute in Washington, DC.  The effect could shrink global trade by 7.7%” (Economist, “Barriers”).  As the magazine editorialized, even “a modest shift away from openness – well within the WTO’s rules – would be enough to turn the recession of 2009 much nastier.”
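A toy example may clarify the “headroom” between applied and bound rates that Bouet and Laborde are describing – the numbers below are invented for illustration, not their data:

```python
# Invented illustration of "bound" vs. "applied" tariffs: countries can
# legally raise applied rates all the way up to their WTO-bound ceilings.

# (applied_rate, bound_rate) pairs for three hypothetical countries.
tariffs = {"A": (0.04, 0.10), "B": (0.06, 0.15), "C": (0.08, 0.12)}

applied_avg = sum(a for a, _ in tariffs.values()) / len(tariffs)
bound_avg = sum(b for _, b in tariffs.values()) / len(tariffs)

print(f"average applied duty: {applied_avg:.1%}")       # 6.0%
print(f"average if all go to bound: {bound_avg:.1%}")   # 12.3% - roughly doubled
```

In other words, tariffs can double without a single rule being broken, which is why “well within the WTO’s rules” is cold comfort.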

In such an environment, two prospects seem especially likely (beyond the obvious one:  mounting outright protectionism and the risk of reprisals).  One is that global economic giants like the European Union, China, and the United States will continue to sidestep global framework talks by cutting one-on-one deals with specific trading partners.  The problem with that approach is that side deals can complicate wider talks; local arrangements thus undermine international ones.  And one-on-one negotiations are more easily dominated by this-or-that industrial lobby, so that final arrangements end up sidestepping fundamental distortions in trading for the benefit of entrenched corporate interests on both sides.

The other prospect may be the most insidious, and it has already been set in motion with bipartisan support.  This approach provides protection not by taxing imports but by subsidizing domestic producers.  The latest automobile bailout is a classic example of this sort of non-tariff trade barrier:  American carmakers have been given a massive $18 billion economic benefit relative to the car companies manufacturing in other nations.  For now, the huge investments being made in national banking, manufacturing, and infrastructure have not triggered serious retaliation, since everyone is doing it and all agree that bailouts are needed to avert far greater economic catastrophe (at least this is the rhetorical bludgeon that has been used so far to enact gargantuan packages).

These subsidies are not new – in the United States, the 2008 Farm Bill promised another $20 billion in help for cash-crop producers.  But individual ad hoc bailouts acquire an accelerating logic, and turn into subsidy cycles that are hard to resist – China is now talking about domestic steel subsidies and has already put in place more than 3,000 tariff rebates established to promote Chinese products.  Indonesia has raised tariffs on 500 products this month; France has started a national fund to protect French companies from being bought out by foreigners; and Russia has imposed a tax on imported pork and cars (Faiola and Kessler).  And “there are other, more subtle, means of protection available.  Marc Busch, a professor of trade policy at Georgetown University in Washington DC, worries that health and safety standards and technical barriers to trade, such as licensing and certification requirements, will be used aggressively to shield domestic industries as the global downturn drags on” (Economist, “Barriers”).

Over time these localized subsidies unravel both the legal architecture of global trade and the political good will necessary to sustain it.  Part of the reason the Doha Round failed was an inability to come to terms on long-standing trade subsidies, such as American and EU cash support for their agricultural sectors, and on other smaller but inflammatory provocations.  When America refuses to import Chinese toys, we say it’s on account of safety, but the Chinese see a trade barrier.  When France throws up obstacles to importing American wine, the French say they are simply protecting their national culture, but we cry foul.  South Africa insists that Italian mining companies doing business outside Pretoria adhere to affirmative-action mandates, which Italy says is an impediment to international commerce.  Europe threatens to prohibit the import of American cars because they pollute too much, and we cry protectionism.  And so on.  Patterns of reasonable protection are thus made acrimonious, and nations unable to throw cash at their favored industries consider reverting to more traditional forms of tariff protection.  And this is how trade wars are ignited.

It has to be conceded that the political/rhetorical threat of “trade war!” is too easily bandied about, and liberals have long rightly complained that economic-justice policies are inevitably thrown under the bus when such Great Depression talk looms.  As I read a recent column by Jeff Immelt, CEO of General Electric, laying out a case for why “business and government leaders must reset the debate, re-establishing why interdependent economies and healthy competition are good for the world,” and then proposing six “GE” principles for taking charge of such a debate that try to have it both ways (protectionism must be resisted and global trade must be fair), I admit to skepticism.  And even a recent essay in the free-trade-leaning Economist conceded that “few economists think the Smoot-Hawley tariff was one of the principal causes of the Depression.”

But the history of the Smoot-Hawley protections is cautionary nonetheless.  The bill was not supposed to be so draconian; it started as a much more modest effort to provide some quick help to American agriculture.  “With no obvious logic – most American farmers faced little competition from imports – attention shifted to securing for agriculture the same sort of protection as for manufacturing, where tariffs were on average twice as high” (Economist, “Battle”).  In a nearly six-month conference reconciliation process, the bill was steadily larded up – Robert La Follette, the Wisconsin progressive, said the bill was “the product of a series of deals, conceived in secret, but executed in public with a brazen effrontery that is without parallel in the annals of the Senate.”

While the actual economic costs associated with the bill’s 890 tariff hikes were modest, the deal soured international comity – the League of Nations (which of course the United States had never joined) was negotiating a “tariff truce,” which fell through in the resulting acrimony.  Even in a climate like today’s, where product manufacturing is deeply interdependent and reliant on multinational industrial networks, political disputes can easily escalate.  British Prime Minister Gordon Brown has already given major speeches implicitly connecting the car bailout to protectionism, the German automaker Opel has used the bailout to argue for €1 billion in credit guarantees, and the EU recently agreed to a $50 billion package of support that will help European automakers meet newly toughened environmental standards.  And regimes of free trade, deeply imperfect as they are, may thus give way to even more destabilizing nationalistic free-for-alls.

SOURCES:  Clive Crook, “Obama has to lead the way on trade,” Financial Times, 22 December 2008, p. 9; Jeff Immelt, “Time to re-embrace globalisation,” Economist/World in 2009, p. 141; “The battle of Smoot-Hawley,” The Economist, 20 December 2008, pgs. 125-126; “Barriers to entry,” The Economist, 20 December 2008, pg. 121; “Farewell, free trade,” The Economist, 20 December 2008, pg. 15; Louise Blouin MacBain, “Doha’s good deeds,” World Policy Journal (Summer 2008): 39-43; Anthony Faiola and Glenn Kessler, “Trade barriers toughen with global slump,” Washington Post, 22 December 2008, p. A01; Jemy Gatdula, “Trade tripper: Cars, plans, and bailouts,” Business World, 28 November 2008, pg. S1/5.

How global warming imperils our history

C. Brian Rose, president of the Archaeological Institute of America, introduced the November/December 2008 issue of Archaeology with an editorial that begins as follows:

Global warming is real and it is one of the gravest threats facing our shared cultural heritage.  According to the National Oceanic and Atmospheric Administration, the ten warmest years have all occurred since 1995, and the UN’s Environment Program notes that the world’s glaciers are receding at a record pace.  This situation brings a cascade of problems that are having a catastrophic impact on archeological sites.  Melting of ice and permafrost endangers most frozen sites on the continents, while rising sea levels promote the erosion and submergence of others…  Examples in recent years include Ötzi, the late Neolithic herdsman discovered in the Italian Alps; the 550-year-old Native American hunter whose body was recovered from a melting glacier in British Columbia; and the Inca human sacrifices found on Andean peaks.  Similarly endangered are the frozen burials of Eurasian nomads…  Remains of 5,000-year-old stone houses built by Neolithic farmers and hunters at Skara Brae, Orkney, may have to be dismantled and moved inland for protection.  Portions of the ruins of Nan Madol, an ancient political and religious center on the Pacific island of Pohnpei in Micronesia, may soon be submerged.

In the context of the larger consequences of global climate change, these effects on the historical record may seem incidental or modest, but of course the losses might be permanent and, as Rose has noted, are not that difficult to document now.  He calls for a joint UNESCO/NASA/ESA program to do fast satellite imaging to map glaciers, since the ultraviolet readings can lead investigators to burial sites.

Every other year the World Monuments Fund releases a “world monuments watch list” to call attention to endangered sites.  For the first time, in 2008, global climate change is named as a cause of urgent concern, with the Fund noting that “several sites… are threatened right now by flooding, encroaching desert, and changing weather patterns.”  Two examples:  (a) Herschel Island, Canada, “home to ancient Inuit sites and a historic whaling town at the edge of the Yukon Territory that are being lost to the rising sea and melting permafrost in this fastest-warming part of the world”; and (b) Sonargaon-Panam City, Bangladesh, “a former medieval trading hub and crossroads of culture, whose long-neglected and deteriorating architecture is increasingly threatened by flooding in this low-lying country, one of the most vulnerable to the impacts of global warming.”  The dangers, because they are likely to approach gradually, are easy to ignore, and in the context of archeological sites where the main evidence is already obscured and not in plain sight, awful losses might be occurring without anyone even knowing it.

Despite such warnings, there is little evidence of policy action to conserve historical preservation sites, perhaps not surprising given the lack of action on climate change’s broader consequences.  A recent study published in the journal Climatic Change notes that even in Great Britain, where some emphasis has been placed on cataloging climate effects, the “lack of a widespread consideration of heritage has resulted in a relatively low profile more generally for the subject.”  A 2005 UK Environment Agency report organized to set “a science agenda… in light of climate change impacts and adaptation” never mentions heritage preservation.

The danger does not derive simply from changing levels in oceans and rivers.  The 2006/2007 “Heritage at Risk” report argues that climate change was partly responsible for the summer 2007 fires that were among “the largest catastrophes in the Mediterranean in the last century.”  Warming was at fault because it made fires more common and intense; research reported by the Athens Observatory notes that global warming also changes soil humidity levels, which further contributes to fire susceptibility.  While climate change is not the only cause of fires, their severity in 2007 raised alarms in the historical preservation community, especially given damage to “our cultural heritage in the Peloponnese.  This included the Arcadian landscapes, Byzantine churches and monasteries, Apollo Epicurius at Bassae (a World Heritage Site), the Antiquities in Ilieia and especially the archeological site of Olympia (also a World Heritage Site).  There was damage to the area surrounding the Olympia archeological site.  The Kladios stream, a tributary of the Alpheios River, was burnt to a great extent, whereas the Kronios Hill was burnt entirely.  The park and the surroundings of the International Olympic Academy were destroyed.  Furthermore, some slopes near the ancient stadium were also burnt.”

The Centre for Sustainable Heritage at University College London released a major report on these issues in 2005, Climate Change and the Historic Environment, authored by May Cassar.  The document summarizes a comprehensive effort to catalog the risks, but for me its most compelling move is to open by quoting Titania’s “weather speech” from A Midsummer Night’s Dream (Act II, Scene I), which eerily anticipates the threat and may even have been prompted by the “meteorologically turbulent time when Shakespeare was writing his play” (Cassar):

                …the spring, the summer,
                The childing autumn, angry winter, change
                Their wonted liveries, and the mazed world
                By their increase, now knows not which is which:
                And this same progeny of evils comes
                From our debate, from our dissension;
                We are their parents and original.

SOURCES:  A.J. Howard et al., “The impact of climate change on archaeological resources in Britain: A catchment scale assessment,” Climatic Change 91 (2008): 405-422; May Cassar, Climate Change and the Historic Environment (London: English Heritage and the UK Climate Impacts Programme, 2005).

On the relevance of Lionel Trilling

I am aware of no specific anniversary that has prompted the spate of recently revitalized interest in the life and work of Lionel Trilling, the legendary Columbia University professor and author, most famously, of The Liberal Imagination (1950).  But suddenly his writing has sprung back into intellectual circulation:  the first third of an unfinished novel, The Journey Abandoned, has been published this year, and New York Review Books has just reissued The Liberal Imagination.  Read by today’s lights, which is to say outside the culturally dominant frame of the Cold War and American anti-communism that shaped its production and Trilling’s world view, it is hard to imagine what made it a national bestseller (more than 100,000 copies were sold in paperback).  All the essays had previously appeared in print, many in the Partisan Review to which Trilling was long attached, and many engage particular novelistic texts in ways one would assume rather inaccessible to the wider reading public.  Still, I have found myself attracted to The Liberal Imagination (and have recently been reading my way through it), partly because of the way it has been described as a “monument of humanism” (McCarter) but also simply to gain some purchase on the basis of his enormous influence in American literary critical circles.

Louis Menand’s introduction to the new reprint has been strongly attacked by Leon Wieseltier (a Trilling student) as misconstruing Trilling’s sense of the relationship between art and politics and thereby demeaning the sense of urgency Trilling saw in the literary critical enterprise.  But it rightly calls attention to a combination of humility and arrogance I find attractive in his work.  Trilling did not mainly want to be remembered as a critic (he wished most of all to be considered a novelist); indeed, because he knew only the English language, he worried that he was not even properly a scholar.  “But,” writes Menand, “although he may not have wanted what he had, and he may not have understood entirely why he had it, he appreciated its value and tended it with care.”  The result is deeply polished prose that, if it fails, does so because Trilling’s work is saturated with dialectical tendencies that can frustrate anyone seeking finally to understand his position, not because of any overweening arrogance in his compositional style.

The central theme of the book, which was also a central problematic of Trilling’s lifetime critical production, strikes me as possessing a profound continuing relevance, even if Trilling’s own position reads as less coherent than it would have more than a half century ago.  Trilling was concerned to specify, and sometimes to ambiguate, the relationship between literature and liberal politics.  Liberalism’s ideological impulses (and this is true of all ideological formations) lead to an inevitable oversimplification of the human condition – in liberalism’s case by reducing the aim of all politics to the attainment of equality and freedom, which, when applied, risk doing violence to the rougher edges of the polity that should by liberalism’s own lights be tolerated.  Liberalism therefore required reflective challenge if it was to survive without lapsing into empty and dangerous dogma.  Because conservatism seemed to Trilling an unavailable corrective for producing morally mature individuals (as he famously put it in the preface, “In the United States at this time liberalism is not only the dominant but even the sole intellectual tradition.  For it is the plain fact that nowadays there are no conservative or reactionary ideas in general circulation.”), it fell to the novelist to interrogate the tendency toward empty certitude to which liberalism in all its American variations was prone.

Why literature?  Because great novels (and for Trilling this mainly meant stories to some extent historically distant from contemporary culture) offer representations that invite critical speculation and open ethical vistas.  This is so because the novelist situates moral and political struggle within characters, imagined persons who make ideological abstractions concrete and whose embodiment reveals the limits of theory (Donald Pease has suggested that Trilling’s main contribution was to “elevate the liberal imagination [and the liberal anticommunist consensus] into the field’s equivalent of a reality principle”).  Literature, Trilling wrote, is “the human activity that takes the fullest and most precise account of variousness, possibility, complexity, and difficulty.”  And all this is accomplished in a manner assured to interest and engage readers able to connect emotionally to vivid and rich scenes of imagined human interaction.  The novel thus possesses the twin capacity to enact moral ambiguities while attracting audiences more numerous than those who would ever read theology or philosophy or other theory.  (Ironically, perhaps, John Vernon criticized Trilling’s later writing as offering a wholly disembodied and thus cold analysis, which is to say his criticism lacked the formal virtues of the novels he so regularly praised.)

Trilling did not believe that literature always apprehends or represents or has some unique insight into the Truth.  He understood that not all writers see themselves as working in explicit opposition to liberalism, which for him was beside the point, since any rich, ethically interrogative novel poses a useful if implicit challenge to ideological certitude.  Nor did he believe that writers have (either on account of their separation from the wider culture or their innate madness) special access to privileged knowledge.  He simply believed that writers who attempt to offer richly plotted stories recognizable to their readers will necessarily induce critical analysis and reflection.  As Menand notes, referring to Trilling’s famous essay “On the Teaching of Modern Literature,” Trilling

…had come to believe that “art does not always tell the truth or the best kind of truth and does not always point out the right way, that it can even generate falsehood and habituate us to it, and that, on frequent occasions, it might well be subject, in the interests of autonomy, to the scrutiny of the rational intellect.”  …Humanism might be a false friend.  This willingness to follow out the logic of his own premises, to register doubts about a faith for which he is still celebrated by people who are offended by attempts to understand books as fully and completely implicated in their historical times, is the finest thing about his work.

Along with mass culture, literary criticism can too easily become a culprit in degrading the complexity proper to a well-functioning liberalism.  If the critic ignores the broader culture and its history altogether (the major shortcoming Trilling saw in the work done under the name New Criticism), or insists on applying the strictures of scientific covering laws or a predetermined ideology, all the richness of the realist novel is erased, and liberalism’s potential platitudes are simply opposed with the verities of alternative, over-simple theories of collective life.

In judging the contemporary relevance of Trilling’s case for high literary culture, one immediately wonders whether a position so intimately connected to the hyper-ideological Cold War culture of the 1950s makes sense in today’s arguably post-ideological times.  Here is the case made by McCarter:

The “Stalinist-colored” ideas that Trilling sought to rebuke are now tough to spot, unless you’re a Fox News contributor.  But even as some liberal excesses have receded, the book has lost none of its urgency.  For it celebrates something that is imperiled in our high-speed, always-on media culture:  imagination itself.  Trilling foresaw the threat:  “The emotional space of the human mind is large but not infinite, and perhaps it will be pre-empted by the substitutes for literature – the radio, the movies, and certain magazines,” he wrote, prophetically.  A shrinking national attention span and eroding reading habits aren’t just bad news for liberal politics.  The moral imagination excited by good books, he argues, teaches us sympathy and a respect for variety:  the waning novel leads to “our waning freedom.”

Such a position is not altogether self-evident, especially given the manner in which popular culture has been vigorously defended over the last quarter-century (or more) as enabling vernaculars both of understanding and of potential resistance to the stultifications of ideology.  To specify the point with a rather mundane question: why are the nation’s critical faculties exercised by reading an E.M. Forster novel (a writer Trilling praised) but not by seeing A Room with a View at the cinema multiplex?  I have not yet encountered a fully elaborated critique of popular cultural mass mediation in Trilling, but I can imagine some lines of argument he might attempt.  He might first call to mind his often articulated view that the historical distance created by great novels is required to counteract the tendency to revert to current ideological accounts, a possibility subverted by necessarily simple film or journalistic treatments that translate rich novels into the contemporary vernacular.

Trilling might also invoke the long-standing case against mass culture as inevitably inclined toward conformity and utopianism.  Versions of that case often start from the view that, organized as they are by the desire for lowest-common-denominator mass audiences and by shyness about controversy (since controversy can be a stigma that suppresses profits), mass cultural artifacts inevitably lapse into intellectual quietism or outright boosterism for self-satisfying verities.  As Hersch puts the potential case, “while literature encourages critical reflection, mass culture produces a predetermined emotional and intellectual response in the reader, discouraging and atrophying the ability to think independently.  Such pseudo-literature encouraged passivity, paving the way for totalitarianism.”  Agree or disagree, this view of mass culture may have contributed to Trilling’s own late-in-life pessimism even regarding the capacity of literature to break through, since (again quoting Hersch), “in a conformist culture, literature presents minority views that are likely to be scorned by the majority” (99).

Even conceding Trilling’s case, which many thoughtful observers of contemporary culture would never do (Herbert Gans and Raymond Williams would stand near the head of a long line), Trilling is often attacked for reading liberalism as wholly shaped by a monolithic middle class that, if it existed in the 1950s, certainly does not today (a point that underwrites part of Cornel West’s critique) and that, given current conditions of fragmentation, probably cannot be rearticulated.  Another common criticism is that in developing his case for interrogating liberalism Trilling only paved the way for neoconservatism (a cottage industry continues to debate whether Trilling was a closet neoconservative:  his wife Diana adamantly refused the possibility, while Irving Kristol claimed that Trilling was simply a neocon lacking the courage to say so in print).

Both arguments, it seems to me, miss the deeper commitment in Trilling’s work to a messy and complex humanism, and his recognition that if societies are to proceed thoughtfully they require both a sense of common vision and purpose and an always acknowledged sense that ideologies cannot be permitted, in the name of such commonalities, to erase or suppress what he called the “wildness of spirit which it is still our grace to believe is the mark of full humanness.”  As Bender has argued, “Trilling’s very middle classness – by providing the perspective of distance – ends up, however paradoxically, providing contemporary American culture with a radical challenge, urging critics to find some space among nostalgia, politicized group identities, and specialized academic autonomy for the creation of a public culture” (p. 344).

SOURCES:  Lionel Trilling, The Liberal Imagination: Essays on Literature and Society, intro. Louis Menand (New York:  New York Review Books, 2008 [1950]); Jeremy McCarter, “He Gave Liberalism a Good Name,” Newsweek, 6 October 2008, p. 57; Leon Wieseltier, “The Shrinker,” New Republic, 22 October 2008, p. 48; Louis Menand, “Regrets Only: Lionel Trilling and His Discontents,” New Yorker, 29 September 2008, pp. 80-90; Russell Reising, “Lionel Trilling, The Liberal Imagination, and the Emergence of the Cultural Discourse of Anti-Stalinism,” boundary 2 20.1 (1993): 94-124; Donald Pease, “New Americanists: Revisionist Interventions into the Canon,” boundary 2 17 (1990); Cornel West, The American Evasion of Philosophy (Madison: University of Wisconsin Press, 1989); Thomas Bender, “Lionel Trilling and American Culture,” American Quarterly 42.2 (June 1990): 324-347; John Vernon, “On Lionel Trilling,” boundary 2 2.3 (Spring 1974): 625-632; Charles Hersch, “Liberalism, the Novel, and the Self:  Lionel Trilling on the Political Functions of Literature,” Polity 24.1 (Fall 1991): 91-106; Robert Genter, “‘I’m Not His Father’: Lionel Trilling, Allen Ginsberg, and the Contours of Literary Modernism,” College Literature 31.2 (Spring 2004): 22-52; T. H. Adamowski, “Demoralizing Liberalism:  Lionel Trilling, Leslie Fiedler, and Norman Mailer,” University of Toronto Quarterly 73.3 (Summer 2006): 883-904.

Remembering the radio “War of the Worlds”

Seventy years ago this week, Orson Welles and his Mercury Theatre on the Air performed a radio broadcast version of H.G. Wells’ War of the Worlds which immediately became a legendarily contested example of the power of mass mediated communication.  The broadcast, enlivened with simulated but realistic-sounding journalistic reporting, told the story of a Martian invasion presented as actually underway in Grover’s Mill, New Jersey.  An absence of commercial interruptions helped convince some listeners that the drama was in fact a nonfictional account, and the ensuing reports of panic – the New York Times front page headline read “Radio Listeners in Panic, Taking War Drama as Fact” – were judged by one scholar to have affected as many as 1.7 million listeners (of the six million estimated to have heard the broadcast) who “believed it to be true,” of whom 1.2 million were said to be “genuinely frightened.”

The War of the Worlds radio broadcast reached far fewer people than the number who read about it in the more than 12,000 newspaper accounts published afterward.  Some argue that the disproportionate attention resulted from poor reporting practices and an implicit arrogance among newspaper reporters about the delusional power of the newer broadcast media; the net effect of these self-interested accounts was a widespread and exaggerated sense of the actual panic.  Others have noted that while the contemporary accounts do establish that many listeners were genuinely frightened, far fewer actually acted on their fear in ways confirming a belief that an invasion was actually underway.

A sense lingers to this day that the radio listeners panicked by the broadcast were rubes or hopelessly naive (for Hitler the incident confirmed American “decadence”); a recent reenactment staged as a theatrical production in Washington, DC, even planted panicky rubes in the audience who would leap to their feet and act frightened.  But this sense is not quite fair given the care taken to trick the listening audience.  Welles is alleged to have timed the script so that listeners of the much more popular NBC competitor (the Chase & Sanborn show) who turned the dial over to CBS would arrive having missed the opening disclaimers.  The tale was also specifically manipulated to take advantage of the broadcast norms of the time, when news interruptions were taken seriously and had not yet been parodied in this way.  And of course many of those panicked were spooked less by having heard the radio show and judged it real than by having been told about it through the rumor mill.  Elsewhere, random coincidences contributed to misperception:  in Concrete, Washington, the local power went out at the moment of highest drama, and listeners seem to have found confirmation of the radio drama as their lights flickered out.  The embarrassment aroused in the aftermath of the national broadcast and its unmasking as pure fiction is sometimes said to have led Americans at first to downplay news reports coming out of Pearl Harbor.

The irony, of course, is that media spoofs continue to trigger panic to this day, making it hard to sustain the narrative that we get it in ways our parents did not.  Eleven years after the original broadcast, a Spanish-language version of WOTW produced on Ecuadorean radio reportedly set off widespread panic; when the broadcast’s fictional nature was revealed, angry crowds mobbed Radio Quito and six people died in the volatile aftermath.  In the early 1970s a Buffalo (NY) radio station updated the script and induced some panic with its scenes of a Martian invasion of Niagara Falls.  Beyond the WOTW episodes, one might point to the hoaxes perpetrated by broadcasts in Estonia (1991, which set in motion a brief currency crisis), Bulgaria (1991, which triggered panic about nuclear safety), Belgium (2006, when a fake-news announcement that Flanders was seceding provoked alarm), or Boston (2007, when a weird marketing campaign triggered a mammoth security and bomb alert crackdown).

These sporadic but continuing episodes of apparent dread induced by Orson Welles and his imitators have produced a weird mix of analysis and disclaimer in the scholarship centered on media influence.  Some conspiracy theorists have kept alive the rumor that the WOTW broadcast was actually a secret project of the Rockefeller Foundation, a live-action social science experiment.  The more serious accounts still tend either to anchor a Whig narrative of media history (people used to be gullible, and incidents like the Wells/Welles broadcast gave rise to accounts of the media’s totalizing power that we now understand to be naive) or to connect to warnings about how media influence is taught today.  With respect to the latter, David Martinson (a Florida International University professor) has written:

Many communication scholars trace a decline in support for the magic bullet theory – interestingly and paradoxically – to a radio broadcast that many “lay persons” continue to cite as “definitive” evidence to support their belief in an omnipotently powerful mass media.  That broadcast was Orson Welles’ adaptation of H.G. Wells’ War of the Worlds, which was broadcast on Halloween eve in 1938…  But – and this became critically important as researchers later examined the impact of the Welles broadcast – everyone did not panic.  If the magic bullet theory were valid, there should have been something approaching almost full-scale hysteria.  Instead, studies showed “that ‘critical ability’ was the most significant variable related to the response people made to the broadcast.”

Martinson’s point is reasonable, but the difficulty, it seems to me, relates to ongoing and conflicting tendencies in the scholarship to disavow strong media effects even as reports of extreme consequence surface.  The point is not that one should or can generalize from scattered reports of extreme panic to make indefensible claims about the sustained influence of the media forming the wallpaper backdrop of contemporary culture, but rather that in disclaiming strong overall media effects one should not disavow their possibility altogether.  Beyond the continuing incidence of extreme reactions, which obviously arise under very peculiar circumstances, media scholars still struggle to explain the durability of media influence, both at the level of the specific program and as it shapes a culture’s fantasies.  Some notion of the latter is conveyed by Jeffrey Sconce’s Haunted Media: Electronic Presence from Telegraphy to Television (Duke UP, 2000), which calls attention to the subtle ways in which mass mediation reinforces cultural fantasies about the human capacity to connect with the spirit world and other domains of existence.

The genuine and, I think, undeniable psychic unease that new forms of mass mediation continue to provoke is not simply the result of a lurking but sometimes absent sense of spiritual fulfillment and a related longing to truly connect with forces external to oneself.  Rather, the wider but too often demeaned significance of mass media influence also connects to the large-scale accomplishments of massive industrialization and organization that underwrite the contemporary networked society (electrical grids, systems of food distribution, bureaucratized multinational corporate culture), systems of connection and alienation that because of their size also evoke paranoia, incomprehension, and, sometimes, panic.  The technologies of mass media production (e.g., digital special effects) allow artists to exaggerate and evoke extreme responses.  As Ray Bradbury put it in an introduction to a recent collection of WOTW materials,

Wells and Welles prepared us for the delusionist madness of the past fifty years.  In fact, the entire history of the United States in the last half of the twentieth century is exemplified beautifully in Wells’ work.  Starting with the so-called arrival of flying saucers in the 1950s, we’ve had a continuation of our mild panic at being invaded by creatures from some other part of the universe….  So we are all closet paranoids preached to by a paranoid…  The War of the Worlds is a nightmare vision of humanity’s conquest – one that inspired paranoia in all its forms throughout the twentieth century… Truth be told, ever since the novel and the broadcast, we are still in the throes of believing that we’ve been invaded by creatures from somewhere else.

In the context of the now-popular efforts to render visually and emotionally the perils of global warming, some commentators have taken to referring to climate porn – those money shots in which we all seem to revel in the cinematic moment when the Statue of Liberty is wiped out by a tidal wave and the oxygen is sucked instantaneously out of flash-frozen lungs.  An American paleoclimatologist (William Hyde), reviewing The Day After Tomorrow (the 2004 blockbuster I’m referring to), noted that “this movie is to climate science as Frankenstein is to heart transplant surgery.”

The persistence and pervasiveness of mass mediated evocations of deep unease – enacted in everything from science fiction, to negative political commercials (e.g., Elizabeth Dole’s truly revolting new ad that seriously claims, over sinister music, that her Senate opponent Kay Hagan is secretly part of an atheistic cabal), to Snakes on a Plane depictions that only minimally metaphorize Terrorists on a Plane, to endlessly emailed conspiracy messages about Barack Obama’s “true” religious commitments – have to be taken seriously even while one also insists on the limits of media influence.  In an age of too readily trumped-up dread, it seems to me overly simple to conclude, with Michael Socolow, that accounts of media influence can be deeply discounted simply by asking:  Would you have fallen for Welles’ broadcast?  If not, why do you assume so many other people did?

To the contrary, it seems to me that people fall for the hyped and distorted accounts of mega-risk (yes, me included) all the time.  Indeed, the words screamed in Indianapolis upon hearing War of the Worlds seventy years ago might as easily have been uttered around the nation seven years ago on another crisp autumn day:  “New York is destroyed.  It’s the end of the world. We might as well go home to die.  I’ve just heard it on the radio.”

SOURCES:  Michael Socolow, “The hyped panic over ‘War of the Worlds,’” Chronicle of Higher Education, 24 October 2008, pp. B16-B17; The War of the Worlds:  Mars’ Invasion of Earth, Inciting Panic and Inspiring Terror from H.G. Wells to Orson Welles and Beyond (Naperville, Ill.: Sourcebooks MediaFusion, 2005); David Martinson, “Teachers must not pass along popular ‘myths’ regarding the supposed omnipotence of the mass media,” High School Journal, October/November 2006, pp. 16-21; Matthew Warren, “Drama, not doomsday,” The Australian, 28 August 2008, p. 10; “The archive – November 1, 1938 – ‘US panic at Martian attack: Wireless drama causes uproar,’” The Guardian (London), 8 November 2007; Jason Zinoman, “Just close your eyes and pretend you’re scared,” New York Times, 17 October 2007, p. 3; Michael Powell, “Marketing gimmick does bad in Boston: Light devices cause bomb scare,” Washington Post, 1 February 2007, p. A3; “TV prank leaves country divided,” New Zealand Herald, 4 January 2007.

Retrieving the historical David

The question of whether the biblical David actually existed, and the extent to which he ruled over a minor tribe or a major kingdom, may seem tangential to those who connect to him mainly as a mythic figure or by religious faith – the boy who killed Goliath, the young shepherd who loved Jonathan and whose musical playing could calm king Saul’s savage migraines and who later became king, the older melancholy intellect we connect with the Psalms, and the scheming sexual predator who murdered a general to sleep with his wife.  But the archeological interest in recent finds that some see as confirming his biography has implications not only for Israeli identity and national history; the finds are also likely to play a role in the ongoing work to vindicate Zionism, since the state of Israel has long figured in the historical imagination as the modern-day incarnation of David’s united kingdom.

Yosef Garfinkel, a Hebrew University archeologist, has been overseeing a dig at Khirbet Qeiyafa near the historic Valley of Elah (the Bible says this is where David brought Goliath down, and the dig is only a couple miles from Goliath’s home town of Gath), located near the modern Israeli town of Beit Shemesh and containing the remains of a heavily fortified 3000-year-old city.  In the immediate vicinity, scholars have long been at work to unearth evidence of Philistine culture and militarism, and the cryptic remains that date to the 10th century B.C.E. are of special interest because that was the period of asserted national unification, when the biblical account says David brought together Judah and Israel and expanded the nation.  But such remains are hard to find, and as one journalist recently put it, “a number of scholars today argue that the kingdom was largely a myth created some centuries later.  A great power, they note, would have left traces of cities and activity, and been mentioned by those around it” (Bronner).  Work at the Elah Fortress began in earnest only earlier this year.

The absence so far of historical data from this period is why archeologists and biblical partisans are interested in a new find at the Garfinkel dig, shown in the photograph above.  A shard of pottery, inscribed in ink made of charcoal and animal fat and carbon dated by Oxford University to the period between 1050 and 970 B.C. (precisely the period in question), seems to offer evidence on several fronts.  First, if confirmed, it shows evidence of literacy from a period for which evidence of writing is sparse – important because it offers biblical scholars a way to square the writings of received scripture with the assumption that the period’s culture was largely or even exclusively oral.  Second, the site holds promise because it was active for only a very short period before the encampment was shut down, possibly in the aftermath of a military defeat at the hands of the Philistines, and so the normal blending of archeological evidence in layer upon contested layer is less an obstacle to historical analysis than it often would be in other settings.

Garfinkel argues that the pottery fragment, inscribed in a proto-Canaanite script, shows that the city was a forward-deployed military base of Israelite/Hebraic origin or use.  For him the site’s location, two days’ walking distance south of Jerusalem, proves or at least strongly suggests that the reach of David’s empire was considerable enough to invest the roughly ten years it would have taken to produce such fortifications.  The site is large, and its 700-meter-long city wall, enclosing six acres, would have taken a long time to construct (and the construction would not have been simple:  some of the stones comprising the wall weigh as much as eight tons apiece).  Such evidence might go far in settling an ongoing historical debate between those who argue the general surface remains show no evidence of urban centers or a population large enough to constitute a 10th-century B.C.E. kingdom, and those who read the anthropological evidence as strong enough to support the judgment that a centralized government and bureaucracy could have been sustained by the available population centers.

Garfinkel is no biblical literalist (“we have to calm down before we start jumping to sentimental, Biblical conclusions,” he has said).  For him the new find might simply suggest that the story of David and Goliath mythically represents the likely ongoing skirmishes between the people who lived in the Elah Fortress and the Philistines who lived nearby.  But skepticism has been expressed even about his more definitive claims, in part because some of the funding for the excavation comes from an organization called Foundation Stone, which encourages archeological work confirming the Jewish connection to the historical Holy Land.  Others are less sure than Garfinkel of what to make of the findings – while acknowledging the site’s archeological importance, for example, Amihai Mazar (another archeologist working at Hebrew University) wonders at the strength of the evidence the Fortress has so far yielded:  “The question is who fortified it, who lived in it, why it was abandoned, and how it all relates to the reign of David and Solomon,” David’s son.

A more complicated suggestion made by Garfinkel and his colleagues is that the pottery piece hints directly at David himself.  The text is not fully deciphered, but because the words king, judge, and slave appear there, the fragment may record some sort of official communique from the time of David’s rule, or a system of scribal regulatory conveyance.

For believers in the scriptural record, such a find would be a happy but unnecessary confirmation of what they already know:  that David unified a group of warring tribes into a significant Mediterranean kingdom whose existence was prophetically foreseen and which laid the foundations of the modern state of Israel (if one visits the website for the Israeli government one can see national historical maps that assume the biblical account of David and Solomon is literally true).  But for those who tend to read the Bible’s historical accounts as mainly mythological, the search for a historical David has never been settled by the biographical details enumerated in the books of First and Second Samuel.  And because so few extra-Biblical confirmations have been found (only one inscription from the period, the so-called Tel Dan stele, uses the phrase “House of David”), some doubt whether there ever was a King David.

A slew of suggestive recent finds has reactivated interest in the subject.  Beyond the new pottery shard, a Jerusalem researcher this past week claimed to have “found an ancient water drain mentioned in the Bible as the route used by David’s forces to capture the city from the Jebusites.  [And] in Jordan, scholars said they had uncovered an ancient copper excavation site that tests showed could be the legendary King Solomon’s mines” (Kalman).  The challenge is to avoid racing to friendly conclusions – on other occasions early finds have been publicly circulated as settling the questions surrounding the historical David (including the Mesha Stele and a Pharaonic inscription alleged to refer to the “highlands of David”), only to garner limited scholarly support after deeper investigation and debate.  Other sites have offered tantalizing hints of a Davidic reign (including digs done in the heart of Jerusalem and a site first announced as David’s Palace), but have been judged inconclusive, either because the sites were contaminated by remains from other periods or because they could not be definitively connected to the 10th-century B.C.E. period in question.

SOURCES:  Ethan Bronner (New York Times correspondent), “Dig may shed light on biblical David,” Atlanta Journal-Constitution, 31 October 2008, p. A7; Matthew Kalman, “‘Proof’ David slew Goliath found as Israeli archeologists unearth ‘oldest ever Hebrew text,’” The Mail online, 31 October 2008; Carolynne Wheeler, “Pottery shard lends evidence to stories of Biblical King David,” London Telegraph, 31 October 2008.

Reciprocity and 21st century liberalism

In Madison, Wisconsin this weekend for the biennial Public Address Conference, I had the pleasure tonight of hearing a most interesting keynote address given by John Murphy, a communication scholar at the University of Illinois, as well as responses given by two of the field’s most productive scholars.  The talk was aimed, one might say, at responding to Danielle Allen’s Talking to Strangers:  Anxieties of Citizenship Since Brown v. Board of Education (University of Chicago Press, 2006), which has gotten a very friendly reception in rhetorical studies because of the way she offers norms of reciprocated dialogue as a corrective to what she sees as increasingly extreme political practices of exclusion and polarization.  Allen’s argument sees such reciprocation as part of the solution to an increasingly problematic paradox of democratic politics:  even while democratic politics depends on “good losers” who will stick with the system and remain committed to its overall legitimacy even when they don’t win (elections, federally allocated benefits, and so on), liberalism also cultivates a hyper-competitiveness so extreme as to deny the possibility of good losers.  More often, and this is in my view a fair diagnosis of the American political scene, individualistic competitiveness leads to triumphalism for the electoral victors and causes us actually to “loathe the losers.”

The problem this creates is the emergence of now-significant segments of the national electorate who, having borne a disproportionate share of the burdens of representative government and having long been denied real access to electoral power (e.g., African Americans), are disillusioned with the whole system, distrustful of government, and unwilling to play a game they find forever rigged for the benefit of others.  Allen blames a lot of this on what she sees as the too-ready acceptance of the political theories of the Germans (Kant, Habermas, etc.), who incline commentators on the American political scene to see the problem as “too little deliberation,” who often disdain the everyday practices of political persuasion, and who seem simply to prefer a system that attains consensus (even at the expense of broader legitimation).  Allen sees the result as a kind of stunted liberalism, bleeding away its legitimacy and in need of a good (small-r) republican dose of what she calls a “citizenship of friendship,” committed either to a reformed Aristotelian republicanism (stripped of its historical disdain for public persuasion) or to a Habermasean deliberative democracy (stripped of its strong interest in consensus formation as the central purpose of political interaction).

If I’m rightly recalling Allen’s position (in Madison I’m away from my copy of the book), and if I understood him correctly, Murphy agrees with Allen on the diagnosis (liberalism has inclined too far in the direction of hyper-individualism) but not with her solution.  His concern is that scholars like Allen (and, he says, a number of communication theorists) revert to conceptions of engaged citizenship that require levels of attention and engagement that are simply unrealistic, and perhaps even unnecessary, in a frenetically globalized, 24-7, information-glutted world.  Far better, he argued, to reclaim the rhetorical resources of liberalism itself – the mechanisms by which speech can induce identification and empathy, both at the level of content and of form – for the purpose of redeeming the American polity.

Kennedy’s summer 1963 speeches – at American University, where he called for a new conception of Cold War competition with the Soviet Union, and on civil rights, in the wake of the desegregation standoff in Alabama – provide for Murphy exemplars of such a purpose.  Resisting interpretations that read Kennedy as either enacting Cold War realism or performing America’s civil religion, Murphy rightly focused on the norms and tropes of reciprocity that might induce a level of trust sufficient for adequate self-government.  In articulating the norms of respectful engagement with racial difference, Kennedy’s invitation to his white audiences of the early 1960s to imagine themselves as African American was not an empty thought experiment but a striking challenge to the national imagination and a provocation to political transformation.

Professor Murphy’s address invited a wide range of questions:  Was Kennedy’s rhetoric truly exemplary of liberalism as inflected by the American experience, or were the features of his public addresses simply idiosyncratically Sorensen?  Was the potential effectivity of Kennedy’s mode of address fulfilled or obliterated by the rhetorical practices of Lyndon Johnson?  Is the sort of reciprocity Kennedy practiced suitable to the complicated challenges presented today by issues like same-sex marriage and, more broadly, the problem identity politics poses for a wider liberal politics?  Can rhetorical reciprocity ever actually give voice to the historically disenfranchised – to women, to blacks, to Native Americans?  Or is even well-intentioned speech that tries to do justice to the experiences of marginalization suffered by others doomed to fail, since the norms and legal regimes that actually result from efforts to institutionalize equality are likely to favor the powerful who draft them?  Is the creation of public trust in government even desirable at a time when a politics of suspicion may be more suitable to the circumstances of insider government and barely concealed cronyism?  And would a rhetorical practice more fully committed to strategies of enacted reciprocity actually be effective?

I am intrigued by, but unsure about, whether the rhetorical resources of liberalism illuminated by John Kennedy’s speeches – even conceding the multiple strategies the ideology enables for creating community as opposed to simple hyper-individuality – are suitable to the changed conditions of public argument.  This is a question Prof. Murphy himself seemed to raise:  I thought I understood him at one point to be claiming that a redeemed liberal rhetoric is better suited to a populace too busy to engage in the time-intensive practices of debate and “good government” than the (potentially naive) campaigns to convert America’s citizens into full-time policy makers or investigative reporters.  Because that issue could not be fleshed out in the limited time allocated to the keynote, I won’t presume to speculate on how such a view might be fully defended.  But I confess to skepticism.

For example:  in a culture where a 24/7 media environment is dominated by commentators always at work to sow distrust about political opponents (I refer to the Ann Coulter strategy of equating liberalism with treason, although even more reasonable talking heads follow similar argumentative paths), is it reasonable to think that political speech emphasizing empathy for the other will succeed in inducing trust among endlessly distracted viewers?  Or, putting the same point differently, in a society whose hyper-busy citizens increasingly self-segregate (white middle class kids go to religious academies or charter schools or are home schooled with other white middle class kids, rich families live with other rich families behind secure iron gates, etc.), can we plausibly expect that even a radical shift in the nation’s public rhetorical culture might break through the tunnel vision perspectives that come from mainly living in the absence of strangers?

And would a more fully embraced norm of reciprocity be strong enough even to begin to compensate for trumped-up climates of panic (such as the Communist scares), mortal dread (such as that induced and then hyped by the Osama bin Laden attacks), and apocalyptic speaking-in-tongues fundamentalism (such as that apparently shared by Gov. Palin)?  In an age where critical thinking and reasoning skills seem too often undernourished at the very time when the most vexing public policy matters require ever more sophisticated knowledge (climate science, financial market modeling, and so on), is the answer really to be found in rhetorical practices that might only further narrativize public controversy, perpetuating (in the name of reciprocity) the kind of vacuous “I’m running for President for Bobby, who lost his legs because his insurance claim was denied” appeals that seem to give voice to the powerless but serve mainly as a cheap campaign trick to tug at the heart strings?

Of course, to frame the case for a more articulate liberalism within the contours of Allen’s book risks implying a binary choice that is recognizably false to students of rhetorical history.  That is, the strong case Allen makes for citizenship as (intensive) friendship, as opposed to citizenship as litigation or winner-take-all debating society, may seem to stand opposed to a contrary citizenship based on empathy, but that framing implies an either/or logic foreign to a broadly humanistic rhetorical tradition that has always seen a place among the practices of persuasion for appeals both to rationality (logos) and to reciprocity/empathy (pathos).  This point leads to a very modest quibble with Prof. Murphy’s strategy of evidencing the claim for rhetorical reciprocity solely through a textual analysis of these two significant addresses.  For me it is telling that Kennedy delivered these speeches to academic audiences, before groups of students and professors for whom appeals to the shared human condition would typically be heard as supplemental to the critical-rational norms of university scholarship.  One might alternatively read Kennedy’s addresses less as posing a radically alternative liberal rhetorical practice than as offering a simple (but nonetheless effective) supplement to the norms of public deliberation that would have been familiar to the professors and graduates within earshot.

It is certainly the case that practices of public engagement that obsess solely over consensus formation and critical-rational argument will fail to redeem the promise of an authentically emancipatory liberalism, and it is right to criticize such an approach for making unrealistic demands on a frazzled and distracted citizenry.  But I wonder whether appeals grounded in empathetic reciprocity, especially if offered as fast decisional heuristics for viewers too distracted to explore an issue in depth, will fare any better.  I’m skeptical in part because, although the demands of rhetorical education are much higher when true deliberation is the goal, the possible payoff is greater too, for it may be easier finally to inoculate audiences against flawed reasoning than against endlessly nurtured and corrosive cynicism.
