Amateur Humanist

Start here

Interpreting ossuary boxes

Roughly seven years ago the discovery of a 2,000-year-old bone box (or ossuary) engraved with the words “James, son of Joseph, brother of Jesus” was announced, setting in motion a media, scholarly, and now judicial frenzy.  There is not much doubt that the 20-inch-long box is about the right age to date from the period when Jesus lived; the controversy has to do with whether the inscription was added later.  The editor of the Biblical Archaeology Review (BAR first headlined the find in 2002 in an essay written by the Sorbonne scholar André Lemaire) has written a book defending the authenticity of the find, which he says makes this one of the greatest archeological finds of all time, since it would be the only contemporaneous evidence that Jesus lived and that the New Testament naming of his (step-)father and brother is accurate.  By contrast, Nina Burleigh has a new book out (Unholy Business:  A True Tale of Faith, Greed and Forgery in the Holy Land, Harper Collins) arguing the whole thing is, as the title implies, a gigantic hoax.

The antiquities collector who sprang the find on the world is Oded Golan, who says he bought the box from an Arab antiquities dealer whose name he cannot remember.  An investigation was subsequently undertaken by the Israel Antiquities Authority, which pronounced the inscriptions a fraud (its Final Report is available on its main website); soon thereafter Golan and three others were arrested and, for nearly four years now, have been on trial for taking genuine historical artifacts and adding fake lettering in a scheme to make them massively more valuable.  Golan denies the charges.

The case is obviously complicated, and pretty interesting.  Golan is also accused of faking a tablet he claims came from the First Temple, the temple of Solomon.  The ossuary, if confirmed, might rock the world of Christian scholarship (more on that in a moment); the Jehoash tablet, if confirmed, might rock the world of Judaism by proving the existence of Solomon’s Temple on the historically contested Temple Mount, home today to the Al-Aqsa Mosque.

A lot of the skepticism derives from the fact that the finds just seem too good to be true.  The tablet contains sixteen full lines of text, when similar finds from the period are lucky to include a smattering of textual fragments.  Burleigh notes that when the authorities searched Golan’s house, they found little baggies of ancient dirt and charcoal, along with the carving tools one would use to artificially age an object.  During one search, as the Toronto Star reported it, “the James Ossuary was found sitting atop a disused toilet, an odd place, police felt, for a box purported to have once contained the DNA of Jesus’ family.”

The Israel Antiquities Authority sees the case as open and shut.  While some have argued that scientifically valid tests of the stone patina verify the authenticity of the engraved lettering, the panel of experts convened by the IAA judged the inscription a fraud.  Their argument was based in part on a finding that the inscription cut through the ancient patina (implying the lettering was of recent origin).  Parts of the inscription, they argued, had been recently baked on; in that more recently applied patina (covering the part of the inscription that seems to connect the box to someone named Jesus), they found trace elements that would not have existed in ancient Jerusalem but are found today in chemically treated tap water.

But under intensive cross-examination at trial, the case has weakened – one expert from Germany said the IAA had contaminated the key evidence, and another (Ada Yardeni) said she would leave her profession if the ossuary turned out to be a fake.  Opponents of the IAA conclusions argue that its objectivity cannot be trusted given the agency’s strong opposition to artifacts brought to light via the commercial antiquities trade.  The testimony has been so conflicted that two months ago the judge actually suggested the prosecution drop the charges against Golan, saying it seemed unlikely to him that a conviction could be achieved (which in turn led Hershel Shanks, the BAR editor, to declare in a report that the find had been “vindicated” – writing this month that the “forgery case collapses”).  Burleigh is frustrated because a possible key witness is an Egyptian who says he used to forge for Golan; but Egypt won’t extradite the man, he doesn’t seem interested in testifying, and so his story likely won’t be heard.  Defenders of the box’s authenticity argue Burleigh is just trying to sell her book, and the book’s thesis blows up if the find proves genuine (and so, they insinuate, she’ll say anything to discredit it).

The whole thing got even wilder earlier this year when a documentary film produced by James Cameron (yes, the Titanic guy) was released.  Directed by Simcha Jacobovici, The Lost Tomb of Jesus has by now screened around the world (Jacobovici has also co-authored a book on the subject called The Jesus Tomb, and the documentary aired under the title The Jesus Family Tomb on the Discovery Channel); it argues that the James ossuary and others found nearby establish (at a high level of statistical probability, the filmmakers say) that what had been found was the final burial ground of Jesus’ family.  The statistical part is interesting – the expert quoted in the film did calculations given a series of contingencies laid out by the film’s director.  The statistician is credible (Andrey Feuerverger, of the University of Toronto) and the calculations have been judged serious and methodologically sophisticated in a peer-reviewed forum in a leading statistics journal, but the original parameters are highly disputed (especially given how common the names Mary, Jesus, Joseph, and James were at the time).
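To see why those starting parameters matter so much, here is a minimal toy sketch (in Python) of the kind of back-of-the-envelope arithmetic involved.  The name frequencies and tomb count below are made-up placeholders, not Feuerverger’s actual inputs, and his published analysis is far more sophisticated; the point is only to show how the final number is driven almost entirely by the assumptions fed into it.

# Toy illustration only -- NOT Feuerverger's model or his parameters.
# Assumed (hypothetical) frequencies of the relevant names among
# men and women of 1st-century Jerusalem:
name_freq = {
    "Yeshua (Jesus)": 0.04,
    "Yosef (Joseph)": 0.09,
    "Mariamne (Mary)": 0.25,
    "Yehuda (Judah)": 0.07,
}

# Naive probability that one random family tomb would display this
# particular cluster of names, treating the names as independent draws:
p_cluster = 1.0
for freq in name_freq.values():
    p_cluster *= freq

# Hypothetical number of family tombs from the period that could have
# been candidates:
n_tombs = 1000

# Expected number of tombs showing the cluster purely by chance:
expected_matches = p_cluster * n_tombs
print(f"P(cluster in one tomb) = {p_cluster:.5f}")
print(f"Expected chance matches among {n_tombs} tombs = {expected_matches:.2f}")

Change any one of those assumed frequencies, or the decision about which inscriptions count as matches (is “Mariamne” really Mary Magdalene?), and the bottom line swings by orders of magnitude – which is why the critics quoted below go after the parameters rather than the arithmetic.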

Stephen Pfann, from the University of the Holy Land, isn’t buying it:  “What database serves as the basis for establishing the probability of this claim?  There are no surviving genealogies or records of family names in Judea and Galilee to make any statement concerning frequency of various personal names in families there.”  Joe Zias, former curator of archeology at the Rockefeller Museum in Jerusalem, quoted in a March 2007 Newsweek article, was even blunter: “Simcha has no credibility whatsoever.  He’s pimping off the Bible…  Projects like these make a mockery of the archeological profession.”

Smart people got involved in the film (among them Princeton’s James Charlesworth and the University of North Carolina at Charlotte’s James Tabor), but the film still reaches pretty far.  Based on a fourth ossuary from the same tomb (which some now aim to turn into a mega-tourist site), the filmmakers (here I quote a summary by David Horovitz in the Jerusalem Post):

 …point to Ossuary 701… inscribed “Mariamne,” who they say is identified as Mary Magdalene in the 4th century text, The Acts of Philip.  And since Mary Magdalene is in the Jesus family tomb, and ultra-modern testing has established, astoundingly, that her bone-box and Jesus’ contained DNA of non-blood relatives, she must have been Jesus’ partner, they reason.  And since there’s a “Judah son of Jesus” in the tomb too (Ossuary 702) they dare to suggest he was most likely their son.  

Why, it’s The Da Vinci Code all over again!  Burleigh half-jokingly predicts we’ll soon see Solomon’s crown and Abraham’s sandals appearing on the antiquities market.

The case, beyond its intrinsic interest, has implications for how knowledge is created, distorted, and popularized.  Some believers eager for evidence confirming their faith prove gullible to media mythmakers who popularize (and sometimes grotesquely distort) the scientific basis for their claims.  And the scientists get hauled into courts, where the standards of evidence vary dramatically from the tests of the laboratory or the peer-review publication process.  Two sides get ginned up, science goes on trial, and (as Burleigh puts it) “the subjective underbelly of the science is… exposed…, big time” (qtd. by Laidlaw, Toronto Star, 11/4/08).  In cases of ambiguity, either fraud is perpetuated or doubt is cast on potentially astonishing discoveries.  The debate rages on forever, creating cottage industries of scholarly blood feud.  It is this very cycle that accounts for the fact that Holy Family tombs have now been “authenticated” (as the Newsweek report put it) beneath the Dormition Abbey in Jerusalem and at another site in Ephesus (the Catholic Church says Mary was buried in both places), at the rock on which the Church of the Holy Sepulchre in Jerusalem was erected (Constantine said that was where Jesus was laid to rest), and at a tomb in Safed (where last year Tabor said he found a Jesus tomb).

Stay tuned.  The Golan trial gets going again later this month.

SOURCES:  “’Jesus box’ may not be a fake after all,” Daily Mail (London), 30 October 2008, pg. 11; Stuart Laidlaw, “Forgery of antiquities is big business,” Toronto Star, 4 November 2008, pg. L01; David Horovitz, “Giving ‘Jesus’ the silent treatment,” Jerusalem Post, 2 March 2007, pg. 24; Nina Burleigh, “Faith and fraud,” Los Angeles Times, 29 November 2008, pg. A21; “Forgery case collapses,” Biblical Archaeology Review, January/February 2009, pgs. 12-13; Lisa Miller and Joanna Chen, “Raiders of the lost tomb,” Newsweek, 5 March 2007, pg. 60; Nicole Gaouette, “What ‘Jesus hoax’ could mean for Mideast antiquities,” Christian Science Monitor, 19 June 2003, pg. 7.

Publishing the papers of the U.S. founders

More than a half century ago, the Congress committed to producing definitive editions of the papers of the American founders – Alexander Hamilton, John Adams, George Washington, James Madison, Thomas Jefferson, and Benjamin Franklin in particular.  The first volume (which happened to be volume one of the Jefferson papers) was published in 1950, while Harry Truman was president.  Since then only the Hamilton papers have been completed.  As Senator Patrick Leahy (D-VT) put it in congressional hearings held last February:

According to the National Historic Publication and Records Commission [NHPRC], the papers of Thomas Jefferson will not be completed until 2025, the Washington papers in 2023, the papers of Franklin and Madison in 2030, and the Adams papers in 2050.  That is a hundred years after the projects began.  We spent nearly $30 million in Federal taxpayer dollars on these projects, and it is estimated another $60 million in combined public and private money is going in here.  One volume of the Hamilton papers costs $180.  The price for the complete 26-volume set of the papers is around $2,600.  So… only a few libraries [have] one volume of the papers, and only six percent [have] more than one volume.

The challenge, of course, is that everyone wants these collections, which have often been described as American Scripture, to be academically accurate, definitively comprehensive, and available yesterday.  But the imperatives of accuracy and speed work at cross purposes.  Some sense of why it takes so long to pull together and confirm the impossibly numerous details was conveyed in a story told by the historian David McCullough, who testified at the hearings.  McCullough, now at work on a Jefferson project, wanted to know the exact contents of the eighty or so crates Jefferson shipped back to Virginia while he was doing diplomatic work in France, information he rightly felt might convey some sense of Jefferson’s thinking.  The answer was to be found in volume 18 of the published papers, “the whole sum total in a footnote that runs nearly six pages in small type.”  McCullough has proposed that the national investment in the work of editing be doubled, so that the papers can be published more speedily but with no loss of historical quality.

The complications of doing this work are legion.  The papers of contemporary presidents are routinely collected and published soon after administrations end, but it wasn’t until 1934 and the founding of the National Historical Publications Commission, the precursor to today’s NHPRC, that a serious effort was made to comprehensively collect the 18th-century documentation, much of it scattered in private collections.  Although 216 volumes have now been published and praised, frustration with the anticipated 2049 completion date has resulted in a drumbeat of criticism.  Private funding has been mobilized (the Pew Charitable Trusts was the main original funder and has been persistent in directing funds over the years, including a failed 2000 challenge grant of $10 million – more on that soon), and the pace of publication is accelerating, but these final deadlines remain far off.

Rebecca Rimel, president of Pew, argues that there has been too little accountability for funds already spent – “there has never been a full accounting of the Founding Fathers Project.  There has been a lack of performance metrics” able to measure progress over time, she says (11).  Pew has a special reason for frustration because it made the funding it coordinated contingent on production of such information, and it says that information has never been forthcoming.  The criticism was reiterated in a more particular way by Deanna Marcum of the Library of Congress, who expressed the concern that the university projects are spending too much of the funding to float graduate student stipends and connected graduate programs, sometimes at the expense of faster methods of completion (37).  Stanley Katz has responded to this critique by noting that the expenditures of the projects are held tightly accountable to the reporting processes of the NHPRC and the NEH, in ways no different from any other funded project supported by those agencies.

The scholarly challenges of doing this work are also enormous.  To assure that consistently high standards of annotation are used in all the collections, very complex protocols of verification and citation are in place.  When one hears that a given project may “only” be producing one or two new volumes a year, it is easy to forget that each of these volumes may run to 800 pages with a large number of small-print footnotes, and that the Washington papers alone run to 27,347 pages.  Ralph Ketcham, an emeritus historian at Syracuse University who has spent his entire career on these projects (first working on Franklin and now on Madison), noted that the longevity of many of the Founders adds additional challenges – “It’s not surprising,” he noted, “that Alexander Hamilton’s papers are the only ones that have been completed.  The chief editor of the Hamilton papers, Harold Syrett, emphasized long ago that he thought he might dedicate his volumes to Aaron Burr, who made completion of the task possible” (14).  (The joke, of course, is that Burr cut short both Hamilton’s life and his paper trail in their 1804 duel.)  Sometimes this longevity results in vast collections of material – if the microfilmed materials connected to the Adams papers were stretched out (the collection includes the presidential papers but also the materials produced by Henry and Charles Francis Adams), they would extend more than five miles (McCullough, pg. 20).  The actual papers, when not in the custodial care of the Library of Congress, have to be transcribed and proofread on-site at collections often unwilling to let them physically travel.  To take just one example, the Jefferson papers are geographically dispersed across more than 100 repositories worldwide (Katz, pg. 18).

Fundraising has always been a challenge despite recent Congressional support.  The projects were intended from the outset to be funded privately, although public funds have also been allocated (the National Endowment for the Humanities started providing project grants in 1994).  Stanley Katz, a Princeton professor and former president of the Organization of American Historians, chairs the NHPRC’s fundraising operation, whose major purpose is to raise money for all the Founders projects and thereby free the scholars at work on annotation from that burden; although the organization has raised millions, many millions more are needed.  Last year’s Bush budget proposal recommended zeroing out the NHPRC altogether, though federal funding was restored after considerable lobbying.  And the story of the ultimately failed Pew matching grant, which imposed a probably impossible challenge, is also instructive:  Pew (this according to Katz, pg. 28) gave Founding Fathers Papers, Inc., nine months to come up with the requisite 3-to-1, $30 million match.  When they couldn’t raise that much money so quickly, the Pew match was withdrawn.  The model of creating so-called “wasting funds” (large endowments designed to spend down to zero with the completion of a project) makes sense (the strategy was used to complete the Woodrow Wilson papers and is a solution to the threats posed by funding uncertainty), and the Pew impulse to put tight timeframes on creating such funds also makes sense.  But overly optimistic, too-fast timetables can produce wasted effort and, in the end, funding failure.

Katz has also warned against the temptation of thinking the projects can simply be scaled up to speed publication:  “These are rather extraordinary works of scholarship.  This is a craft skill, this is not an industrial skill.  It can’t be scaled up in the way that industrial skills can” (12).  Progress has been expedited by splitting up projects so that different parts can be worked on simultaneously; this is the strategy now in use with the Jefferson and Madison papers.  But because this is already the case for most of the series in process, the marginal possibilities for accelerating production are likely not as great as one might imagine.

A common refrain is to call attention to the presumed absurdity of continuing the commitment to expensive hard-copy printing, when many imagine the papers could be scanned, put up on the World Wide Web, and annotated, perhaps through the collaborative, Wikipedia-style work of a preselected group of scholars.  In fact, this is already well underway, though the new commitments add major new work for existing teams.  Allen Weinstein, the U.S. Archivist, has committed to online dissemination, and digital commitments go back all the way to 1988, when agreements were made with the Packard Humanities Institute.  Packard continues to plug away, along with the University of Virginia Press (whose electronic imprint is called Rotunda).  The University of Virginia work has also received major support from the Mellon Foundation.  Rotunda, which is receiving no public funds for its work (31), has already posted the papers of Washington and Dolley Madison, with the Adams, Jefferson, Ratification, and James Madison papers slated for online publication by the end of 2009.

But that solution, as anyone who has struggled to put up a respectable website knows, is a lot more complicated than it may seem.  For one thing, unlike the recent NEH initiative to digitize American historical newspapers, which can be electronically scanned, the handwritten papers of the founders have to be keyed in and verified one at a time, an exceptionally labor-intensive process.  For another, the publication arrangements made with major university presses make it a challenge to place unannotated material on a website, since doing so would seriously undercut the investments those presses have made in anticipation of a return when the volumes are published.  And nationally sanctioned authoritative editions need to be handled with great care and with sensitivity to the fast-changing environments of digital presentation, so that money is not wasted on formats that will soon be judged unworthy of the material.  Still, the Library of Congress, which has proprietary control over many of the materials, has already begun significant digitization connected with its American Memory Project (e.g., all the Washington, Jefferson, and Madison papers are available online).  Its position is that it can do the job given more money.

And thus the brilliant, historically incomparable annotated editions of the Founders’ papers roll out, one expensive volume at a time, exhaustively researched and in a seemingly never-ending quest for financial support, in the hope that their educational potential for scholars, citizens, and students will not be delayed for yet another half century.

SOURCE:  The Founding Fathers’ Papers:  Ensuring Public Access to Our National Treasures, Hearings before the Senate Judiciary Committee, S. Hrg. 110-334 (Serial No. J-110-72), 7 February 2008.

When social science is painful

The latest issue of the Annals of the American Academy of Political and Social Science (#621, January 2009) is wholly devoted to the report on the status of black families that Daniel Patrick Moynihan authored in 1965 (read it here), “the most famous piece of social scientific analysis never published” (Massey and Sampson, pg. 6).  The report arose out of Moynihan’s experience in the Kennedy and Johnson administrations working on poverty policy; his small group of underlings included Ralph Nader, working in his first Washington job.  Inspired by Stanley Elkins’ work on slavery (a book of that name argued that slavery set in motion a still-continuing tendency to black economic and social dependency), Moynihan’s group examined the ways in which welfare policy was, as he saw it, perpetuating single-parent households headed mainly by women, at the expense of social stability and racial progress.  [In what follows I am relying almost entirely on the full set of essays appearing in the January 2009 AAPSS issue, and the pagination references that follow are to those articles.]

Moynihan was writing in the immediate aftermath of passage of the 1964 Civil Rights Act, and a principal theme of the report is that the eradication of legal segregation would not be enough to assure racial equality, given larger structural forces at work.  Pressures on the black family had produced a state of crisis, a “tangle of pathology” that was reinforcing patterns of African-American poverty, he wrote.  Moynihan’s larger purpose was to recommend massive federal intervention, a goal subverted, unfortunately, by the report’s rhetorical overreaching (e.g., matriarchy in black families was said to prevent black men from fulfilling “the very essence of the male animal from the bantam rooster to the four star general… to strut”).  The solution, in his view, was to be found in a major federal jobs program for African American men.

The report was leaked to the press and was, by and large, immediately condemned, first because it seemed to provide aid and comfort to racists in its emphasis on out-of-wedlock births as a demographic pathology, and second because it seemed to many readers a classic case of “blaming the victim.”  In fact, the term “blaming the victim” may have its genesis in William Ryan’s use of the phrase to critique Moynihan in the Nation.  I think it likely that the cultural salience of these critiques was later reinforced by a memo Moynihan wrote to Richard Nixon advocating the idea that “the issue of race could benefit from a period of ‘benign neglect,’” a locution he came to regret since that one soundbite came to dominate the actual point of the memo, better encapsulated in this sentence:  “We need a period in which Negro progress continues and racial rhetoric fades” (contrary to the impression given by the benign-neglect comment, he was actually trying to be critical of the hot and racially charged rhetoric coming out of Vice President Agnew).  Moynihan’s report proved divisive in the African American community, endorsed on release by Roy Wilkins and Martin Luther King, Jr., but condemned by James Farmer.  By the time the report itself was more widely read, its reception had been distorted by the press frame, and a counter-tradition of research, celebrating the distinctiveness of black community formation, was well underway.

Read by today’s lights, the Moynihan report has in some respects been confirmed, and in other respects its critics have been partly vindicated.  The essays in this special issue offer many defenses.  Douglas Massey (the Princeton sociologist) and Robert Sampson (chair of sociology at Harvard), writing in the introduction (pgs. 7-8), defend the report against the accusation of sexism:

Although references to matriarchy, pathological families, and strutting roosters are jarring to the contemporary ear, we must remember the times and context.  Moynihan was writing in the prefeminist era and producing an internal memo whose purpose was to attract attention to a critical national issue.  While his language is certainly sexist by today’s standards, it was nonetheless successful in getting the attention of one particular male chauvinist, President Johnson, who drew heavily on the Moynihan Report for his celebrated speech at Howard University on June 4.

Ironically, though, the negative reactions to the leaked report (whose reception suffered since the report itself was not publicly circulated, only the critical synopses) led Johnson himself to disavow it, and no major jobs program for black men was forthcoming as part of the Great Society legislation.  Moynihan left government soon afterward and found the national coverage, much of which attacked him as a bigot, scarring and unwarranted given the overall argumentative arc of the report.  Only when serious riots re-erupted in 1968 did jobs get back on the agenda, but the watered-down affirmative action programs that resulted failed to transform the economic scene for racial minorities while proving a galvanizing lightning rod for conservative opponents (Massey and Sampson, 10).  The main policy change relating to black men since then has been sharp increases in rates of incarceration, not rises in employment or economic stability, a phenomenon which is the focus of an essay by Bruce Western (Harvard) and Christopher Wildeman (University of Michigan).

Several of the contributors to the special issue write mainly to insist that Moynihan has been vindicated by history.  His simple thesis – that pressures tending to disemploy the men of a subgroup will in turn fragment families and produce higher incidences of out-of-wedlock birth and divorce, mainly at the expense of women and children – is explicitly defended as having been vindicated by the newest data.  James Q. Wilson writes that the criticism the report received at the time “reflects either an unwillingness to read the report or an unwillingness to think about it in a serious way” (29).  Harry Holzer, an Urban Institute senior fellow, argues that the trends in black male unemployment have only intensified since the 1960s, thereby reaffirming the prescience of Moynihan’s position and strengthening the case for a dramatic federal response (for instance, Holzer argues that without larger educational investments, destructive perceptions of the available working opportunities will themselves become barriers to cultural transformation).  The premise of Ron Haskins’s (Brookings Institution) essay is announced by its title:  “Moynihan Was Right:  Now What?” (281-314).

Others argue that Moynihan’s claims, which relied on the assumption that only traditional family arrangements can suitably anchor a culture, ignore the vitality of alternative family forms that have become more common in the forty years since.  Frank Furstenberg notes that “Moynihan failed to see that the changes taking place in low-income black families were also happening, albeit at a slower pace, among lower-income families more generally” (95).  For instance, rates of single parenting among lower-income blacks have dropped while increasing among lower-income whites.  Linda Burton (Duke) and Belinda Tucker (UCLA) reiterate the criticism that the behavior of young women of color should not be pathologized, but is better understood as a set of rational responses to the conditions of cultural uncertainty that pervade poorer communities (132-148):  “Unlike what the Moynihan Report suggested, we do not see low-income African American women’s trends in marriage and romantic unions as pathologically out of line with the growing numbers of unmarried women and single mothers across all groups in contemporary American culture.  We are hopeful that the uncertainty that is the foundation of romantic relationships today will reinforce the adaptive skills that have sustained African American women and their families across time” (144).  Kathryn Edin (Harvard) and her coauthors criticize Moynihan’s work for diverting research away from actual attention to the conditions of black fatherhood, which in turn has meant that so-called “hit and run” fathers could be criticized in ways far out of proportion to their actual incidence in urban populations (149-177).

The lessons the AAPSS commentators draw from all this for the practice of academic research are interesting.  One, drawn by Massey, relates to the “chilling effect on social science over the next two decades [caused by the Moynihan report and its reception in the media].  Sociologists avoided studying controversial issues related to race, culture, and intelligence, and those who insisted on investigating such unpopular notions generally encountered resistance and ostracism” (qtd. from a 1995 review in Massey and Sampson, 12).  Because of this, and the counter-tendency among liberal/progressive scholars to celebrate single parenting and applaud the resilience of children raised in single-parent households, conservatives were given an ideological opening to harp on media reports about welfare fraud, drug usage rates, and violence, and to pathologize black men – an outcome Massey and Sampson argue led to a conservative rhetoric of “moralistic hectoring and cheap sermonizing to individuals (‘Just say no!’).”  Not until William Julius Wilson’s The Truly Disadvantaged (1987) did the scholarly tide shift back to a publicly articulated case for social interventions more in tune with Moynihan’s original proposals – writing in the symposium, Wilson agrees with that judgment and traces the history of what he argues has been a social-science abandonment of structural explanations for the emergence of poverty cultures.  The good news is arguably that “social scientists have never been in such a good position to document and analyze various elements in the ‘tangle of pathology’ he hypothesized” (Massey and Sampson, pg. 19).

The history of the report also calls attention to the limits of government action, a question with which Moynihan is said to have struggled for his entire career in public service.  Even accepting the critiques of family disintegration leaves one to ask what role the government might play in stabilizing family formations, a question now controversial on many fronts.  James Q. Wilson notes that welfare reform is more likely to shift patterns of work than patterns of family, since, e.g.,  bureaucrats can more reasonably ask welfare recipients to apply for a job than for a marriage license (32-33).  Moynihan’s answer was that the government’s best chances were to provide indirect inducements to family formation, mainly in the form of income guarantees (of the sort finally enacted in the Earned Income Tax Credit).  But asked at the end of his career about the role of government, Moynihan replied:  “If you think a government program can restore marriage, you know more about government than I do” (qtd. in Wilson, 33).

Moynihan was an intensely interesting intellectual who thrived, despite his peculiarities, in the United States Senate (four terms from New York before retiring and blessing Hillary Clinton’s run for his seat), as he had thrived earlier as Nixon’s ambassador to India and Ford’s representative at the United Nations.  At his death in 2003, a tribute in Time magazine said that “Moynihan stood out because of his insistence on intellectual honesty and his unwillingness to walk away from a looming debate, no matter how messy it promised to be.  Moynihan offered challenging, groundbreaking – sometimes even successful – solutions to perennial public policy dilemmas, including welfare and racism.  This is the sort of intellectual stubbornness that rarely makes an appearance in Washington today” (Jessica Reaves, Time, March 27, 2003).  His willingness to defend his views even when deeply unpopular gave him a thick skin and the discipline to write big books during Senate recesses while his colleagues were fundraising.

Moynihan’s intellectualism often put him at odds with Democratic orthodoxy, and sometimes maybe on the wrong side of an issue – he opposed the Clinton effort to produce a national health insurance system, publicly opposed partial-birth abortion (“too close to infanticide”), and was famously complicit in pushing the American party line at the United Nations, a fact that has been much criticized as enabling the slaughter of perhaps 200,000 victims killed in the aftermath of Indonesia’s takeover of East Timor.  But he also held a range of positions that reassured his mainly liberal and working-class base:  he opposed the death penalty, the Defense of Marriage Act, and NAFTA, and was a famous champion of reducing the government’s proclivity to classify everything as top secret.

But Daniel Patrick Moynihan will be forever linked to his first and most (in)famous foray into the nation’s conversation on race, which simultaneously revealed the possibilities for thoughtful social science to shape public policy and the risks of framing such research in language designed to make it dramatic and attention-getting amid a glutted sea of white papers and task force reports that typically come and go without any serious notice.

Neil Armstrong’s sublime silence

Over the holiday I had a chance to watch Ron Howard’s elegant documentary about the US-USSR race to the moon, a film that interviews nearly all the surviving astronauts who walked on the moon.  All, that is, but Neil Armstrong, the very first human being to set foot on the lunar surface.  If human beings are still around in 5000 years, and barring a catastrophic erasure of human history, Neil Armstrong’s name will still be known, his serendipitous selection to be the first astronaut to step outside the lunar module at 2:56 UTC on July 21, 1969, will still be celebrated as an astonishing feat of corporate (by which I simply mean massively collective) scientific enterprise, and the one line first spoken from the moon’s surface – “That’s one small step for [a] man, one giant leap for mankind” – will still be recited.  Since more than two-thirds of the world’s current population had not yet been born in 1969, perhaps my thought is a naive one; I hope not.

Armstrong has been accused of being a recluse (historian Douglas Brinkley famously described him as “our nation’s most bashful Galahad”), but that descriptor doesn’t quite work.  After all, Armstrong, now 78 years old, followed up his service to NASA by doing a USO tour with Bob Hope and then a 45-day “Giant Leap” tour that included stops in Soviet Russia.  For thirteen months he served as NASA’s Deputy Associate Administrator for Aeronautics, and then taught at the University of Cincinnati for eight years.  More recently he served as a technical consultant on two panels convened to report on space disasters (in the aftermath of the Apollo 13 and Challenger accidents; Armstrong vice-chaired the Rogers Commission investigating the latter).  Armstrong has spoken selectively at commemorative events, including at a White House ceremony recalling the 25th anniversary of the moon walk, at a ceremony marking the 50th anniversary of NASA just a couple months ago, and at the 2007 opening of a new engineering building named after him at Purdue University (his alma mater).

So, no, Neil Armstrong is not a recluse in the sense we typically ascribe to monks or the painfully shy.  He is willing to be interviewed (he does seem to be tough on his own performances, which may explain some of his selectivity in accepting offers – after a 60 Minutes profile in which he participated, he gave himself a C-minus).  He gives speeches.  He has been happy to offer commentary on public policy subjects relating to outer space.  But what he has refused to do is endlessly reflect on what he did that July day.  And I admire him for this, not because others who have been forthcoming and talkative about the experience are to be criticized – their stories are compelling and their histories worth recalling and Aldrin and Lovell and the others have been important ambassadors and salesmen for space exploration – but because what Armstrong did, and the event in which he so memorably participated, would be diminished by more talk.

The recognition of this fact is the brilliance of the one line he so famously spoke, which remains a masterfully succinct response to a world historical moment.  Speech was required – the first man to step on the moon had to say something, after all – but too much yammering would have undermined the collective majesty of the moment, and excessive talk after the fact would have done the same.  Can you imagine a thousand years from now school children watching hours upon hours of the alternative, Neil Armstrong in a hundred oral history interviews?  Were you sweating?  Did you burp in your space helmet?  Were your space boots chafing?  As you jumped off the last step did you think you would be swallowed up?  Did you get verklempt?  How do people pee in space?  How did the event compare with taking your marriage vows?  To whom were you dedicating the experience?  Did you hear God’s voice?  If you were, in that moment, a tree, what kind of tree would you have been?

Ugh.  No thank you.  I don’t want to know the infinite and microscopic details and I don’t think they matter one whit.  The deeply powerful impression created by watching that grainy black-and-white event on a small television, for me as a child three days short of my eighth birthday, remains indelible – pay attention!  watch this!  look out the window – do you see the moon? – those people on the television are actually up there – one small step…  It was late at night (close to 11:00 p.m. Eastern time in the United States) and I was getting tired and grumpy – why weren’t we going home yet? – but when the moment came, I and the other 450 million people estimated to have watched the landing live (some estimates range as high as one billion) sat completely absorbed by what we were seeing, and held our breaths later to see whether the landing vehicle would manage to lift off from the lunar surface and escape the moon’s gravity.

And Neil Armstrong, at some deeply personal level, understands all this in a way that may be best analogized to the disappearance of musicians and celebrities who leave the stage and never reappear.  In the television context, think Johnny Carson or Lucille Ball, who knew they could only subvert the quality of their life’s work in public by agreeing to appear in “comeback specials” and all the rest.  (This is why DVDs with nonstop director’s commentary are so often, in my view, a terrible mistake – let the work make its own impression.)  And so Armstrong, since 1994 or so, has stopped signing autographs (he found out they were simply being sold for profit and decided he didn’t want to be involved, paradoxically of course only increasing their value).  He also hasn’t been arrested shoplifting or been accused of harassment or even, so far as I know, been caught speeding, any of which would also have diminished his most publicly visible moment of achievement in the space program.

In the words of one writer, “Neil Armstrong achieved one of the greatest goals in human spaceflight but then did not go on to proselytize the faith…  For True Believers in the Cause, this is apostasy, and they resent him for it.”  Thomas Mallon, writing in the New Yorker, seemed to criticize Armstrong (the implicit assumption was that he’s too litigious) because he sued his barber – turns out the guy was cutting his hair and then selling it online.  I think Armstrong was right:  the hair thing was cheap and exploitive and diminished the work.

When Armstrong agreed to participate in the writing of a biography, which appeared in 2006 (James Hansen, First Man: The Life of Neil A. Armstrong, Simon and Schuster), there was a lot of speculation that at last its subject was prepared to go onto the couch, if only to debunk the stories that implied there was something creepy about his reluctance to talk all the time to reporters.  In reading the book I am struck by the good choice Armstrong made in settling on a collaborator – Hansen’s book is saturated with information (almost four hundred pages before we even get to Apollo 11), but the information is crisply organized.  Hansen refuses the temptation to plant thoughts, speculate endlessly about feelings, and so on, and if he pressed Armstrong to undergo psychoanalysis, that doesn’t come across in the narrative.  Some have criticized the short final section (covering the years after the moon landing) as less interesting, and others have found fault with the fact that the book reveals Armstrong’s occasional interpersonal coldness and the toll his career took on his family life.  Only in reading that Armstrong didn’t take souvenirs on the mission for his two sons did I start to think this was too much information.  But I found myself wondering if his notorious interpersonal coolness is also the reason he made a perfect astronaut – ice in the veins, cool under pressure, and all that.

Neil Armstrong is no Superman.  He was one of a thousand military men who might have served as the public face of the mammoth and expensive engineering triumph that achieved spaceflight, and had he come down with the flu it would probably be Buzz Aldrin we most remember today.  And so my point is not to celebrate the relative silence because it creates a mythology.  To the contrary, what I admire about Armstrong’s long refusal to be daily feted and interrogated about July 21 is that as he recedes, the work is allowed to dominate the scene.  In the eloquence of his one first sentence spoken from the lunar surface, and in his silence on that experience since, the sublime accomplishment of this supreme national effort is best recollected.

Oh, and one other thing:  Armstrong donated the royalties from the biography to Purdue, to be used to build a space program archive there.

Perfect.

The death of the literary critic

I’ve just finished Rónán McDonald’s little book, The Death of the Critic (London: Continuum, 2007), the broad point of which is to decry the diminution of the literary critical role in society formerly occupied by well-trained readers like Lionel Trilling, Matthew Arnold, and F.R. Leavis, and by writers who also produced criticism, like T.S. Eliot, Virginia Woolf, and Susan Sontag.  Criticism has been democratized by the blogosphere, mostly in ways McDonald sees as insidious; as he puts it, We Are All Critics Now (4).  And academic attention to literature, he argues, has been dominated by cultural studies perspectives that mostly insist on reading novels as symptoms of capitalism or patriarchy or racism, and in ways that have made criticism less linguistically accessible to a wider readership.  To those who might counter that criticism is more ubiquitous than ever, and who might immediately think of the New York and London book review publications and others, McDonald replies, but “how many books of literary criticism have made a substantial public impression in the last twenty years?”  “Academics in other subjects with a gift for popularizing their subject, like Richard Dawkins and Stephen Hawking, Simon Schama and A.C. Grayling, command large non-academic audiences and enjoy high media profiles.  However, there are very few literary critics who take on this role for English” (3).

McDonald sidesteps a lot of the traps characterizing other work critiquing academic literary studies.  He is not defending a return to a Great Books Canon or to the pure celebration of high culture.  His review of the historical debates over the value of criticism makes clear that he grasps the complexities in the longer tradition.  He is not hostile to Theory, but rather sees it as having made important contributions that can now be superseded, not because theory should be rejected but because its central insights have been mainly and rightly accepted.  McDonald sees the value in the proliferation of critical methods (genre, psychoanalytic, Marxist, formalist, semiotic, New Historicist) even as he argues that this expansion was mainly driven by the demands of 20th-century university culture to devise rigorous quasi-scientific perspectives.  He does not by and large (a notable exception is at pgs. 127-129) disparage cultural studies either substantively or by painting with too broad a brush (in fact, he spends some time defending Raymond Williams as doing the very kind of theoretically informed but also interesting work he would like to see more of).  And he is not finally a doomsayer about the culture; in fact the book closes with a sense of optimism that the attention to literary aesthetics he desires is making a sort of comeback.

Having said all this, McDonald still takes a pretty hard line, especially with respect to the culture-war debates of the last half century, which in his view too readily dispatched even the merits of a long tradition of debate over the rightful role of criticism.  He thinks Matthew Arnold has been cartooned, at the expense of his insights about the way an intelligent culture of criticism can produce more interesting art.  Arnold’s defense of critical “disinterestedness,” he notes, has been almost absurdly distorted.  The quote most often used to beat Arnold over the head (that criticism’s role is “to make the best that has been thought and known in the world everywhere,” a sentiment that reads like pure colonialism) is usually cited without its introduction, which says that true culture “does not try to reach down to the level of inferior classes but rather seeks to do away with classes; to make the best…”  The correction obviously doesn’t let Arnold off the hook, but read against the grain of the broader prejudices of his time, his perspective elaborates a more compelling vision for criticism and the capacity of art to undo elitism than a reading that sees him as simply advocating snobbery.

The case against the blogs and against the kind of “thumbs up” criticism that characterizes so much newspaper book reviewing and the Oprah Book Club is, for McDonald, grounded in his recognition that the institutional practice of criticism arose under peculiar circumstances that are now being transformed.  As capitalism developed (here he follows Habermas’ claims about the short-lived emergence of a bourgeois public sphere) and industrialization created new middle classes with leisure time and an interest in cultural elevation, a demand was created for sophisticated taste makers.  There is a tendency today to forget how radically democratic these impulses were:  “this early development was an intellectual movement from below, a way of appropriating and redistributing cultural authority from the aristocracy and land-owning classes” (54).

What is at risk today, in McDonald’s perspective, is the essential role critics can play in challenging popular preconceptions and making the world safe for difficult artworks, as they defend or enact idiosyncratic perspectives and nudge or argue audiences toward controversial but potentially essential ways of seeing.  This role requires critics who are educated to the possibilities of literary and artistic generation and who are willing to make and defend evaluative judgments about what art is worthwhile or worthless.  His complaint against the bloggers and academic critics is that they either insist on reading new work through existing prejudices or refuse to make evaluative claims at all, not wanting to seem elitist or be read as disparaging popular culture.  Critical practice has thus been transformed from offering acts of thoughtful judgment into offering acts of clever insight, where the question implicitly answered is not so much “what makes this work aesthetically rich and worth your time?” as “did you notice such-and-such about this novel/TV show/film?”  Skills of observation are thus elevated over skills of interpretation, and the outcomes of critical engagement are more likely to center on how interesting (or not) a text is, at the expense of how engagement with it might better educate its audience.  Taste has trumped judgment, and the demand for books is more than ever driven by the marketing of a dwindling number of titles and the ever-tightening circle of “I saw Ann Coulter on Fox and she was nasty and funny and so I think I’ll buy her nasty and funny new book.”

McDonald does not do enough to specify exactly what sort of criticism he seeks.  He argues for criticism that makes aesthetic judgments and dismisses those who simply connect novels to the broader culture, but he seems to celebrate Virginia Woolf for doing the very thing he dislikes (in fairness to McDonald, he tries to defend Woolf as striking a sensitive balance between these tendencies).  He argues that criticism that takes an evaluative stand will attract readers, but the argument slides around a bit:  at pg. 130, where this claim is articulated, he starts by noting that boring academic writing turns readers off.  Then he says “those critics who examined popular culture alert to its pleasures found the wider public more ready to listen to what they had to say,” though that seems to imply that audiences are best found when one cheerleads (a position I take as antithetical to his larger purposes).  And then he shifts into a case for critics who write “about the value and delights of art” (note how evaluative judgment, which so far had played no part in his account of attracting readers, is now slipped back in).  But it isn’t clear how critics who defend judgments are supposed to attract audiences in a world where enthusiastic reviews are likely to be more contagious than briefs for the defense.

But even if the cure is underspecified, I found it hard not to be persuaded by McDonald’s broader diagnosis, and the case for more fully reconnecting academic and popular cultures.

On student cheating

If you work in education, you likely saw the reports earlier this month relating to a new study on the incidence of high school cheating.  David Crary wrote the Associated Press report I saw, which made me wince:

In the past year, thirty percent of U.S. high school students have stolen from a store and 64 percent have cheated on a test, according to a new large-scale survey suggesting that Americans are too apathetic about ethical standards.  Educators reacting to the findings questioned any suggestion that today’s young people are less honest than previous generations, but several agreed that intensified pressures are prompting many students to cut corners….

Other findings from the survey [conducted by the Los Angeles-based Josephson Institute]:

•  Cheating in school is rampant and getting worse.  Sixty-four percent of students cheated on a test in the past year and 38 percent did so two or more times, up from 60 percent and 35 percent in a 2006 survey.

•  Thirty-six percent said they used the Internet to plagiarize an assignment, up from 33 percent in 2004.

Despite such responses, 93 percent of the students said they were satisfied with their personal ethics and character, and 77 percent affirmed that “when it comes to doing what is right, I am better than most people I know.”

I don’t know the agenda of the Josephson Institute, or whether they even have one, but the survey only reiterates findings well known to educational researchers.  Just one example is a study done in 2001 by McCabe et al., which documented long-term increases in cheating.

In reacting to the most recent reports, and apart from the director of the Josephson Institute, who is quoted as asking about the social costs of this cheating – what is “the implication for the next generation of mortgage brokers?” – everyone else in the article rushes to defend students.  They live in a more competitive environment, kids are under stress, and the temptation is greater (this, believe it or not, was the defense offered by the National Association of Secondary School Principals).  A teacher from Philadelphia is quoted as completely absolving students of all responsibility:  “A lot of people like to blame society’s problems on young people, without recognizing that young people aren’t making the decisions about what’s happening in society.  They’re very easy to scapegoat.”  The fellow from the NASSP added:  “We have to create situations where it’s easy for kids to do the right things.  We need to create classrooms where learning takes on more importance than having the right answer.”  Easy to do the right things?

These perspectives are, I fear, common, and I find them disappointing:  the implied logic is that because there are social pressures toward unethical behavior, and because we cannot attribute 100% of the blame to individual actors, we should therefore wholly absolve individuals of any blame at all.  And in a culture where mid-level wrongdoing lands one in jail but big-deal wrongdoing gets one a book contract and an appearance on Leno where one can make the requisite public apology and be forgiven and move on, criticizing unethical conduct or pointing out how central integrity is to one’s work and life choices ends up sounding puritanical.  Ok, then, consider me a Puritan. [I should note, by the way, that the quotations in this article do not misrepresent the wider university culture; for example, research reports by Keith-Spiegel et al. (1998) and Schneider (1999) found that faculty tend not to actively prevent student misconduct or confront cheating students.]

Cheating, it is true, is certainly a symptom of wider educational dynamics that need to be addressed.  I often hear it asserted that the No Child Left Behind K-12 testing environment means, among other things, that students come to college with less experience at doing serious research and writing papers.  Ignorant of the protocols of writing, they are said to more easily give way to the temptations of online appropriation.  Gerald Graff wrote a persuasive book a couple years back that argues college sets students up to fail – they come to us Clueless in Academe, unable to participate in research generation but held to standards of work we have never taught them.

Some research also suggests that even bright college students continue to suffer the consequences of their high school environments.  A study reported in the Chronicle of Higher Education, done by Mark Engberg (Loyola University-Chicago) and Gregory Wolniak (University of Chicago National Opinion Research Center), using data collected on 2,500 students, found that “those from schools with high levels of violence tended to have lower grades.  Having attended a well-maintained and well-equipped school seemed to offer many freshmen advantages over their peers” (CHE, 11/28/08).  And a related project, done by Serge Herzog (from the University of Arkansas) “found that, even after controlling for differences in background and academic preparation, low-income freshmen tended to post lower grades if their high schools had high levels of violence or disorder.”

As many students arrive at college unprepared, instructors have understandably reacted by reconfiguring their assignments.  Instead of the drop-dead system of midterm, final exam, and big research paper, most classes (at least in my experience at public universities) now offer a wider range of low-impact assignments.  Students now get daily journaling grades, take multiple reading quizzes where low grades can be dropped, are more readily offered extra credit, and so on.  But this dynamic has, I think, contributed to the explosion of cheating.  Too often students see no harm in copying the Amazon book summary (usually taken right off the book jacket) when turning in an annotated bibliography, because they know the bibliography assignment only counts for, say, five percent of the grade – it feels like make-work and so it gets handled that way.  Meanwhile, professors are reluctant to impose the academic death penalty (an F in the course or academic suspension) over a low-stakes project.  So over time students learn they can get credit for work that isn’t their own, and professors live in frustration but feel they can’t keep playing classroom cop.

A lot of attention has been given to the growth of a reported entitlement culture among students – evidence of broader forces at work in the culture, and perhaps also the result of a “customer is always right” mentality that some see at work even in academia – and in an admittedly anecdotal way I have seen evidence of it emerging over the last five years.  As recently as five years ago, when I would meet with students accused of cheating, the main reaction was emotional meltdown – crying, apology, please give me another chance, and so on.  Today the most common reaction is anger – how dare you accuse me of cheating! – and this even in cases where the open-and-shut evidence is lying right there on the table.  To some extent these impressions are confirmed by the broader work done on student cheating, which is partly summarized by McCabe, Trevino, and Butterfield in a 1999 report:

With regard to individual characteristics, results have typically found that underclassmen cheat more than upperclassmen, that male students cheat more than female students, and that students with lower grade point averages cheat more than higher achieving students.  With regard to contextual characteristics, studies have found that cheating is higher among fraternity or sorority members, among students involved in intercollegiate athletics, among students who perceive that their peers cheat and are not penalized, and is lower at institutions that have strong academic honor codes.  (211).

Academic misconduct is a symptom of wider problems in the university culture, and part of the responsibility rests with professors who spend too little time helping students see why the originality of work matters so much to intellectual work.  But even in the context of student hostility in the face of accusations, at some level students know that cheating is wrong.  If they engage in it they are more likely to be lazy or overworked than evil.  But dishonesty is not justified as a shortcut in a scene of overwork, or by the fact that many others are doing it too.  The rationalizations one encounters when it comes to plagiarism collapse even under the simple logics of moral conduct any young child should be picking up on the schoolyard playground.

A final paradox produced by all these factors is that even while cheating skyrockets, at least in my world, professor-reported misconduct remains low.  In my own department of roughly fifty full-time faculty, in a given year only five or so will report academic misconduct of any kind, and this in a system where reporting cheating only starts a professor-controlled process (that is, professors at my university need not fear that reporting minor infractions will set inexorable suspension in motion).  Here again, the research confirms that my experience is not unusual.  Diekhoff et al. report that only three percent of cheaters report having ever been caught; Keith-Spiegel et al. report a faculty survey in which 71% of professors said confronting student cheating was one of the most negative aspects of their job; and a 1994 study by Graham found that only 9% of instructors who caught students cheating had penalized them (all this is summarized in Vandehey et al.).

Many faculty may simply be living in naivete, imagining that their own creativity in coming up with assignments on which cheating is “impossible” has exempted them from the broader trends.  Others suspect cheating but may feel that getting too serious about it is itself unfair, since the most likely outcome is that the obvious cheating novices will get busted after a five-second Google search while the more systematically clever (and thus more objectionable) will still get off scot-free.  It is also possible that professors are astonishingly vigilant but simply choose to handle cheating within their own classrooms.

Blogging on the Chronicle of Higher Education website, Laurie Fendrich (a fine arts professor at Hofstra) argued that college honor systems make less and less sense given the wider erosion of an understanding even of what the term honor means:

In our society, nobody has an obligation to own up to the truth.  Instead, we have an obligation to get as far ahead as possible as long as someone else doesn’t stop us.  In no case does honor apply to resisting temptation – which is precisely what’s called for in order for cheating not to occur under the honor system.  Since honor in America doesn’t exist, we should replace college honor systems with an academic penal code. (We should have a penal code for faculty malfeasances as well, but that’s for another discussion.)  I propose that it be phased in slowly, so that incoming students understand the new rules.  The new rules should be something like this:  The first cheating offense earns the grade WC, for “withdrew from course for cheating,” and the student is required to withdraw from the course.  The grade stays on the student’s transcript until graduation, when a “W” replaces it if there are no further instances of cheating.  A second offense earns another grade of WC, and the two grades remain permanently on the student’s transcript.  The third offense follows the American way:  Three strikes and you’re out.  Expelled for cheating.

I like this idea but am also skeptical that it will solve the wider problem.  Still, I think it would be a step forward.  I wonder whether most professors would be willing to report cheating, since many I talk with are hesitant to take actions that lead to permanent transcript notations.  But maybe I’m wrong – perhaps faculty would favor a system that simply gets the student out of their classes.  I also think Fendrich goes too far in downplaying the role of honor codes on college campuses (the investigation by McCabe, Trevino, and Butterfield I mentioned earlier found strong evidence that the existence of an honor code does make a significant difference in creating a stronger college culture of more honest behavior).  But the actual proposal seems reasonable.

Until we all, students and teachers alike, do more to discuss these issues in our classes, stay vigilantly on the lookout for misconduct that is currently undetected, and make use of the procedures for handling unethical behavior, cheating will persist and likely increase, and the most important opportunities for character formation available in the university environment will be lost.

SOURCES:  Peter Schmidt, “Studies examine major influences on freshmen’s academic success,” Chronicle of Higher Education, 28 November 2008, pg. A21; David Crary, “Lie, cheat and steal? In survey, many high school students admit those misdeeds,” Atlanta Journal Constitution, 1 December 2008, pg. A3; Laurie Fendrich, “The honor code has no honor,” Chronicle of Higher Education (excerpting her post on the Chronicle Review blog), 12 December 2008, pg. B2; Patricia Keith-Spiegel et al., “Why professors ignore cheating:  Opinions of a national sample of psychology instructors,” Ethics and Behavior 8 (1998), 215-227; Alison Schneider, “Why professors don’t do more to stop students who cheat,” Chronicle of Higher Education, 22 January 1999, pg. A8-A10; Donald McCabe et al., “Cheating in academic institutions:  A decade of research,” Ethics and Behavior 11 (2001), 219-232; Donald McCabe, Linda Trevino, and Kenneth Butterfield, “Academic integrity in honor code and non-honor code environments,” Journal of Higher Education 70.2 (1999), 211-234; Michael Vandehey, George Diekhoff, and Emily LaBeff, “College cheating:  A twenty-year follow-up and the addition of an honor code,” Journal of College Student Development 48.4 (2007), 468-480.

Interpreting the nativity accounts

A book written last year by Marcus Borg (a professor of religion at Oregon State) and John Dominic Crossan (whose work on the “historical Jesus” has long been controversial), The First Christmas:  What the Gospels Really Teach About Jesus’s Birth (New York:  Harper Collins, 2007), starts with a premise likely to be rejected by most mainstream Christians.  I’ve been reading it today – appropriately, I received it as a Christmas gift.

What if, ask Borg and Crossan, we set aside for a moment the impulse to read the nativity accounts in the Gospels of Matthew and Luke as either historically true or false, and work to read them either as parables or overtures?  Doing so, they suggest, produces interesting readings that can help explain how first century believers would have understood the birth accounts.  They argue that such an approach is warranted, at least in part, because only the later gospels deal extensively with the birth of Jesus (Mark’s gospel, believed to be the first, includes no account of extraordinary birth, and Paul’s letters, which may predate all the gospels, do not dwell on anything exceptional about his birth).  Thus, “the reason that references to a special birth do not appear in the earliest Christian writings is either because the stories did not yet exist or because they were still in the process of formation.  In either case, these stories are relatively late, not part of the earliest Christian tradition about Jesus” (26).

Reading the birth accounts as parables, Borg and Crossan insist, “does not require denying their factuality.  It simply sets that question aside.  A parabolic approach means, ‘Believe whatever you want about whether the stories are factual – now, let’s talk about what these stories mean’” (35).  And reading them as overtures, where “Matthew 1-2 is a miniature version of the succeeding Matthew 3-28, and Luke 1-2 is a miniature version of Luke 3-24” (38), makes each a “summary, synthesis, metaphor, or symbol of the whole” (39) gospel that follows.

Reading Luke’s account as purposely constructed for certain persuasive ends (as opposed to a diary-like review of day-by-day chronology) reveals it more plainly as an anti-imperial story whose details enact an antithetical narrative set in diametric opposition to stories then circulating about Caesar Augustus as Savior of the World and Son of God and Bringer of Peace.  Every detail situates Jesus-as-not-Caesar.  Every specific feature of Jesus’ miraculous birth is made more spectacular than the mythologized birth of Caesar Augustus then in public circulation.  And the details dwell on the powerless and marginalized – women and shepherds and the poor are given pride of place, but all within a narrative structure that would have been readily recognizable to any Roman/pagan cosmopolitan.  In patterns that continue in Luke’s Acts of the Apostles, the story places the marginalized at the heart of empire and positioned to speak truth to power, challenging Roman rule at every turn in a contrast that makes ever-present the difference between the Roman Way (peace through victory) and the Jesus Way (peace through justice).

The parallels between the world view of the Romans and Judeo-Christian eschatology include the then-common theory that Rome was the fifth of five world-historical empires (following Assyria, Media, Persia, and Macedonia, as recounted in Caius Velleius Paterculus’ Compendium of Roman History, written around 30 CE), and Daniel’s Old Testament description of four empires (Babylonian, Median, Persian, and Macedonian) that would be superseded by a kingdom of God.  Borg/Crossan:  “It is not accurate to distinguish the imperial kingdom of Rome from the eschatological kingdom of God by claiming that one is earthly the other heavenly, one is evil the other holy, or one demonic the other sublime.  That is simply name-calling.  Both come to us with divine credentials for the good of humanity.  There are two alternative transcendental visions.  Empire promises peace through violent force.  Eschaton promises peace through nonviolent justice” (75).

Matthew’s birth account, which barely focuses on Jesus as a character, dwelling instead on Joseph and the wider efforts of Herod, the Roman client king, to murder the child, emerges as a parable of Jesus-as-Moses.  “It would scream to those Jews as it should to us Christians as loudly as a giant newspaper headline:  EVIL RULER SLAUGHTERS MALE INFANTS.  PREDESTINED CHILD ESCAPES” (42).  The pattern fixed in the account of Moses’ birth (elaborated both in Exodus chapters 1 and 2 and in later accounts written by Philo and under the titles Targum Pseudo-Jonathan and the Book of Memoirs) lays out a detailed chronology of royal decree, necessary divorce (marriages were abandoned to avoid the risk that sex would result in the birth of a child condemned to death), divine prophecy, and remarriage.  This in turn lays the predicate for a New Testament account in which Herod commands the death of infants, Joseph is made to contemplate divorce (this time fearing Mary’s infidelity), but the New Moses is born and survives, ironically finding safety in Egypt, the country from which the original Moses had to flee.

Modeled after the Five Books of the Pentateuch, Matthew’s gospel repeats the pattern:  five divine dreams, five scriptural fulfillments, five women in the genealogy, five mentions of Jesus as Messiah, and a subsequent five major discourses delivered by Jesus (starting with the Sermon on the Mount at chapters 5-7, then sermons delivered in chapters 10-11, 13, 18-19, and 24-25).

The genealogies that accompany each account, the discrepancies between which have long provoked theological debate and downright skepticism from non-believers, are also constructed for certain persuasive purposes.  Borg and Crossan “see those genealogies of Jesus in Matthew and Luke as countergenealogies to that of Caesar Augustus” (96).

Borg and Crossan repeatedly insist that their thought experiment is not offered as an exercise in “pointing out ‘contradictions,’ as debunkers of the stories often do.  In their minds, the differences mean the stories are fabrications, made-up tales unworthy of serious attention.  This is not our point at all.  Rather, paying attention to the distinctiveness and details of the nativity stories is how we enter into the possibility of understanding what they meant in the first century” (23-24).  But the extent to which they press their case will still unnerve many believers, especially those persuaded that the two birth accounts can be easily harmonized.  An example is a brief detour that seems to imply a radical retelling of Jesus’ birth as inflected through the Roman ransacking of the Sepphoris region in 4 BCE:

Jesus grew up in Nazareth after 4 BCE, so this is our claim.  The major event in his village’s life was the day the Romans came.  As he grew up toward Luke’s coming of age at twelve, he could not not have heard, again and again and again, about the day of the Romans – who had escaped and who had not, who had lived and who had died.  The Romans were not some distant mythological beings; they were soldiers who had devastated Nazareth’s backyard around the time of his birth.  So this is how we imagine, as close to history as possible, what his actual coming-of-age might have entailed.

One day, when he was old enough, Mary took Jesus up to the top of the Nazareth ridge.  It was springtime, the breeze had cleared the air, and the wildflowers were already everywhere.  Across the valley, Sepphoris gleamed white on its green hill.  “We knew they were coming,” Mary said, “but your father had not come home.  So we waited after the others were gone.  Then we heard the noise, and the earth trembled a little.  We did too, but your father had still not come home.  Finally, we saw the dust and we had to flee, but your father never came home.  I brought you up here today so you will always remember that day we lost him and what little else we had.  We lived, yes, but with these questions.  Why did God not defend those who defended God?  Where was God that day the Romans came?”  [pgs. 77-78]

The account is sure to infuriate, though in explaining away a virgin birth scenario at least Borg and Crossan do not slip into the more explosive accounts offered by the first opponents of Christianity (that Mary was raped by a Roman soldier or that Jesus was otherwise an illegitimate child, both stories insinuated by Celsus in his ancient anti-Christian polemic On the True Doctrine).

Others will rebel against the definitive refusal by Borg and Crossan to entertain the factual possibility of a Roman worldwide census – they categorically rule out that part of the Luke account as wildly improbable (there was a regional census organized by Quirinius, they agree, but the timing is wrong, coming roughly ten years after Jesus’ birth; Joseph wouldn’t have been living in the right region; citizens were not typically required by Rome to return to their birthplaces but were taxed and counted where they lived and worked; and the way the census is made to get Mary and Joseph to Bethlehem doesn’t square with how a Roman census and taxation would have worked).  But, again, for Borg and Crossan the issue of factual (in)accuracy is beside the point.  The real power of the census story is that “Jesus and earliest Christianity are… historically located, imperially dated, and cosmically significant events” (149).

The value of the thought experiment this little book performs lies in the reading of the Christian creed it finally unfolds, a reading I find compelling.  Contrasting Rome and Christianity, the authors note:

The terrible truth is that our world has never established peace through victory.  Victory establishes not peace, but lull.  Thereafter, violence returns once again, and always worse than before.  And it is that escalatory violence that then endangers our world.  The four-week period of Advent before Christmas… are times of penance and life change for Christians…  We [have] suggested that [the Easter-season] Lent was a penance time for having been in the wrong procession and a preparation time for moving over to the right one by Palm Sunday.  That day’s violent procession of the horse-mounted Pilate and his soldiers was contrasted with the nonviolent procession of the donkey-mounted Jesus and his companions.  We asked:  in which procession would we have walked then and in which do we walk now?  We face a similar choice each Christmas…  Do we think that peace on earth comes from Caesar or Christ?  Do we think it comes through violent victory or nonviolent justice?  Advent, like Lent, is about a choice of how to live personally and individually, nationally and internationally. [168]

Or, as they put it in closing the book:  “Both personal and political transformation… require our participation.  God will not change us as individuals without our participation, and God will not change the world without our participation” (242).

Claude Levi-Strauss at 100

On Friday, November 28, Claude Levi-Strauss turned 100, an event that set loose a series of worldwide commemorations.  As one might expect, an intellectual of such enormous influence provoked competing reactions.  In London, the Guardian dismissed Levi-Strauss (“the intricacies of the structural anthropology he propounded now seem dated… [and] he has become the celebrated object of a cult”) while the Independent celebrated him (“his work, after going out of fashion several times, is more alive than ever”), both judgments issued on the same day.  French President Nicolas Sarkozy paid a personal evening visit to the Levi-Strauss apartments, and the museum he inspired in Paris, the Musee du Quai Branly, gave away free admission for a day in his honor (that day 100 intellectuals gave short addresses at the museum or read excerpts from his writings).  ARTE, the French-German cultural TV channel, dedicated the day to Levi-Strauss, playing documentaries and interviews and films centered on his lifework, and the New York Times reported that “centenary celebrations were being held in at least 25 countries.”

Levi-Strauss has not, for obvious reasons, made many public appearances of late.  His last was at the opening of the Quai Branly in 2006; not only did he inspire the museum intellectually, but many of the exhibit objects were donated by him, the accumulation of a lifetime of worldwide travels.  In a 2005 interview with Le Monde, he expressed some pessimism about the planet:  “The world I knew and loved had 2.5 billion people in it.  The terrifying prospect of a population of 9 billion has plunged the human race into a system of self-poisoning.”  In my own field of communication studies, I am not aware that he is widely read or remembered at all, even in seminars on mythology and narrative (two fields in which he made significant contributions), probably an unfortunate byproduct of Jacques Derrida’s sharp attack in two essays that are widely read by rhetorical scholars (“Structure, Sign and Play in the Discourse of the Human Sciences,” in Writing and Difference, Routledge, 1978, and “The Violence of the Letter:  From Levi-Strauss to Rousseau,” in Of Grammatology, Johns Hopkins UP, 1976).

For all I know Levi-Strauss remains must-reading in anthropology, the discipline he did so much to shape as an intellectual giant of the twentieth century.  But his wider absence from the larger humanities (by which I simply mean the extent to which he is read or cited across the disciplines) is, I think, unfortunate.  No intellectual of his longevity and productivity will leave a legacy as pure as the driven snow.  His campaign against admitting women to the Academie Francaise (he argued he was defending long tradition) was wrong and rightly alienating.  And his attempt to systematize the universal laws of mythology took the form of a four-volume work many found off-putting; yet it remains a brilliant and densely rich analysis of the underlying logics of mythological meaning-making.

But the trajectory of structuralism, and in turn of poststructuralism and contemporary French social thought (including the research tradition shaped by Jacques Lacan, who founded his account of the Symbolic on Levi-Strauss’ work on kinship and the gift), cannot be understood without engaging his work – his engagements with Marxist dialectics, Malinowski, Roland Barthes, Jean-Paul Sartre, Paul Ricoeur, and many others who respected his work even when they profoundly disagreed with it.  Lacan’s 1964 seminar on “The Four Fundamental Concepts of Psychoanalysis” virtually begins by raising a Levi-Strauss-inspired question (Lacan wonders whether the idea of the pensée sauvage is itself capacious enough to account for the unconscious as such).  Today it is Foucault who is fondly remembered for pushing back against Sartre’s temporally based dialectical theory, but at the time Levi-Strauss played as significant a role (and his essays, which take on Sartre in part by deconstructing the binary distinction between diachronic and synchronic time, remain models of intellectual engagement).

Levi-Strauss has been a key advocate for a number of important ideas that have now become accepted as among the conventional wisdoms of social theory, and that absent his articulate forcefulness might still have to be fought for today:  the idea that Saussure and Jakobson’s work on language should be brought to bear on questions relating to social structure, the thought that comprehending the relationship of ideas within a culture is more important to intercultural understanding than anthropological tourism, the sense that cultural difference cannot be reduced to the caricature that modern peoples are somehow smarter or wiser than ancient ones or that modern investigators should inevitably disparage the “primitive,” the insight that the relationship between things can matter more than the thing-in-itself, and many more.

But the case for reading Levi-Strauss rests on grounds that go beyond his interesting biography (including his sojourn in exile from the Nazis at the New School for Social Research in New York and his public longevity as a national intellectual upon his return to France), his historical role in old philosophical disputes, or even the sheer eloquence of his writing (Tristes tropiques, written in 1955, remains a lovely piece of work and a cleverly structured narrative argument).  It is, I think, a mistake to dismiss Levi-Strauss’ work as presuming to offer a science of myth – the best point of entry here is the set of lectures he delivered in English for the Canadian Broadcasting Corporation in the late 1970s (published as Myth and Meaning in 1978), where his overview reveals, as if it were necessary, the layers of ambiguity and interpretation that always protected Levi-Strauss’ work from easy reductionism.

And the exchanges with Derrida and Sartre merit a return as well.  There is an impulse, insidious in my view, to judge Derrida’s claims as a definitive refutation when they in fact signal a larger effort to push the logic of structuralism and modernism to its limits.  The post in poststructuralism is not an erasure or even a transcendence but a thinking-through-the-implications maneuver that lays bare both the strengths and the limits of the tradition begun by Saussure.  Levi-Strauss developed a still-powerful account of how linguistic binaries structure human action, but he was also deeply self-reflective as he interrogated the damage done to anthropological theory by its own reversion to binary logics (such as the elevation of literacy over orality, or of advanced over primitive societies).  Paul Ricoeur, and Derrida himself, saw the debate with Levi-Strauss as just such a refutation (Ricoeur, writing in his Conflict of Interpretations, set Derrida’s “school of suspicion” against Levi-Strauss’ “school of reminiscence”).  But the insights generated by principles that Derrida (and Levi-Strauss) rightly understood as provisional and even contradictory remain powerful, perhaps even more so at a time when poststructuralist logics seem to be running their course.

None of this denies the real objections raised against Levi-Strauss’ version of structuralism – its methodological conservatism, or its tendency (offered in the name of scholarly description) to valorize or render invisible power arrangements that allow one side of any binary to obliterate or repress its opposite.  But Derridean thought is enriched, not subverted, by putting it back into conversation with Levi-Strauss.  To take just one example, Levi-Strauss’ work on myth usefully presages Derrida’s own insights on the limits of inferring a “final” or “original” meaning.  The elements of myths circulate within the constraints of social structure to create endless transformations and possibilities of meaning, best understood not through logics of reference or mimesis but through logics of context and relationship.  And the case Levi-Strauss articulated against phenomenology still holds up pretty well in the context of its reemergence in some quarters (in communication studies, phenomenological approaches are increasingly advocated as a way forward in argumentation theory and cinema studies).  The first volume of Structural Anthropology remains one of the most important manifestos for structuralism.

From the vantage point of communication, one of the intriguing dimensions of Levi-Strauss’ work is his claim that modern societies are plagued by an excess of communication.  When first articulated, the concern was that too much cross-cultural exchange would obliterate differences – a view then current in the work of scholars like Herbert Schiller, and in the circa-1970s argument that the allures of America’s entertainment culture were producing a one-way destruction of other societies.  But Levi-Strauss meant something more as well, and his argument is made intriguing in light of his lifelong commitment to the idea that the deep grammars of cultural mythologies are universal.  For it is the interplay of universally shared experience and local variability that expresses the real genius of the human condition, and the twin threats of global groupthink and overcrowding are still not quite fully apprehended, even within the terms of the poststructuralist conversations he did so much to shape.

Michel Foucault, writing in The Order of Things, says of Levi-Strauss that his work is motivated “by a perpetual principle of anxiety, of setting in question, of criticism and contestation of everything that could seem, in other respects, as taken for granted.”  Foucault’s sentiment is complicated and not intended, as I read it, as a simple compliment.  But it points to an aspect of a century-long body of work that should continue to attract interest.

SOURCES:  “In praise of Claude Levi-Strauss,” (London) Guardian, 29 November 2008, pg. 44; John Lichfield, “Grand chieftain of anthropology lives to see his centenary,” (London) Independent, 29 November 2008, pg. 38; Steven Erlanger, “100th birthday tributes pour in for Levi-Strauss,” New York Times, 29 November 2008, pg. C1; Albert Doja, “The advent of heroic anthropology in the history of ideas,” Journal of the History of Ideas (2005): 633-650; Lena Petrovic, “Remembering and disremembering: Derrida’s reading of Levi-Strauss,” Facta Universitatis 3.1 (2004): 87-96.

How free trade regimes collapse

Under circumstances of international economic duress, free trade is especially jeopardized:  democratically elected officials, even those committed in principle to unfettered commerce as the best available engine of economic growth, will accede to local demands for protection.  Desperate to preserve market share, governments will be tempted to raise tariffs that make imports more expensive and locally produced goods relatively cheaper, or they will be persuaded that economic exigency warrants temporary protections that will likely only induce retaliation elsewhere.  Thus are set in motion cycles of retaliatory protectionism like the one perpetuated by the notorious Smoot-Hawley tariffs (the Tariff Act of 1930), now believed to have worsened the deep economic depression of the late 1920s and 1930s.

When Herbert Hoover signed the Smoot-Hawley Act in June 1930, he did so over the objections of Thomas Lamont, a J.P. Morgan partner and Hoover adviser, who had begged him not to (along with a thousand economists who petitioned against the bill):  “I almost went down on my knees to beg Herbert Hoover to veto the asinine Hawley-Smoot Tariff.”  The legislation raised duties on almost 900 American imports.  Decades later, debating the merits of NAFTA with Ross Perot on CNN’s Larry King Live, then-Vice President Gore presented Perot with a framed picture of Smoot and Hawley.

While free trade regimes are regularly defended by economists – the intellectual commitment remains strong despite work done over the last several decades showing that nascent markets require, or at least have inevitably benefited from, protective regulatory regimes, and that the countries today most adamant in their declared support for free trade (like the United States and Japan) provided intensive protection for their manufacturing sectors on the way to dominance – the arguments for and against protectionism are alive again thanks to the severity of the current economic downturn.

The uncertain signals sent during the campaign by Barack Obama (he said we couldn’t hide from the world economy but also that NAFTA should be renegotiated) have been reinforced by his early Cabinet picks.  As Clive Crook put it last week, “Mr. Obama’s US trade representative (his chief international negotiator) will be Ron Kirk, a former mayor of Dallas, a leading proponent of NAFTA and a long-time supporter of liberal trade.  His appointment disappoints the president’s supporters on the left of the party.  The new labour secretary has them applauding, however:  she is Hilda Solis, an ally of the unions, a leader in Congress of opposition to the Central American Free Trade Agreement and a forthright critic of orthodox liberal trade.”  Larry Summers, director-designate of the National Economic Council, is a big-time free trader; Bill Richardson, picked to be the next Commerce Secretary, kept calling for “fair” trade on the campaign trail.

The debate over the consequences of free trade, especially when broadened beyond technical mathematical models and into the domain of distributive politics, gets complicated fast.  Even the strongest advocates of free trade agree that codifying it sets in motion significant sectoral dislocations that imperil communities and work at odds with local social justice imperatives.  President Clinton tried to square this circle by arguing that trade needed to be promoted, but that such promotion also needed to include strengthened social safety nets to help those dislocated by the vagaries of global capital, a position that has become today’s trade realpolitik on both sides of the political aisle.  This position has enabled the ongoing negotiation of global trade instruments, both on a regional and bilateral basis (such as the agreement with Colombia negotiated and now advocated by President Bush) and within the ongoing framework of what started as the General Agreement on Tariffs and Trade (GATT) and has since evolved into the World Trade Organization legal regime.

The common expectation is that the global trade talks most recently stalled in the so-called Doha Round will be even more imperiled by the bad economy.  Clive Crook:  “With unemployment rising, wages under pressure and no firm countervailing push from the administration, protecting jobs (or claiming to, at any rate) is likely to be a higher priority than liberal trade.  The prospects for widening the opportunities for international commerce look grim.”  The Doha failure was disconcerting even to some economic progressives since global poverty was explicitly on the agenda – “while idealistic in its goal, [Doha] set out in 2001 to develop a new platform for global cooperation that would depart from traditional aid and development programs” (MacBain, 39-40).  The World Bank had estimated that a Doha agreement might have brought as many as 32 million persons out of extreme poverty.

Just this month, the World Bank projected that the total volume of global trade is likely to fall in 2009 by 2%, the first actual drop since 1982 (the estimate might be conservative given December reports, based on year-to-date data through November, showing roughly twenty percent drops in exports from Taiwan, Chile, and South Korea).  Several countries, including Russia and India, have already announced tariff increases, although jumps so far haven’t posed a major threat because they only undo tariff cuts previously announced that went lower than international law had required (and so the new increases don’t violate WTO protocols).  But even this path to higher tariffs poses dangers:  “If all countries were to raise tariffs to the maximum allowed, the average global rate of duty would be doubled, according to Antoine Bouet and David Laborde of the International Food Policy Research Institute in Washington, DC.  The effect could shrink global trade by 7.7%” (Economist, “Barriers”).  As the magazine editorialized, even “a modest shift away from openness – well within the WTO’s rules – would be enough to turn the recession of 2009 much nastier.”

In such an environment two prospects seem especially likely (in addition to the obvious third:  mounting outright protectionism and the risk of reprisals).  One is that global economic giants like the European Union, China, and the United States will continue to sidestep global framework talks by cutting one-on-one deals with specific trading partners.  The problem with that approach is that side deals can complicate wider talks; local arrangements thus undermine international ones.  And one-on-one negotiations are more easily dominated by this-or-that industrial lobby, so that final arrangements end up skirting fundamental distortions in trade for the benefit of entrenched corporate interests on both sides.

The other prospect may be the most insidious, and it has already been set in motion with bipartisan support.  This approach provides protection not by taxing imports but by subsidizing domestic producers.  The latest automobile bailout is a classic example of this sort of non-tariff trade barrier; American carmakers have been handed a massive $18 billion advantage relative to companies manufacturing in other nations.  For now the huge investments being made in national banking and manufacturing and infrastructural development have not triggered serious retaliation, since everyone is doing it and all agree that bailouts are needed to avert far greater economic catastrophe (at least this is the rhetorical bludgeon that has been used so far to enact gargantuan packages).

These subsidies are not new – in the United States, the 2008 Farm Bill promised another $20 billion in help for cash-crop producers.  But individual ad hoc bailouts acquire an accelerating logic, and turn into subsidy cycles that are hard to resist – China is now talking about domestic steel subsidies and has already put in place more than 3,000 tariff rebates established to promote Chinese products.  Indonesia has raised tariffs on 500 products this month; France has started a national fund to protect French companies from being bought out by foreigners; and Russia has imposed a tax on imported pork and cars (Faiola and Kessler).  And “there are other, more subtle, means of protection available.  Marc Busch, a professor of trade policy at Georgetown University in Washington DC, worries that health and safety standards and technical barriers to trade, such as licensing and certification requirements, will be used aggressively to shield domestic industries as the global downturn drags on” (Economist, “Barriers”).

Over time these localized subsidies unravel both the legal architecture of global trade and the political good will necessary to sustain it.  Part of the reason the Doha Round failed was an inability to come to terms on long-standing trade subsidies, such as American and EU cash support for their agricultural sectors, along with other smaller but inflammatory provocations.  When America refuses to import Chinese toys, we say it’s on account of safety but China sees a trade barrier.  When France throws up obstacles to importing American wine, the French say they are simply protecting their national culture, but we cry foul.  South Africa insists that Italian mining companies doing business outside Pretoria adhere to affirmative action mandates, which Italy calls an impediment to international commerce.  Europe threatens to prohibit the import of American cars because they pollute too much, and we cry protectionism.  And so on.  Patterns of reasonable protection are thus made acrimonious, and nations unable to throw cash at their favored industries consider reverting to more traditional forms of tariff protection.  And this is how trade wars are ignited.

It has to be conceded that the political/rhetorical threat of trade war! is too easily bandied about, and liberals have long rightly complained that economic justice policies are inevitably thrown under the bus when such Great Depression talk looms.  As I read a recent column by Jeff Immelt, CEO of General Electric, laying out a case for why “business and government leaders must reset the debate, re-establishing why interdependent economies and healthy competition are good for the world,” and then proposing six “GE” principles for taking charge of such a debate that try to have it both ways (protectionism must be resisted and global trade must be fair), I admit to skepticism.  And even a recent essay in the free-trade-leaning Economist conceded that “few economists think the Smoot-Hawley tariff was one of the principal causes of the Depression.”

But the history of the Smoot-Hawley protections is cautionary nonetheless.  The bill was not supposed to be so draconian; it started as a much more modest effort to provide some quick help to American agriculture.  “With no obvious logic – most American farmers faced little competition from imports – attention shifted to securing for agriculture the same sort of protection as for manufacturing, where tariffs were on average twice as high” (Economist, “Battle”).  In a nearly six-month conference reconciliation process, the bill was steadily larded up – Robert La Follette, the Wisconsin progressive, said the bill was “the product of a series of deals, conceived in secret, but executed in public with a brazen effrontery that is without parallel in the annals of the Senate.”

While the actual economic costs associated with the bill’s 890 tariff hikes were modest, the deal soured international comity – the League of Nations (which of course the United States had never joined) was negotiating a “tariff truce” that fell through in the resulting acrimony.  Even in a climate like today’s, where product manufacturing is deeply interdependent and reliant on multinational industrial networks, political disputes can easily escalate.  British prime minister Gordon Brown has already given major speeches implicitly connecting the car bailout to protectionism, the German automaker Opel has used the bailout to argue for 1 billion euros in credit guarantees, and the EU recently agreed to a $50 billion package of support to help European automakers meet newly toughened environmental standards.  And regimes of free trade, deeply imperfect as they are, may thus give way to even more destabilizing nationalistic free-for-alls.

SOURCES:  Clive Crook, “Obama has to lead the way on trade,” Financial Times, 22 December 2008, p. 9; Jeff Immelt, “Time to re-embrace globalisation,” Economist/World in 2009, p. 141; “The battle of Smoot-Hawley,” The Economist, 20 December 2008, pgs. 125-126; “Barriers to entry,” The Economist, 20 December 2008, pg. 121; “Farewell, free trade,” The Economist, 20 December 2008, pg. 15; Louise Blouin MacBain, “Doha’s good deeds,” World Policy Journal (Summer 2008): 39-43; Anthony Faiola and Glenn Kessler, “Trade barriers toughen with global slump,” Washington Post, 22 December 2008, p. A01; Jemy Gatdula, “Trade tripper: Cars, plans, and bailouts,” Business World, 28 November 2008, pg. S1/5.

Christianity: Undergoing a Great Emergence?

Phyllis Tickle is the founding editor of the religion section of Publishers Weekly, a position created when the market for spiritual books exploded in the late 1980s (she started in the early 1990s).  From that vantage point, and given her own theological predispositions, she has had a unique perspective on the unfolding debates within Christendom that are both dividing denominations and arguably creating what she, in a recent book, terms a Great Emergence (Tickle, The Great Emergence:  How Christianity is Changing and Why, BakerBooks, 2008).

The book starts with an intriguing premise whose promise is, I think, unfulfilled as Tickle works through the argument.  The idea is that Christianity (she is willing to concede this may also be true of the Islamic and Jewish traditions; pgs. 29-30) moves in roughly 500-year cycles, each concluded by significant ideological upheaval, schism, and regeneration.  Thus roughly 500 years ago came the Protestant Reformation (dated to 1517, when Luther nailed his 95 Theses to the Wittenberg church door), another 500 years before that the Great Schism, and another 500 years earlier the work and aftermath of the Council of Chalcedon.  Following the standard accounts, the Great Schism is credited with producing, in no small measure under the example of Gregory the Great, an end to the wars that had split Christendom into three competing regional institutions.  And the debates settled or papered over at Chalcedon in 451 led in turn to a monastic culture that preserved literacy and learning through the collapse of the Roman Empire.  By this historical reckoning, we are roughly due for another rebooting of the Christian faith or, as Tickle puts it, following the Anglican bishop Mark Dyer, a “giant rummage sale” – all of which will induce Christianity 5.0, as it were.

The term Great Emergence references the phenomenon of religious uncertainty and a crisis of spiritual authority in the modern world, and also broader cultural transformations, such as globalization (15), information overload (15), and the World Wide Web (53).

While one can never be certain of the outcome, Tickle takes comfort from the historical fact that “there are always at least three consistent results or corollary events.  First, a new, more vital form of Christianity does indeed emerge.  Second, the organized expression of Christianity which up until then had been the dominant one is reconstituted into a more pure and less ossified expression of its former self…  The third result is of equal, if not greater significance”:  “…every time the incrustations of an overly established Christianity have been broken up, the faith has spread – and been spread – dramatically into new geographic and demographic areas…” (17).  This leads her to a repeated expression of optimism, even when (as follows) she is recounting the worst aspects of Christian history (here, colonialism):

…the more or less colonialized Church that Reformation Protestantism and Catholicism managed to plant was, obviously more or less colonialized, with all the demeaning psychological, political, cultural, and social overtones and resentments which that term brings with it.  One does not have to be particularly gifted as a seer these days, however, to perceive the Great Emergence already swirling like balm across that wound, bandaging it with genuinely egalitarian conversation and with an undergirding assumption of shared brotherhood and sisterhood in a world being redeemed. (29).

The ferment in the Christian world today is, depending on one’s perspective, evidence of the End of the Age and a coming Rapture/Apocalypse, evidence that rationalism has finally ushered religious superstition into the final death throes announced almost fifty years ago with the phrase God is Dead, evidence of a long-overdue and urgent need for Christian revival, or, as is argued here, the birth pangs of a reconfigured and stronger faith tradition.  One problem in Tickle’s argument is that she starts by asserting a case that needs to be proved:  Why derive confidence from the Episcopal or Anglican schisms, or the widening divide between mainstream Christianity as understood in, say, North America and Africa?  Why believe that the denominational spasms opened by the debates over gay marriage and Terri Schiavo are the happy start of a revitalized faith rather than signs of irreparable breaches in the Body of Christ?  One cannot simply point to prior reformations as establishing the case for optimism.

The book goes downhill, not because the author lacks insight, but because the issues it engages are inevitably too complicated to be reduced to the metaphorical images Tickle offers as roadmaps to an ever more fragmented religious scene.  Those maps are just complicated enough to seem awkward (religious signification is like a cable connecting a boat to a dock, where the cable has an outer waterproof covering that is the story of community, an internal mesh sleeve which is the common imagination, and internal woven strands signifying spirituality, corporeality, and morality:  get it?) but not complex enough to do justice to the worlds of faith.  And all this is worsened in the final pages, where a 2-by-2 grid is made more and more complex, such that by the end the picture has been made into an unholy mess.  The grids that organize the book thus give rise to sentences that make no sense:  “Corporeality’s active presence in religion is also the reason why doctrinal differences like those surrounding homosexuality, for example, are more honestly and effectively dealt with as corporeal rather than as moral issues” (39).  Huh?

The book’s middle section, which aims to enumerate the factors that have brought us to this juncture, is the weakest.  While naming all the usual suspects (Darwin, Freud, the pill, industrial transformation, science, Marxism, recreational drug use, women’s rights organizing that changed the family, and others), the argument sometimes veers into weird territory.  Alcoholics Anonymous is blamed for making God generic.  The automobile is accused of weakening grandma’s Sunday-afternoon hegemony over religious training (instead of interrogating the kids about that morning’s Sunday School lesson, the family took the car out for a fast Sunday drive; pg. 87).  The Sony Walkman and the iPod are blamed for ruining worship services (105).  Generation X disenchantment with organized religion is ironically blamed on efforts by the church to extend programs into communities, like after-school basketball (91).

Joseph Campbell (the Hero With a Thousand Faces, Bill Moyers guy) is named the leading suspect in the collapse of Christian authority, a claim that seems wildly exaggerated (Tickle:  “It would be very difficult, in speaking of the coming of the Great Emergence, to overestimate the power of Campbell in the disestablishment of what is called ‘the Christian doctrine of particularity’ and ‘Christian exclusivity,’” pg. 67).  The central claims of Marx’s Das Kapital are significantly caricatured (89).  A couple of pages later (90) Tickle implies the Great Society was a communist plot (judge for yourself:  “Twentieth century Christianity in this country met the statism and atheism in communist theory head on, and American political theory militated from the beginning against the heinous brutality inherent in unfettered power.  Nonetheless, we voted in Roosevelt’s New Deal and Johnson’s Great Society.”)  Left out altogether or only passingly mentioned are other events that seem to me far more theologically decisive:  the Bomb, the Holocaust, the world wars, Vietnam, the Cold War.  The case starts to feel sloppy, too quickly written.

I regret this because the book raises important questions:  Are we living in a time of religious transformation or evisceration?  Are there resources in the Christian faith sufficient to reconstitute doctrinal authority in an age that resists authority wherever asserted?  To what extent is the cultural elite rejection (sometimes articulated as postmodernism) of capitalism, middle class values, the nuclear family, and the nation-state also evidence of the collapse of institutional religion (or is religion the potential cure)?  Are current upheavals (economic, political, security) more likely to rekindle religious faith or to weaken denominations further by arousing skepticism?

Perhaps a Great Emergence lies close at hand.  Or maybe not.