
Remembering Harold Pinter

Several of the obituaries for Harold Pinter, the Nobel Prize-winning playwright who died on Christmas Eve, see the puzzle of his life as centered on the question of how so happy a person could remain so consistently angry.  The sense of anger, or perhaps sullenness is the better word, arises mainly from the diffidence of his theatrical persona, the independence of his best characters even, as it were, from himself, and of course his increasingly assertive left-wing politics.  The image works, despite its limitations, because as he suffered in recent years from a cancer that left him gaunt he remained active and in public view, becoming something of a spectral figure.  And of course many who were not fans of his theatrical work (from the hugely controversial Birthday Party, to the critically acclaimed Caretaker, and then further forays into drama and film) mainly knew him through his forceful opposition to Bush and Blair, their Iraq policies, and the larger entanglements of American empire.

But Pinter, and this is true I think of all deeply intellectual figures, cannot be reduced to the terms provocateur or leftist.  In this case, to be sure, simple reductions are wholly inadequate to the task given his very methods of work:  one of his most abiding theatrical legacies is his insistence that dramatic characters are inevitably impenetrable – they owe us no “back story,” nor are their utterances ever finally comprehensible, any more than are our interactions in the real world of performed conversation.  And so Pinter set characters loose who even he could not predict or control, an exercise that often meant his productions were themselves angering as audiences struggled to talk sense into the unfolding stories. As the Economist put it, “his characters rose up randomly… and then began to play taunting games with him.  They resisted him, went their own way.  There was no true or false in them.  No certainty, no verifiable past…  Accordingly, in his plays, questions went unanswered.  Remarks were not risen to.”

So what does all this say about the ends of communication?  For Pinter they are not connected to metaphysical reflection or understanding (this was Beckett’s domain; it is somehow fitting that Pinter’s last performance was in Beckett’s Krapp’s Last Tape, played from a wheelchair), but to simple self-defense, a cover for the emptiness underneath (Pinter: “cover for nakedness”), a response to loneliness where silence often does just as well as words.  And so this is both a dramatic device (the trait that makes a play Pinteresque) and a potentially angering paradox:  “Despite the contentment of his life he felt exposed to all the winds, naked and shelterless.  Only lies would protect him, and as a writer he refused to lie.  That was politicians’ work, criminal Bush or supine Blair, or the work of his critics” (Economist).  Meanwhile, the audience steps into Pinter’s worlds as if into a subway conversation; as Cox puts it, “The strangers don’t give you any idea of their backgrounds, and it’s up to the eavesdropper to decide what their relationships are, who’s telling the truth, and what they’re talking about.”

The boundaries that lie between speaking and silence are policed by timing, and Pinter once said he learned the value of a good pause from watching Jack Benny performing at the Palladium in the early 1950s.  One eulogist recalls the “legendary note” Pinter once sent to the actor Michael Hordern:  “Michael, I wrote dot, dot, dot, and you’re giving me dot, dot.”  As Siegel notes:  “It made perfect sense to Hordern.”  The shifting boundaries of communication, which in turn provide traces of the shifting relations of power in a relationship, can devolve into cruelty or competition where both players vie for one-up status even as all the rest disintegrates around them.  As his biographer, Michael Billington, put it, “Pinter has always been obsessed with the way we use language to mask primal urges.  The difference in the later plays is not simply that they move into the political arena, but they counterpoint the smokescreen of language with shocking and disturbing images of torture, punishment, and death.”  At the same time, and this is because Pinter was himself an actor and knew how to write for actors, the written texts always seemed vastly simpler on paper than in performance – and this is not because simple language suggests symbolic meaning (Pinter always resisted readings of his work that found symbolic power in this or that gesture) but because the dance of pauses and stutters and speaking ends up enacting scenes of apparently endless complexity.

For scholars of communication who attend to his work, then, Pinter poses interesting puzzles, and even at their most cryptic his plays bump up against the possibilities and limits of language.  One such riddle, illuminated in an essay by Dirk Visser, is that while most Pinter critics see his plays as revealing the failures of communication, Pinter himself refused to endorse such a reading, which he said misapprehended his efforts.  And as one moves through his pieces, the realization slowly emerges (or, in some cases, arrives with the first line) that language is not finally representational of reality, nor even instrumental (where speakers say certain things to achieve certain outcomes).  Pinter helps one see how language can both stabilize and unmoor meaning, even in the same instant (this is the subject of an interesting analysis of Pinter’s drama written by Marc Silverstein), and his work both reflects and straddles the transition from modernism to postmodernism he was helping to write into existence (a point elaborated by Varun Begley).

His politics were similarly complicated, I think, a view that runs contrary to the propagandists who simply read him as a leftist traitor, and a fascist at that.  His attacks on Bush/Blair were often paired in the press with his defense of Milosevic, as if together they implied a sort of left-wing fascism where established liberal power is always wrong.  But his intervention in the Milosevic trial was not to defend the war criminal but to argue for a fair and defensible due process, and this insistence on the truth of a thing was at the heart of his compelling Nobel address.  Critics saw his hyperbole as itself a laughable performative contradiction (here he is, talking about the truth, when he hopelessly exaggerates himself).  I saw a long interview done with Charlie Rose, replayed at Pinter’s death, where Rose’s impulse was to save Pinter from this contradiction, and from himself (paraphrasing Rose):  “Surely your criticism is not of all the people in America and Britain, but only made against particular leaders.”  “Surely you do not want to oversimplify things.”  Pinter agreed he was not accusing everyone of war crimes but also refused to offer broader absolution, since his criticism was of a culture that allowed and enabled lies as much as of leaders who perpetrated them without consequence.  Bantering with Rose, that is to say, he refused to take the bait, and the intentional contradictions persisted.  His Nobel speech (which was videotaped for delivery because he could not travel to Stockholm and is thus available for viewing online) starts with this compelling paragraph:

In 1958 I wrote the following:  “There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false.  A thing is not necessarily either true or false; it can be both true and false.”  I believe that these assertions still make sense and do still apply to the exploration of reality through art.  So as a writer I stand by them but as a citizen I cannot.  As a citizen I must ask:  What is true?  What is false?

What was so angering for many was Pinter’s suggestion that the American leadership (and Blair too) had committed war crimes that had first to be recognized and tallied and then perpetrators held to account:

The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile.  The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven.  Hundreds of thousands of deaths took place throughout these countries.  Did they take place?  And are they in all cases attributable to US foreign policy?  The answer is yes they did take place and they are attributable to American foreign policy.  But you wouldn’t know it.  It never happened.  Nothing ever happened.  Even while it was happening it wasn’t happening.  It didn’t matter.  It was of no interest.  The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them.  You have to hand it to America.  It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good.  It’s a brilliant, even witty, highly successful act of hypnosis.

The argument is offensive to many (when the Nobel was announced, the conservative critic Roger Kimball said it was “not only ridiculous but repellent”), though for a playwright most attentive to the obscuring mask and the sometimes savage operations of power it hides, it is all of a piece.  McNulty:  “But for all his vehemence and posturing, Pinter was too gifted with words and too astute a critic to be dismissed as an ideological crank.  He was also too deft a psychologist, understanding what the British psychoanalyst D. W. Winnicott meant when he wrote that ‘being weak is as aggressive as the attack of the strong on the weak’ and that the repressive denial of personal aggressiveness is perhaps even more dangerous than ranting and raving.”

As the tributes poured in, the tension between the simultaneous arrogance (a writer refuses to lie) and humility (he felt exposed to all the winds, naked and shelterless) arises again and again.  The London theatre critic John Peter gets at this when he notes in passing how Pinter “doesn’t like being asked how he is.”  And then, in back-to-back sentences:  “A big man, with a big heart, and one who had the rare virtue of being able to laugh at himself.  Harold could be difficult, oh yes.”  David Wheeler (at the ART in Cambridge, Massachusetts):  “What I enjoyed [of my personal meeting with him] was the humility of it, and his refusal to accept the adulation of us mere mortals.”  Michael Billington:  “Pinter’s politics were driven by a deep-seated moral disgust… But Harold’s anger was balanced by a rare appetite for life and an exceptional generosity to those he trusted.”  Ireland’s Sunday Independent:  “Pinter was awkward and cussed… It was the cussedness of massive intellect and a profound sense of outrage.”

Others were more unequivocal.  David Hare:  “Yesterday when you talked about Britain’s greatest living playwright, everyone knew who you meant.  Today they don’t.  That’s all I can say.”  Joe Penhall:  Pinter was “my alpha and beta…  I will miss him and mourn him like there’s no tomorrow.”  Frank Gillen (editor of the Pinter Review):  “He created a body of work that will be performed as long as there is theater.”  Sir Michael Gambon:  “He was our God, Harold Pinter, for actors.”

Pinter’s self-selected eulogy conveys, I think, the complication – a passage from No Man’s Land – “And so I say to you, tender the dead as you would yourself be tendered, now, in what you would describe as your life.”  Gentle.  Charitable.  But also a little mocking.  A little difficult.  And finally, inconclusive.

SOURCES:  Beyond Pinter’s own voluminous work, of course – Marc Silverstein, Harold Pinter and the Language of Cultural Power (Bucknell UP, 1993); Varun Begley, Harold Pinter and the Twilight of Modernism (U Toronto P, 2005); “Harold Pinter,” Economist, 3 January 2009, pg. 69; Ed Siegel, “Harold Pinter, Dramatist of Life’s Menace, Dies,” Boston Globe, 26 December 2008, pg. A1; John Peter, “Pinter:  A Difficult But (Pause) Lovely Man Who Knew How to Apologise,” Sunday Times (London), 28 December 2008, pgs. 2-3; Gordon Cox and Timothy Gray, “Harold Pinter, 1930-2008,” Daily Variety, 29 December 2008, pg. 2; Charles McNulty, “Stilled Voices, Sardonic, Sexy:  Harold Pinter Conveyed a World of Perplexing Menace with a Vocabulary All His Own,” Los Angeles Times, 27 December 2008, pg. E1; Dirk Visser, “Communicating Torture: The Dramatic Language of Harold Pinter,” Neophilologus 80 (1996): 327-340; Matt Schudel, “Harold Pinter, 78,” Washington Post, 26 December 2008, pg. B5; Michael Billington, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 15; Esther Addley, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 14; Frank Gillen, “Farewell to an Artist, Friend,” St. Petersburg Times (Florida), 4 January 2009, pg. 4E; “Unflagging in His Principles and Unrivalled in His Genius,” Sunday Independent (Ireland), 28 December 2008; Dominic Dromgoole, “In the Shadow of a Giant,” Sunday Times (London), 28 December 2008, pgs. 1-2; Mel Gussow and Ben Brantley, “Harold Pinter, Whose Silences Redefined Drama, Dies at 78,” New York Times, 26 December 2008, pg. A1.

The lessons derived from aging backward

I enjoyed seeing The Curious Case of Benjamin Button, not because the film finally coheres into a memorable totality but because the sum of the parts ends up greater than the whole:  vivid moments linger after the grand narrative arc fades.

The premise on which the story is based, the idea of an anomalous child born physically old who dies decades later after a complete disappearance into infancy, is constrained by several challenges, some of which are skillfully handled (the old boy grows up in a retirement home, and so attracts no special notice) and others of which strain credulity (including the fact that his former lover, having been essentially abandoned by him, ends up providing years of now-maternal attention without her new husband or daughter ever thinking to ask about the young child for whom the mother now cares as she takes up residence in the same rest home).  [I am still unpersuaded of the narrative plausibility of this turn of events – the film implies that the cognitively vacant infant(old) Benjamin reconnects because Daisy’s name is all over his diary, but everything in his prior behavior and decision to leave makes implausible the idea that he would want the diary to serve as a child’s name tag, enabling a return to her or the imposition of his own care onto her later life.  And if, by the end, Benjamin can’t remember who he is or anything about his past, why should we believe that his journals and scrapbooks would have been so well preserved?]

His biological father follows Benjamin’s development from a distance but, oddly, when the dying man finally invites him back into his life, not much time is spent dwelling on the biological mysteries of the reverse aging.  The fact that Benjamin’s strange trajectory is never discovered (in contrast to the original story, where the Methuselah story makes the papers) allows the terribly abbreviated end-of-life sequences a kind of melancholic privacy – the teen-then-boy-then-infant never raises anyone’s interest and no one apparently ever connects the dots, but the benefit of this is that the more mundane moments of early/late life take on an unexpected sadness, such as the quiet passing observation noting the moment when the boy loses his capacity for speech.  It hadn’t really occurred to me until that instant that this would be so haunting a moment.

The idea that old age is a sort of reversion to infancy is cruel, and apart from those whose physical or mental infirmities cause total end-of-life dependency on others, I always find myself repelled by and even afraid of the mentality that leads minimum-wage nursing home attendants to treat their clients as addled or stupid.  The idea of ending my life in a nursing home is less jarring to me, per se, than is the idea that having lived a life of growth and experience and (hopefully ongoing) intellectual stimulation, one is finally reduced to having some 20-something scream at me to finish my oatmeal.  I am skeptical that any death is a good one, and I know many end-of-life professional caregivers are angels in disguise, but it is the possibility of old-age condescension as much as isolation that terrifies me.  But Button, despite his final senility, is able to die a good death, lovingly cared for to the end in a mode of caregiving that recalls what caring for someone with Alzheimer’s must entail:  is the final gleam in his infant eyes a sort of last cognitive reaching out or just the final biological gasp?  And are the senile child’s sad efforts to remember the piano he played for so long a failure or a final point of human contact, or both?

Button’s awful choice to abandon his Daisy and child at a time that seems far too early – after all, Pitt is in his prime, and would any child really think to notice that her father is getting younger all the time for several more years? – also raises questions larger than the unique temporal disconnection haunting Benjamin’s relationship to the people he loves.  Evoked are the larger ways in which so much of human destiny is shaped by the randomness of timing and the disconnections that keep people apart.  The audience is rather beaten over the head with this theme, especially in the backwards-edited scene derailing the end of Daisy’s performing career, but it pops up everywhere.  And even with respect to Daisy the issues raised by disconnection are interesting – a scene where she and Benjamin have physically moved within the realm of sexual plausibility ends instead in a disappointing failure to connect, produced not by their respective ages or even by their sexual histories (which by this point in the midnight park have come into sync), but by the awkwardness of seeing a child-as-lover (for Benjamin in this moment an unbridgeable chasm).

The film is bracketed at both ends by disasters – World War I and Katrina – and their denouements, but their visual enactment is oblique and produces its own temporal reversals.  World War I, today remembered (if at all) as a war of horrifically widespread and anonymous slaughter, is reenacted through the very particular personal drama of a blind clockmaker and his wife who lose their beloved child to battle, and to the extent the war evokes mass drama it is the exuberance of its conclusion more than the horror of its killing machines that we witness.  And Katrina, which we remember in part for the indignities of the preventable deaths it caused, is here recalled within the confines of a hospital that, while impersonal (under threat of the advancing storm), is also a place of close and immediate care.  [Tangent:  Button is an example of how the twists and turns of film-industry production can have significant consequences.  The movie only got made, according to an account in the New York Times, because Louisiana offers big movie tax breaks to production companies.  This, in turn, caused the story to shift to New Orleans, which has yielded a film wholly unimaginable in its originally anticipated location of Baltimore, the setting of the original short story.]

Benjamin, born on the day of the Armistice, is raised and dies in a house wholly comfortable with frequent death, an upbringing at odds with a contemporary milieu where even adults are so often separated from end-of-life experiences that when they finally start to happen with friends and family their accompanying rituals and significance seem all the more jarring and derailing.  A baby in an old man’s body, physically Caucasian but raised by parents of color, made mightily richer by the manufacture of something as tiny as a button, a boy who attends a faith healing where it turns out the fake patter inspires him to walk without changing him physically (one might say the lie actually has healing power) but kills the minister, an American who works many years of the 20th century in Russia or on the water and who wins his own battle with the Nazi subs not on a carrier but on a tugboat, a man who for much of the story seems not self-reflective at all but (it is revealed) kept a detailed daily journal for most of his life – much more than time is narratively reversed.  The familiar is thus made strange.

The opening of the film, with its clock-that-runs-backward allegory, is intriguing too.  The idea of God as a kind of watchmaker who has set into motion a universe of logically connected causes-and-effects and who is the lord of time itself was already in circulation 50 years before Darwin published Origin of Species, now 150 years old, and provides a persistent commonsensical response elaborated today by Intelligent Design Theory.  Read this way, one might see Benjamin’s magical appearance on earth as a divine effort to awaken our sensibilities and unnerve our comfortable sense of time passing.

Or one might take an opposite tack.  It was Richard Dawkins back in the 1980s who worked to turn the designer idea on its head, arguing for a Blind Watchmaker, which is to say the concept that a universe may be ordered in ways reflective not of a central intelligence but of a mindless, universally available process (here, natural selection).  In Benjamin Button the blind clockmaker is visually but not cognitively impaired, and his grand backward-running clock is not an error but a commemoration of possibilities lost.  Read this way, Benjamin’s case is more curious than compelling, evidence of the oddities produced by evolutionary caprice.

The F. Scott Fitzgerald short story (written in 1922) on which the film is very loosely based reads more like a fable on medicalization (part of the problem, it seems, may be that in 1860 the Buttons decide to have the birth in a hospital instead of at home) than the allegory of aging and dying that structures the film.  And in the story Benjamin is born talking and with not just the body of an old man but his sensibilities too (“See here, if you think I’m going to walk home in this [baby] blanket, you’re entirely mistaken,” he says hours after birth).  It is inevitably mentioned that the film bears virtually no relationship to the Fitzgerald story; having just read the short story, I think this fact is to the credit of the film, whose melancholic aftertaste is far sweeter than the sense of absurdity and only occasional sadness induced by Fitzgerald’s original tale.

Counting the humanities

Last week the American Academy of Arts and Sciences released a long-anticipated prototype of its Humanities Indicators project.  The initiative – organized a decade ago by the American Council of Learned Societies, the National Endowment for the Humanities, and the National Humanities Alliance, and funded by the Hewlett and Mellon Foundations – responds to the accumulating sense that (and I guess this is ironic) the humanities haven’t paid enough attention to quantifying their impact and history.  As Roger Geiger notes, “gathering statistics on the humanities might appear to be an unhumanistic way to gain understanding of its current state of affairs.”  But noting the value of a fuller accounting, the HI project was proposed as a counterpart to the Science and Engineering Indicators (done biennially by the National Science Board), which have helped add traction to the now widely recognized production crisis in the so-called STEM disciplines.

The Chronicle of Higher Education summarized the interesting findings this way (noting that these were their extrapolations; the Indicators simply present data without a narrative overlay apart from some attached essays):

In recent years, women have pulled even with men in terms of the number of graduate humanities degrees they earn but still lag at the tenure-track job level.  The absolute number of undergraduate humanities degrees granted annually, which hit bottom in the mid-1980s, has been climbing again.  But so have degrees in all fields, so the humanities’ share of all degrees granted in 2004 was a little less than half of what it was in the late 1960s.

This published effort is just a first step, and the reported data mainly repackage, usefully, information gleaned from other sources (such as the Department of Education and the U.S. Bureau of Labor Statistics).  Information relating to community colleges is sparse for now.  Considerably more original data were generated by a 2007-2008 survey and will be added to the website in coming months.

The information contained in the tables and charts confirms trends long suspected and more anecdotally reported at the associational level:  the share of credit hours and majors and faculty hires connected to the humanistic disciplines has fallen dramatically as a percentage of totals.  The percentage of faculty hired into tenure lines, which dropped most significantly in the late 1980s and 1990s, is still dropping, though more modestly, today.  Perhaps most telling, if a culture can be said to invest in what it values, is the statistic that in 2006, “spending on humanities research added up to less than half a percent of the total devoted to science and engineering research” (Howard).  As Brinkley notes, in 2007, “NEH funding… was approximately $138.3 million – 0.5 percent of NIH funding and 3 percent of NSF… [And] when adjusted for inflation, the NEH budget today is roughly a third of what it was thirty years ago.”  Even worse:  “[T]his dismal picture exaggerates the level of support for humanistic research, which is only a little over 13% of the NEH program budget, or about $15.9 million.  The rest of the NEH budget goes to a wide range of worthy activities.  The largest single outlay is operating grants for state humanities councils, which disburse their modest funds mostly for public programs and support of local institutions.”  And from private foundations, “only 2.1 percent of foundation giving in 2002 went to humanities activities (most of it to nonacademic activities), a 16% relative decline since 1992.”  Meanwhile, university presses are in trouble.  Libraries are struggling to sustain holdings growth.

Other information suggests interesting questions.  For instance:  why did the national production of humanities graduates climb so sharply in the 1960s (doubling between 1961 and 1966 alone)?  Geiger argues the bubble was a product of circa-1960s disillusionment with the corporate world, energy in the humanistic disciplines, the fact that a humanities degree often provided employment entree for women (especially to careers in education), and a booming economy that made jobs plentiful regardless of one’s academic training.  After 1972, Geiger argues, all these trends were flipped:  the disciplines became embroiled in theoretical disputes and thus less intellectually compelling for new students (some were attracted by Big Theory, but arguably more were antagonized by it), universities themselves became the target of disillusion, business schools expanded fast and became a more urgent source of competition, and so on.  Today, although enrollments are booming across the board in American universities, the humanities remain stable in generating roughly 8% of B.A. degrees, which may mean the collapse has reached bottom.

One interesting suggestion is posed by David Laurence, who reads the Indicators as showing that the nation can be said to have produced a “humanities workforce,” which in turn “makes more readily apparent how the functioning of key cultural institutions and significant sectors of the national economy depends on the continued development and reproduction of humanistic talent and expertise.”  This infrastructure includes (as listed by Laurence) schools and teachers, libraries, clergy, writers, editors, museums, arts institutions, theater and music, publishing, entertainment, and news (where the latter involve the production of books, magazines, films, TV, radio, and Internet content).  And this gives rise to some potential confidence:  humanities programs continue to attract brilliant students, good scholarship is still produced, and the “’rising generation’ of humanities scholars is eager to engage directly with publics and communities” (Ellison), implying that the public humanities may grow further.  An outreach focus for humanists is a double-edged sword, of course, but it might enhance the poor standing that university humanities programs have with, for example, state funding councils.

SOURCES:  Jennifer Howard, “First National Picture of Trends in the Humanities is Unveiled,” Chronicle of Higher Education, 16 January 2009, pg. A8; Jennifer Howard, “Early Findings From Humanities-Indicators Project are Unveiled at Montreal Meeting,” Chronicle of Higher Education, 18 May 2007, pg. A12; Essays attached to the AAAS Humanities Indicators website, including Roger Geiger, “Taking the Pulse of the Humanities: Higher Education in the Humanities Indicators Project,” David Laurence, “In Progress: The Idea of a Humanities Workforce,” Alan Brinkley, “The Landscape of Humanities Research and Funding,” and Julie Ellison, “This American Life:  How Are the Humanities Public?”

Interpreting ossuary boxes

Roughly seven years ago the discovery of a 2,000-year-old bone box (or ossuary) engraved with the words “James, son of Joseph, brother of Jesus” was announced, setting in motion a media, scholarly, and now judicial frenzy.  There is not much doubt that the 20-inch-long box is about the right age to be from the period when Jesus lived; the controversy has to do with whether the inscription was added later.  The editor of the Biblical Archaeology Review (BAR first headlined the find in 2002 in an essay written by the Sorbonne scholar André Lemaire) has written a book defending the authenticity of the find, which he says makes this one of the greatest archeological finds of all time since it would be the only contemporaneous evidence that Jesus lived and that the New Testament naming of his (step-)father and brother is accurate.  By contrast, Nina Burleigh has a new book out (Unholy Business:  A True Tale of Faith, Greed and Forgery in the Holy Land, Harper Collins) arguing the whole thing is, as the title implies, a gigantic hoax.

The antiquities collector who sprang the find on the world is Oded Golan, who says he bought the box from an Arab antiquities dealer whose name he can’t remember.  An investigation was subsequently undertaken by the Israel Antiquities Authority, which pronounced the inscriptions a fraud (their Final Report is available on their main website); soon thereafter Golan and three others were arrested and, for almost four years now, have been on trial for taking valuable historical artifacts and adding fake lettering in a scheme to make them massively more valuable.  Golan denies the charges.

The case is obviously complicated, and pretty interesting.  Golan is accused of also faking a tablet he claims came from the first Solomon Temple.  The ossuary, if confirmed, might rock the world of Christian scholarship (more on that in a moment); the Jehoash tablet, if confirmed, might rock the world of Judaism by proving the existence of Solomon’s Temple on the historically contested Al Aqsa Temple Mount.

A lot of the skepticism derives from the fact that the finds just seem too good to be true.  The tablet contains sixteen full lines of text, when similar finds from the right period are lucky to include a smattering of textual fragments.  Burleigh notes that when the authorities searched Golan’s house, they found little baggies of ancient dirt and charcoal, along with the sort of carving tools one would use to artificially age an object.  During one search, as the Toronto Star reported it, “the James Ossuary was found sitting atop a disused toilet, an odd place, police felt, for a box purported to have once contained the DNA of Jesus’ family.”

The Israel Antiquities Authority sees the case as open and shut.  While some have argued that scientifically valid tests of the stone patina verify the authenticity of the engraved lettering, the panel of experts convened by IAA judged the inscription a fraud.  In part their argument was based on a finding that the inscription cut through the old patina (implying it was of recent origin).  Parts of the inscription, they argued, were recently baked on; in that more recently applied inscription patina (the part that seems to connect the box to someone named Jesus), they found trace elements that wouldn’t have existed in ancient Jerusalem but are found today in chemically treated tap water.

But under intensive question-and-answer at trial, the case has weakened – one expert from Germany said the IAA had contaminated the key evidence and another (Ada Yardeni) said she would leave her profession if the ossuary turned out to be a fake.  Opponents of the IAA conclusions argue that its objectivity cannot be trusted given the IAA’s strong opposition to artifacts brought to light via the commercial antiquities trade.  The testimony has been so conflicted that two months ago the judge actually suggested the prosecution drop the charges against Golan; he said it seemed unlikely to him a conviction could be achieved (which in turn led Hershel Shanks, the BAR editor, to issue a report that the find had been “vindicated” – this month writing that the “forgery case collapses”).  Burleigh is frustrated because a possible key witness is an Egyptian who says he used to forge for Golan.  But Egypt won’t extradite the man and he doesn’t seem interested in testifying, and so his story likely won’t be heard.  Defenders of the box’s authenticity argue Burleigh is just trying to sell her book, whose thesis blows up if the find proves genuine (and so, they insinuate, she’ll say anything to discredit it).

The whole thing got even wilder in 2007 when a documentary film produced by James Cameron (yes, the Titanic guy) was released.  Directed by Simcha Jacobovici, The Lost Tomb of Jesus, which has by now screened around the world (Jacobovici has also co-authored a book on the subject, The Jesus Family Tomb, and the documentary aired on the Discovery Channel), argues that the James ossuary and others found nearby establish (at a high level, they say, of statistical probability) that what had been found was the final burial ground of Jesus’ family.  The statistical part is interesting – the expert quoted in the film did calculations given a series of contingencies laid out by the film’s director.  The statistician is credible (Andrey Feuerverger, from the University of Toronto) and the calculations have been judged serious and methodologically sophisticated by a peer-reviewed forum in a leading statistics journal, but the original parameters are highly disputed (especially given how common the names Mary, Jesus, Joseph, and James were back then).
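
To see why those parameters matter so much, here is a deliberately crude back-of-envelope sketch of this style of reasoning – not Feuerverger’s actual model, and every number in it (the name frequencies and the count of comparable tombs) is a placeholder assumption of mine, not a figure from the film or the published analysis.  The logic is simply to multiply the assumed frequency of each name to estimate how rare the cluster is, then ask how many such clusters one would expect among all known family tombs; shift any input and the conclusion shifts with it.

```python
# Illustrative back-of-envelope sketch only -- NOT Feuerverger's published model.
# Every number below (name frequencies, tomb count) is a placeholder assumption
# chosen to show how sensitive this style of calculation is to its inputs.

# Assumed frequency of each inscribed name in the period population (hypothetical)
name_freq = {
    "Yeshua (Jesus)": 0.04,
    "Yosef (Joseph)": 0.09,
    "Maria (Mary)": 0.21,
    "Mariamne": 0.01,
}

# Probability that one random family tomb would display this cluster of names,
# treating the names as independent (itself a contestable assumption).
p_cluster = 1.0
for freq in name_freq.values():
    p_cluster *= freq

n_tombs = 1000  # assumed number of comparable family tombs around first-century Jerusalem

expected_matches = p_cluster * n_tombs
print(f"P(name cluster in a single tomb): {p_cluster:.2e}")
print(f"Expected matching tombs among {n_tombs}: {expected_matches:.3f}")

# Halve or double any single frequency, or the tomb count, and the expected
# number of matching tombs moves proportionally -- which is why critics focus
# on where the input frequencies come from in the first place.
```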

Stephen Pfann, from the University of the Holy Land, isn’t buying it:  “What database serves as the basis for establishing the probability of this claim?  There are no surviving genealogies or records of family names in Judea and Galilee to make any statement concerning frequency of various personal names in families there.”  Joe Zias, former curator of archeology at the Rockefeller Museum in Jerusalem, quoted in a March 2007 Newsweek article, was even blunter: “Simcha has no credibility whatsoever.  He’s pimping off the Bible…  Projects like these make a mockery of the archeological profession.”

Smart people got involved in the film (among them Princeton’s James Charlesworth and the University of North Carolina [Charlotte]’s James Tabor), but the film still reaches pretty far.  Based on a fourth ossuary from the same tomb (which some now aim to turn into a mega-tourist site), the filmmakers (here quoting a summary by David Horovitz in the Jerusalem Post):

 …point to Ossuary 701… inscribed “Mariamne,” who they say is identified as Mary Magdalene in the 4th century text, The Acts of Philip.  And since Mary Magdalene is in the Jesus family tomb, and ultra-modern testing has established, astoundingly, that her bone-box and Jesus’ contained DNA of non-blood relatives, she must have been Jesus’ partner, they reason.  And since there’s a “Judah son of Jesus” in the tomb too (Ossuary 702) they dare to suggest he was most likely their son.

Why, it’s the Da Vinci Code, all over again!  Burleigh half jokingly predicts we’ll soon see Solomon’s crown and Abraham’s sandals appearing on the antiquities market.

The case, beyond its intrinsic interest, has implications for how knowledge is created and distorted and popularized.  Some believers eager for evidence confirming their faith prove gullible to media mythmakers who popularize (and sometimes grotesquely distort) the scientific basis for their claims.  And the scientists get hauled into courts, where the standards of evidence vary dramatically from the tests of the laboratory or the peer review publication process.  Two sides get ginned up, science goes on trial, and (as Burleigh puts it) “the subjective underbelly of the science is… exposed…, big time” (qtd. by Laidlaw, Toronto Star, 11/4/08).  In cases of ambiguity, either fraud is perpetuated or doubt cast on potentially astonishing discoveries.  The debate rages on forever, creating cottage industries of scholarly blood feud.  It is this very cycle that accounts for the fact that Holy Family tombs have now been “authenticated” (as the Newsweek report put it) beneath the Dormition Abbey in Jerusalem and at another site in Ephesus (the Catholic Church says Mary was buried both places), the rock on which the Church of the Holy Sepulchre was erected in Jerusalem (Constantine said that was where Jesus was laid to rest), and a tomb in Safed (where last year Tabor said he found a Jesus tomb).

Stay tuned.  The Golan trial gets going again later this month.

SOURCES:  “’Jesus box’ may not be a fake after all,” Daily Mail (London), 30 October 2008, pg. 11; Stuart Laidlaw, “Forgery of antiquities is big business,” Toronto Star, 4 November 2008, pg. L01; David Horovitz, “Giving ‘Jesus’ the silent treatment,” Jerusalem Post, 2 March 2007, pg. 24; Nina Burleigh, “Faith and fraud,” Los Angeles Times, 29 November 2008, pg. A21; “Forgery case collapses,” Biblical Archaeology Review, January/February 2009, pgs. 12-13; Lisa Miller and Joanna Chen, “Raiders of the lost tomb,” Newsweek, 5 March 2007, pg. 60; Nicole Gaouette, “What ‘Jesus hoax’ could mean for Mideast antiquities,” Christian Science Monitor, 19 June 2003, pg. 7.

Publishing the papers of the U.S. founders

More than a half century ago, the Congress committed to producing definitive editions of the papers of the American founders – Alexander Hamilton, John Adams, George Washington, James Madison, Thomas Jefferson, and Benjamin Franklin in particular.  The first volume (which happened to be volume one of the Jefferson papers) was published in 1950, while Harry Truman was president.  Since then only the Hamilton papers have been completed.  As Senator Patrick Leahy (D-VT) put it in congressional hearings held last February:

According to the National Historic Publication and Records Commission [NHPRC], the papers of Thomas Jefferson will not be completed until 2025, the Washington papers in 2023, the papers of Franklin and Madison in 2030, and the Adams papers in 2050.  That is a hundred years after the projects began.  We spent nearly $30 million in taxpayer dollars in Federal taxpayer projects, and it is estimated another $60 million in combined public and private money is going in here.  One volume of the Hamilton papers costs $180.  The price for the complete 26-volume set of the papers is around $2,600.  So… only a few libraries [have] one volume of the papers, and only six percent [have] more than one volume.

The challenge, of course, is that everyone wants these collections, which have been often described as American Scripture, to be academically accurate, definitively comprehensive, and available yesterday.  But the imperatives of accuracy and speed work at cross purposes.  Some sense of why it takes so long to pull together and confirm the impossibly numerous details was conveyed in a story told by the historian David McCullough, who testified at the hearings.  McCullough, now at work on a Jefferson project, wanted to know the exact contents of the eighty or so crates Jefferson shipped back to Virginia while he was doing diplomatic work in France, information he rightly felt might convey some sense of Jefferson’s thinking.  The answer was to be found in volume 18 of the published papers, “the whole sum total in a footnote that runs nearly six pages in small type.”  McCullough has proposed that the national investment in the work of editing be doubled, so that the papers can be published more speedily but at no loss of historical quality.

The complications of doing this work are legion.  The papers of contemporary presidents are routinely collected and published soon after administrations end, but it wasn’t until 1934 and the founding of the National Historical Publications Commission, the precursor to today’s NHPRC, that a serious effort was made to comprehensively collect 18th-century documentation, often scattered in private collections.  Although 216 volumes have now been published and praised, the frustration of the anticipated 2049 completion date has resulted in a drumbeat of criticism.  Private funding has been mobilized (the Pew Charitable Trusts was the main original funder and has been persistent in directing funds over the years, including a failed 2000 challenge grant of $10 million – more on that soon), and the pace of publication is accelerating, but these final deadlines remain far off.

Rebecca Rimel, president of Pew, argues that there has been too little accountability for funds already spent – “there has never been a full accounting of the Founding Fathers Project.  There has been a lack of performance metrics” able to measure progress over time (11).  Pew has a special reason for frustration because it made the funding it coordinated contingent on production of such information, and it says the information has never been forthcoming.  The criticism was reiterated in a more particular way by Deanna Marcum of the Library of Congress, who expressed the concern that the university-based projects spend too much of the funding on graduate student stipends and connected graduate programs, sometimes at the expense of faster methods of completion (37).  Stanley Katz has responded to this critique by noting that the expenditures of the projects are held tightly accountable to the reporting processes at NHPRC and NEH, in ways no different from any other funded project supported by those agencies.

The scholarly challenges of doing this work are also enormous.  To assure that consistently high standards of annotation are used in all the collections, very complex protocols of verification and citation are in place.  When one hears that a given project may “only” be producing one or two new volumes a year, it is easy to forget that each of these volumes may run to 800 pages with a large number of small-print footnotes, and the Washington papers alone run to 27,347 pages.  Ralph Ketcham, an emeritus historian at Syracuse University who has spent his entire career on these projects (first working on Franklin and now on Madison), noted that the longevity of many of the Founders adds additional challenges – “It’s not surprising,” he noted, “that Alexander Hamilton’s papers are the only ones that have been completed.  The chief editor of the Hamilton papers, Harold Syrett, emphasized long ago that he thought he might dedicate his volumes to Aaron Burr, who made completion of the task possible” (14).  Sometimes this longevity results in vast collections of material – if the microfilmed Adams papers were stretched out (the collection includes the presidential papers but also the materials produced by Henry and Charles Francis Adams) they would extend more than five miles (McCullough, pg. 20).  The actual papers, when not in the custodial care of the Library of Congress, have to be transcribed and proofread on-site at collections often unwilling to let them physically travel.  To take just one example, the Jefferson papers are geographically dispersed over 100 different repositories worldwide (Katz, pg. 18).

Fundraising has always been a challenge despite recent Congressional support.  The projects were intended from the outset to be funded privately, although public funds have also been allocated (the National Endowment for the Humanities started providing project grants in 1994).  Stanley Katz, a Princeton professor and former president of the Organization of American Historians, chairs NHPRC’s fundraising operation, whose major purpose is to raise money for all the Founders projects and thereby free the scholars at work on annotation from that burden; although the organization has raised millions, millions more are needed.  Although federal funding was restored after considerable lobbying, last year’s Bush budget proposal recommended zeroing out the NHPRC altogether.  And the story of the finally failed Pew matching grant, which imposed a probably impossible challenge, is also instructive:  Pew (this according to Katz, pg. 28) gave Founding Fathers Papers, Inc., nine months to come up with the requisite 3-to-1, $30 million match.  When they couldn’t raise that amount of money so quickly, the Pew match was withdrawn.  The model of creating so-called “wasting funds” (large endowments designed to spend down to zero with the completion of a project) makes sense (the strategy was used to complete the Woodrow Wilson papers and is a solution to the threats posed by funding uncertainty), and the Pew impulse to put tight timeframes on creating such funds also makes sense.  But overly optimistic, overly fast timetables can produce wasted effort and final funding failure.

Katz has also warned against the temptation of thinking the projects can simply be scaled up to speed publication:  “These are rather extraordinary works of scholarship.  This is a craft skill, this is not an industrial skill.  It can’t be scaled up in the way that industrial skills can” (12).  Progress has been expedited by splitting up projects so that different parts can be worked on simultaneously; this is the strategy now in use with the Jefferson and Madison papers.  But because this is already the case for most of the series in process, the marginal possibilities for accelerating production are likely not as great as one might imagine.

A common refrain is to call attention to the presumed absurdity of continuing the commitment to expensive hard-copy printing, when many imagine the papers could be scanned, put up on the World Wide Web, and annotated perhaps by the collaborative, Wikipedia-style work of a preselected group of scholars.  In fact, this is already well underway, though the new commitments add major new work to existing teams.  Allen Weinstein, the U.S. Archivist, has committed to online dissemination, and digital commitments go back all the way to 1988, when agreements were made with the Packard Humanities Institute.  Packard continues to plug away along with the University of Virginia Press (the electronic imprint is called Rotunda).  The University of Virginia work also received major support from the Mellon Foundation.  Rotunda, which is receiving no public funds for its work (31), has already posted the papers of Washington and Dolley Madison, with the Adams, Jefferson, Ratification, and James Madison papers slated for online publication by the end of 2009.

But that solution, for anyone who has struggled to put up a respectable website, is a lot more complicated than it may seem.  For one thing, unlike the recent NEH initiative to digitize American historical newspapers, which can be electronically scanned, the handwritten papers of the founders have to be keyed in by hand and then verified one at a time, an exceptionally labor-intensive process.  The publication arrangements that have been made with major university presses make it a challenge to place unannotated material on a website, since doing so would seriously subvert the investments those presses have made in anticipation of a return with publication.  For another, nationally sanctioned authoritative editions need to be handled with great care and with sensitivity to the fast-changing environments of digital presentation, so that money will not be wasted investing in formats that will soon be judged unworthy of the material.  Still, the Library of Congress, which has proprietary control over many of the materials, has already begun significant digitization connected with its American Memory Project (e.g., all the Washington, Jefferson, and Madison papers are available online).  Its position is that it can do the job given more money.

And thus the brilliant, historically incomparable annotated editions of the Founding papers roll out, one expensive volume at a time, inexorably researched and locked in a seemingly never-ending quest for financial support, in the hope that their educational potential for scholars, citizens, and students will not be deferred for yet another half century.

SOURCE:  The Founding Fathers’ Papers:  Ensuring Public Access to Our National Treasures, Hearings before the Senate Judiciary Committee, S. Hrg. 110-334 (Serial No. J-110-72), 7 February 2008.

When social science is painful

The latest issue of the Annals of the American Academy of Political and Social Science (#621, January 2009) is wholly focused on the report on the status of black families authored in 1965 by Daniel Patrick Moynihan (read it here), “the most famous piece of social scientific analysis never published” (Massey and Sampson, pg. 6).  The report arose out of Moynihan’s experience in the Kennedy and Johnson administrations working on poverty policy; his small group of underlings included Ralph Nader, working in his first Washington job.  Inspired by Stanley Elkins’ book Slavery (which argued that slavery set in motion a still-continuing tendency toward black economic and social dependency), Moynihan’s group examined the ways in which welfare policy was, as he saw it, perpetuating single-parent households led mainly by women, at the expense of social stability and racial progress.  [In what follows I am relying almost totally on the full set of essays appearing in the January 2009 AAPSS issue, and the pagination references that follow are to those articles.]

Moynihan was writing in the immediate aftermath of passage of the 1964 Civil Rights Act, and a principal theme of the report is that the eradication of legal segregation would not be enough to assure racial equality given larger structural forces at work.  Pressures on the black family had produced a state of crisis, a “tangle of pathology” that was reinforcing patterns of African-American poverty, he wrote.  Moynihan’s larger purpose was to recommend massive federal interventions, a goal subverted, unfortunately, by the report’s rhetorical overreaching (e.g., matriarchy in black families was said to prevent black men from fulfilling “the very essence of the male animal from the bantam rooster to the four star general… to strut”).  The solution, in his view, was to be found in a major federal jobs program for African American men.

The report was leaked to the press and was, by and large, immediately condemned, first because it seemed to provide aid and comfort to racists in its emphasis on out-of-wedlock births as a demographic pathology, and second because it seemed to many readers a classic case of “blaming the victim.”  In fact, the term “blaming the victim” may have its genesis in William Ryan’s use of the phrase to critique Moynihan in the Nation.  I think it likely that the cultural salience of these critiques was later reinforced by a memo Moynihan wrote to Richard Nixon advocating the idea that “the issue of race could benefit from a period of ‘benign neglect,’” a locution he came to regret since that one soundbite came to dominate the actual point of the memo, better encapsulated in this perspective:  “We need a period in which Negro progress continues and racial rhetoric fades” (contrary to the impression given by the benign neglect comment, he was actually trying to be critical of the hot and racially charged rhetoric coming out of Vice President Agnew).  Moynihan’s report proved divisive in the African American community, endorsed on release by Roy Wilkins and Martin Luther King, Jr., but condemned by James Farmer.  By the time the report itself was more widely read its reception had been distorted by the press frame, and a counter-tradition of research, celebrating the distinctiveness of black community formation, was well underway.

Read by today’s lights, the Moynihan report has in some respects been confirmed, and its critics have been partly vindicated as well.  The essays in this special issue offer many defenses.  Douglas Massey (the Princeton sociologist) and Robert Sampson (chair of sociology at Harvard), both writing in the introduction at pgs. 7-8, defend the report against the accusation of sexism:

Although references to matriarchy, pathological families, and strutting roosters are jarring to the contemporary ear, we must remember the times and context.  Moynihan was writing in the prefeminist era and producing an internal memo whose purpose was to attract attention to a critical national issue.  While his language is certainly sexist by today’s standards, it was nonetheless successful in getting the attention of one particular male chauvinist, President Johnson, who drew heavily on the Moynihan Report for his celebrated speech at Howard University on June 4.

Ironically, though, the negative reactions to the leaked report (which suffered because the report itself was not publicly circulated, only the critical synopses) led Johnson himself to disavow it, and no major jobs program for black men was forthcoming as part of Great Society legislative action.  Moynihan left government soon afterward and found the national coverage, much of which attacked him as a bigot, scarring and unwarranted given the overall argumentative arc of the report.  Only when serious riots erupted again in 1968 did jobs get back on the agenda, but the watered-down affirmative action programs that resulted failed to transform the economic scene for racial minorities while proving a galvanizing lightning rod for conservative opponents (Massey and Sampson, 10).  The main policy change relating to black men since then has been sharp increases in rates of incarceration, not rises in employment or economic stability, a phenomenon which is the focus of an essay by Bruce Western (Harvard) and Christopher Wildeman (University of Michigan).

Several of the contributors to the special issue mainly write to insist that Moynihan has been vindicated by history.  His simple thesis – that pressures tending to disemploy the men of a subgroup will in turn fragment families and produce higher incidences of out-of-wedlock birth and divorce, mainly at the expense of women and children – is explicitly defended as having been vindicated by the newest data.  James Q. Wilson writes that the criticism the report received at the time “reflects either an unwillingness to read the report or an unwillingness to think about it in a serious way” (29).  Harry Holzer, an Urban Institute senior fellow, argues that the trends in black male unemployment have only intensified since the 1960s, thereby reaffirming the prescience of Moynihan’s position and strengthening the need for a dramatic federal response (for instance, Holzer defends the idea that without larger educational investments, destructive perceptions of working opportunities will harden into perceptual barriers to cultural transformation).  The predicate of the essay by Ron Haskins (of the Brookings Institution) is announced by its title:  “Moynihan Was Right:  Now What?” (281-314).

Others argue that the Moynihan claims, which relied on the assumption that only traditional family arrangements can suitably anchor culture, ignore the vitality of alternative family forms that have become more common in the forty years since.  Frank Furstenberg notes that “Moynihan failed to see that the changes taking place in low-income black families were also happening, albeit at a slower pace, among lower-income families more generally” (95).  For instance, rates of single parenting among lower-income blacks have dropped while increasing among lower-income whites.  Linda Burton (Duke) and Belinda Tucker (UCLA) reiterate the criticism that the behavior of young women of color should not be pathologized, but is better understood as a set of rational responses to the conditions of cultural uncertainty that pervade poorer communities (132-148):  “Unlike what the Moynihan Report suggested, we do not see low-income African American women’s trends in marriage and romantic unions as pathologically out of line with the growing numbers of unmarried women and single mothers across all groups in contemporary American culture.  We are hopeful that the uncertainty that is the foundation of romantic relationships today will reinforce the adaptive skills that have sustained African American women and their families across time” (144).  Kathryn Edin (Harvard) et al., criticize Moynihan’s work for diverting research away from actual attention to the conditions of black fatherhood, which in turn has meant that so-called “hit and run” fathers could be criticized in ways that have raced far out of proportion to their actual incidence in urban populations (149-177).

The lessons the AAPSS commentators draw from all this for the practice of academic research are interesting.  One, drawn by Massey, relates to the “chilling effect on social science over the next two decades [caused by the Moynihan report and its reception in the media].  Sociologists avoided studying controversial issues related to race, culture, and intelligence, and those who insisted on investigating such unpopular notions generally encountered resistance and ostracism” (qtd. from a 1995 review in Massey and Sampson, 12). Because of this, and because of the counter-tendency among liberal/progressive scholars to celebrate single parenting and applaud the resilience of children raised in single-parent households, conservatives were given an ideological opening to beat the drum of media reports about welfare fraud, drug usage rates, and violence, and to pathologize black men, an outcome Massey and Sampson argue led to a conservative rhetoric of “moralistic hectoring and cheap sermonizing to individuals (‘Just say no!’).”  Not until William Julius Wilson’s The Truly Disadvantaged (1987) did the scholarly tide shift back to a publicly articulated case for social interventions more in tune with Moynihan’s original proposals.  Writing in the symposium, Wilson agrees with that judgment and traces what he argues has been social science’s abandonment of structural explanations for the emergence of poverty cultures.  The good news, arguably, is that “social scientists have never been in such a good position to document and analyze various elements in the ‘tangle of pathology’ he hypothesized” (Massey and Sampson, 19).

The history of the report also calls attention to the limits of government action, a question with which Moynihan is said to have struggled for his entire career in public service.  Even accepting the critiques of family disintegration leaves one to ask what role the government might play in stabilizing family formations, a question now controversial on many fronts.  James Q. Wilson notes that welfare reform is more likely to shift patterns of work than patterns of family, since, for example, bureaucrats can more reasonably ask welfare recipients to apply for a job than for a marriage license (32-33).  Moynihan’s answer was that the government’s best chance was to provide indirect inducements to family formation, mainly in the form of income guarantees (of the sort finally enacted in the Earned Income Tax Credit).  But asked at the end of his career about the role of government, Moynihan replied:  “If you think a government program can restore marriage, you know more about government than I do” (qtd. in Wilson, 33).

Moynihan was an intensely interesting intellectual who thrived, despite his peculiarities, in the United States Senate (four terms from New York before retiring and blessing Hillary Clinton’s run for his seat), as he had earlier while serving as Nixon’s ambassador to India and Ford’s representative at the United Nations.  At his death in 2003, a tribute in Time magazine said that “Moynihan stood out because of his insistence on intellectual honesty and his unwillingness to walk away from a looming debate, no matter how messy it promised to be.  Moynihan offered challenging, groundbreaking – sometimes even successful – solutions to perennial public policy dilemmas, including welfare and racism.  This is the sort of intellectual stubbornness that rarely makes an appearance in Washington today” (Jessica Reaves, Time, March 27, 2003).  His willingness to defend his views even when they were deeply unpopular gave him a thick skin and the discipline to write big books during Senate recesses while his colleagues were fundraising.

Moynihan’s intellectualism often put him at odds with Democratic orthodoxy, and sometimes, perhaps, on the wrong side of the issue:  he opposed the Clinton effort to produce a national health insurance system, publicly opposed partial-birth abortion (“too close to infanticide”), and was famously complicit in pushing the American party line at the United Nations, a role that has been much criticized as enabling the slaughter of perhaps 200,000 victims killed in the aftermath of Indonesia’s takeover of East Timor.  But he also held a range of positions that reassured his mainly liberal and working-class base:  he opposed the death penalty, the Defense of Marriage Act, and NAFTA, and was a famous champion of reducing the government’s proclivity to classify everything as top secret.

But Daniel Patrick Moynihan will be forever linked to his first and most (in)famous foray into the nation’s conversation on race, which simultaneously revealed the possibilities for thoughtful social science to shape public policy and the risks of framing such research in language meant to make it dramatic and attention-getting amid a glutted sea of white papers and task force reports that typically come and go without any serious notice.

Neil Armstrong’s sublime silence

Over the holiday I had a chance to watch Ron Howard’s elegant documentary about the US-USSR race to the moon, a film that interviews nearly all the surviving astronauts who walked on the moon.  All, that is, but Neil Armstrong, the very first human being to set foot on the lunar surface.  If human beings are still around in 5000 years, and barring a catastrophic erasure of human history, Neil Armstrong’s name will still be known, his serendipitous selection to be the first astronaut to step outside the lunar module at 2:56 UTC on July 21, 1969, will still be celebrated as an astonishing feat of corporate (by which I simply mean massively collective) scientific enterprise, and the one line first spoken from the moon’s surface – “That’s one small step for [a] man, one giant leap for mankind” – will still be recited.  Since more than two-thirds of the world’s population had not yet been born in 1969, perhaps my thought is a naive one; I hope not.

Armstrong has been accused of being a recluse (historian Douglas Brinkley famously described him as “our nation’s most bashful Galahad”), but that descriptor doesn’t quite work.  After all, now 78 years old, Armstrong followed up his service to NASA by doing a USO tour with Bob Hope and then a 45-day “Giant Leap” tour that included stops in Soviet Russia.  For thirteen months he served as NASA’s Deputy Associate Administrator for Aeronautics, and then taught at the University of Cincinnati for eight years.  More recently he has served as a technical consultant on two panels convened to report on space disasters (in the aftermath of the Apollo 13 and Challenger explosions; Armstrong vice-chaired the Rogers Commission investigating the latter).  Armstrong has spoken selectively at commemorative events, including at a White House ceremony recalling the 25th anniversary of the moon walk, at a ceremony marking the 50th anniversary of NASA just a couple of months ago, and at the 2007 opening of a new engineering building at Purdue University (his alma mater) named after him.

So, no, Neil Armstrong is not a recluse in the sense we typically ascribe to monks or the painfully shy.  He is willing to be interviewed (though he does seem to be tough on his own performances, which may explain some of his selectivity in accepting offers – after a 60 Minutes profile in which he participated, he gave himself a C-minus).  He gives speeches.  He has been happy to offer commentary on public policy subjects relating to outer space.  But what he has refused to do is endlessly reflect on what he did that July day.  And I admire him for this, not because others who have been forthcoming and talkative about the experience are to be criticized – their stories are compelling, their histories are worth recalling, and Aldrin and Lovell and the others have been important ambassadors and salesmen for space exploration – but because what Armstrong did, and the event in which he so memorably participated, would be diminished by more talk.

The recognition of this fact is the brilliance of the one line he so famously spoke, which remains a masterfully succinct response to a world historical moment.  Speech was required – the first man to step on the moon had to say something, after all – but too much yammering would have undermined the collective majesty of the moment, and excessive talk after the fact would have done the same.  Can you imagine a thousand years from now school children watching hours upon hours of the alternative, Neil Armstrong in a hundred oral history interviews?  Were you sweating?  Did you burp in your space helmet?  Were your space boots chafing?  As you jumped off the last step did you think you would be swallowed up?  Did you get verklempt?  How do people pee in space?  How did the event compare with taking your marriage vows?  To whom were you dedicating the experience?  Did you hear God’s voice?  If you were, in that moment, a tree, what kind of tree would you have been?

Ugh.  No thank you.  I don’t want to know the infinite and microscopic details, and I don’t think they matter one whit.  The deeply powerful impression created by watching that grainy black-and-white event on a small television, for me as a child three days short of my eighth birthday, remains indelible – pay attention!  watch this!  look out the window – do you see the moon? – those people on the television are actually up there – one small step…  It was late at night (close to 11:00 p.m. on the east coast of the United States) and I was getting tired and grumpy – why weren’t we going home yet? – but when the moment came, I and the estimated 450 million others who also watched live (some estimates range as high as one billion) sat completely absorbed by what we were seeing, and later held our breath to see whether the landing vehicle would escape the moon’s gravity.

And Neil Armstrong, at some deeply personal level, understands all this in a way that may be best analogized to the disappearance of musicians and celebrities who leave the stage and never reappear.  In the television context, think Johnny Carson or Lucille Ball, who knew they could only subvert the quality of their life’s work by reappearing in public for “comeback specials” and all the rest.  (This is why DVDs with nonstop director’s commentary are so often, in my view, a terrible mistake – let the work make its own impression.)  And so Armstrong, since 1994 or so, has stopped signing autographs (he found out they were simply being sold for profit and decided he didn’t want to be involved, a decision that has, paradoxically, only increased their value).  He also hasn’t been arrested for shoplifting or been accused of harassment or even, so far as I know, been caught speeding, any of which would also have diminished his most publicly visible moment of achievement in the space program.

In the words of one writer, “Neil Armstrong achieved one of the greatest goals in human spaceflight but then did not go on to proselytize the faith…  For True Believers in the Cause, this is apostasy, and they resent him for it.”  Thomas Mallon, writing in the New Yorker, seemed to criticize Armstrong (the implicit assumption was that he’s too litigious) because he sued his barber – turns out the guy was cutting his hair and then selling it online.  I think Armstrong was right:  the hair thing was cheap and exploitive and diminished the work.

When Armstrong agreed to participate in the writing of a biography, which appeared in 2006 (James Hansen, First Man: The Life of Neil A. Armstrong, Simon and Schuster), there was a lot of speculation that at last its subject was prepared to go onto the couch, if only to debunk the stories implying there was something creepy about his reluctance to talk all the time to reporters.  In reading the book I am struck by the good choice Armstrong made in settling on a collaborator – Hansen’s book is saturated with information (almost four hundred pages before we even get to Apollo 11), but the information is crisply organized. Hansen resists the temptation to plant thoughts, speculate endlessly about feelings, and so on, and if he pressed Armstrong to undergo psychoanalysis, that doesn’t come across in the narrative.  Some have criticized the short final section (covering the years after the moon landing) as less interesting, and others have found fault with the fact that the book reveals Armstrong’s occasional interpersonal coldness and the toll his career took on his family life.  Only in reading that Armstrong didn’t take souvenirs on the mission for his two sons did I start to think this is too much information.  But I found myself wondering if his notorious interpersonal coolness is also what made him a perfect astronaut – ice in the veins, cool under pressure, and all that.

Neil Armstrong is no Superman.  He was one of a thousand military men who might have served as the public face of the mammoth and expensive engineering triumph that achieved spaceflight, and had he come down with the flu it would probably be Buzz Aldrin we most remember today.  And so my point is not to celebrate his relative silence because it creates a mythology.  To the contrary, what I admire about Armstrong’s long refusal to be daily feted and interrogated about July 21 is that as he recedes, the work is allowed to dominate the scene.  In the eloquence of the first sentence he spoke from the lunar surface, and in his silence about that experience since, the sublime accomplishment of this supreme national effort is best recollected.

Oh, and one other thing:  Armstrong donated the royalties from the biography to Purdue, to be used to build a space program archive there.

Perfect.
