Amateur Humanist


Thalberg: Making the piano sensitive

Sigismund Thalberg’s piano performance tour of the United States prior to the Civil War came at a key point in the nation’s cultural emergence on the world scene.  By the 1830’s the United States’ cultural and social elite knew the musical works of Haydn, Beethoven, Mozart, and Bach, but the acquired tastes of the refined classical tradition had not reached the American masses, and European musical refinement was often caricatured.  While New York City’s Philharmonic had been organized as a voluntary association in 1842, even by the late 1850’s (on the brink of the Civil War) efforts to expand concerts into mid-day matinees struggled to find audiences.  It wasn’t until 1881 that “the Boston Symphony became the nation’s first permanent full-time orchestra” (Horowitz).  Meanwhile, as the director of the Paris Opera put it in the 1840’s, in language characteristic of the European prejudice:  “We look upon America as an industrial country – excellent for electric telegraphs and railroads but not for Art.”

The mismatch between American and European musical sensibilities created a mutual cycle of mistrust and disparagement that did not really begin to crumble until Anton Rubinstein’s tour of the States in the 1870’s.  Before that tour, impresarios presenting European artists often appealed to their publics with wildly ramped-up publicity machines, in ways more reminiscent of circus promotion (as when P.T. Barnum debuted Jenny Lind as an “angel” descended from heaven).  But the irony of the Paris Opera director’s comment is that it was precisely America’s midcentury industrialization that enabled its cultural transformation.  The frenetic pace of Thalberg’s American concertizing was only possible because he was able to perform in the evening and then often travel all night on the trains, on a rapidly expanded and precisely organized transportation system that gave predictability to one’s efforts to reach small towns like Kenosha and Sandusky and Natchez and Atlanta.

Thalberg’s tour – his first American concert was in November 1856 in New York City; his last, before a sudden and still unexplained return to Europe, took place in Peoria in June 1858 – was a noticeable contrast to the earlier hype surrounding Jenny Lind, partly because his reputation as a European master needed little exaggeration.  Training in piano had already emerged as a marker of middle-class respectability, and Thalberg’s sheet music was well known to American music students.  News of Thalberg’s status as Europe’s preeminent pianist, second only to Franz Liszt, had long been in circulation by the time of his arrival (and of course Liszt’s refusal to tour the United States left the stage to Thalberg’s crowd-pleasing sensibilities).

The generally exuberant reaction Thalberg received from American audiences reiterated the enthusiasm Europe had shown for his virtuosity twenty years earlier.  Reacting to a Parisian performance given in 1836, one reviewer described Thalberg this way:  “From under his fingers there escape handfuls of pearls.”  As time passed, the early rapture was revved into a feud, where partisans of Liszt (most notably Berlioz) took sides against partisans of Thalberg (most notably Mendelssohn).  The whole thing came to a head in a famously reported Paris recital where both Liszt and Thalberg played.  The March 1837 event, where tickets sold for charity cost an extravagant $8 apiece, featured each playing one of his famous fantasias:  Liszt played his volcanic transcription of Pacini’s Niobe and Thalberg his subtler version of Rossini’s Moise.  Although history has given the verdict to Liszt, the outcome was judged a close call at the time.  Some viewed the contest as a tie.  A 1958 account by Vera Mikol argued that the winner was, “in the eyes of the ladies, Thalberg; according to the critics, Liszt.”

Thalberg’s reputation, though sustained by sold-out European performances, faded on the continent even as his worldwide reach (with concert tours in Russia, Holland, Spain, and Brazil) expanded.  Robert Schumann was notoriously hostile, and in his writing used the phrase “à la Thalberg” as a slur to describe lightweight compositions.  Mendelssohn remained an admirer.  The sparkling romanticism of Thalberg’s compositions made them simultaneously popular (and stylistically imitated) and critically panned.  It is evidence of both impulses that when Jenny Lind launched her American tour, the concert opened with a two-piano performance of Thalberg’s transcription of Norma.

But what made Thalberg extraordinary was not necessarily best displayed in his compositions, and this is undoubtedly why his reputation has so seriously abated.  Even in his day his compositions were often criticized for their repetitive impulse to showcase technique.  The key, for his audiences, was a compositional trick popularized and perhaps even invented by Thalberg and imitated everywhere:  the melodic line was switched thumb to thumb while the other fingers ranged widely across arpeggios above and below in an effect that made the player sound as if he had three hands.  Audiences were so impressed with this illusion that in some cities they reportedly stood up to get a better glimpse of his hands on the keys.

The key for his admiring critics, meanwhile, lay in his technique.  For Mendelssohn, Thalberg “restores one’s desire for playing and studying as everything really perfect does.”  Ernest Legouvé wrote that “Thalberg never pounded.  What constituted his superiority, what made the pleasure of hearing him play a luxury to the ear, was pure tone.  I have never heard such another, so full, so round, so soft, so velvety, so sweet, and still so strong.”  Arthur Pougin, memorializing Thalberg at his death, said it was he “who, for the first time we had seen, made the piano sensitive,” which was to say that, in the eyes of other players, he had mastered the art of pressing against the limits of the instrument so as to make it sound most like the singing human voice.  It was this that Thalberg himself sought to highlight when he titled his own piano text L’Art du chant appliqué au piano, or The Art of Song Applied to the Piano.

Academic debate continues about the role of Thalberg and the other early European virtuosos who toured the states.  Some defend him as representing the necessary first step in tutoring America in musical sophistication, all while softening the more difficult numbers with crowd-pleasing fantasias that classed up songs like “The Last Rose of Summer” and “Home Sweet Home.”  Others render a harsher judgment – Mikol closed her 1958 essay on Thalberg with this highly negative assessment:  “we should not underestimate the part he played a hundred years ago in delaying our musical coming-of-age.”  Part of the sharp discrepancy relates to differing views of the emergence of high culture.  R. Allen Lott’s From Paris to Peoria credits the impresario experience with laying the groundwork for a richer American culture, where the audience experience of the classical repertoire was “sacralized” over time.  Contra Lawrence Levine and others, who have argued that this period shows how cultural norms were imposed by rich elites as a method of bringing the masses under disciplined control, Lott and Ralph Locke credit not elite control but a widely shared eagerness for intensive aesthetic experiences that transcended class divisions.  Still others point to the deeply gendered responses this sort of musical performance elicited – women were not allowed entry into the evening theatre without a male escort, and so Thalberg and others added afternoon matinees where women could attend unaccompanied.

Today Thalberg is mainly forgotten – my own interest in him came from seeing one of his works performed on campus two months ago by a Mississippi musicologist – but in towns across America he and the other touring virtuosos provoked both class antagonisms and the enthralled reactions of listeners all but “slain in the spirit,” which makes it difficult to judge either scholarly perspective uniquely correct.  In Boston a huge controversy erupted when Thalberg’s manager sought to limit ticket sales to the upper class (by briefly requiring patrons to provide a “correct address”); the papers had a field day about foreign snobbery and defended music for its democratizing potential.

Meanwhile, audiences were enthralled and carried to heights of emotional ecstasy by the actual concerts; the press accounts often noted that listeners wept.  Thalberg managed to achieve this response, amazingly, through pure technique and without resort to the usual theatrics; as a Boston reviewer put it, “no upturning of the eyes or pressing of the hand over the heart as if seized by a sudden cramp in that region, the said motions caused by a sudden fit of thankfulness.”  Others, sometimes in small towns but even in New York City, already the nation’s cultural capital, reacted with disdain, a fact that led one of the city’s preeminent critics to ask, “Why will mere glitter so far outweigh solid gold with the multitude?”  Still others attended not to hear the music but to display their social status.

Such reactions persist to this day in the nation’s symphony halls, but even as audiences reproduce the prejudices of their time, it is hard not to be moved by the more singular reaction of that same New York correspondent who, upon hearing Thalberg play the opening of Beethoven’s Emperor, said that even as “it fell dead upon the audience, …I drank it in as the mown grass does the rain.  A great soul was speaking to mine, and I communed with him.”

SOURCES:  R. Allen Lott, From Paris to Peoria:  How European Piano Virtuosos Brought Classical Music to the American Heartland (Oxford:  Oxford University Press, 2003); Vera Mikol, “The Influence of Sigismund Thalberg on American Musical Taste, 1830-1872,” Proceedings of the American Philosophical Society 102.5 (20 October 1958), pgs. 464-468; Joseph Horowitz, online review of Vera Lawrence’s Strong on Music:  The New York Music Scene in the Days of George Templeton Strong, vol. 3 (Chicago:  University of Chicago Press, 1999), at http://www.josephhorowitz.com; Lawrence Levine, Highbrow/Lowbrow:  The Emergence of Cultural Hierarchy in America (Cambridge, Mass.:  Harvard University Press, 1988); Ralph Locke, “Music Lovers, Patrons, and the ‘Sacralization’ of Culture in America,” 19th-Century Music 17 (Fall 1993), pgs. 149-173; E. Douglas Bomberger, “The Thalberg Effect:  Playing the Violin on the Piano,” Musical Quarterly 75.2 (Summer 1991), pgs. 198-208.

The importance of watching

I’m not quite finished with it yet, but Paul Woodruff’s recent The Necessity of Theatre: The Art of Watching and Being Watched (Oxford:  Oxford University Press, 2008) makes a compelling case for treating theatre as central to the human experience.  Woodruff’s point is not to reiterate the now-familiar claim that theatrical drama importantly mirrors human experience, although I assume he would agree with thinkers like Kenneth Burke (who insisted in his own work that theatricality was not a metaphor for human life, but that our interactions are fundamentally dramatically charged).  Rather, theatre, which he defines (repeatedly) as “the art by which human beings make or find human action worth watching, in a measured time and place” (18), enacts much of what is basic to human sociability.

Theatre and life alike are about watching and the maintenance of appropriate distance, and about the way collective observation validates human interaction (as when public witness validates a marriage ceremony, or makes justice, itself animated by witnesses, collectively persuasive).

The book is a little frustrating – Woodruff is a philosopher, and the book starts by discovering the river (and making its boldest claim up front) and then guiding the reader through all the connected tributaries, and that can be a little tedious when the journey starts to feel less like a riverboat cruise and more like navigating sandbars.  That is, the project proceeds too fully as a definitional typology of theatre, an approach that performatively contradicts the most important thing about theatre itself:  finding audiences and keeping them interested.  Woodruff also has a tendency to keep announcing how important his claims are:  “Formally, however, I can point out already that [my] definition has an elegance that should delight philosophers trained in the classics” (39).  “This is bold” (67).  “My proposed definition of theatre is loaded” (68).  And so on.

But along the way Woodruff says a lot of interesting things.  Some examples:

•  “Justice needs a witness.  Wherever justice is done in the public eye, there is theatre, and the theatre helps make the justice real” (9).  

•  “People need theatre.  They need it the way they need each other – the way they need to gather, to talk things over, to have stories in common, to share friends and enemies.  They need to watch, together, something human.  Without this…, well, without this we would be a different sort of species.  Theatre is as distinctive of human beings, in my view, as language itself” (11).

•  “Politics needs all of us to be witnesses, if we are to be a democracy and if we are to believe that our politics embody justice.  In democracy, the people hold their leaders accountable, but the people cannot do this if they are kept in the dark.  Leaders who work in closed meetings are darkening the stage of public life and they are threatening justice” (23).

•  “The whole art of theatre is the one we must be able to practice in order to secure our bare, naked cultural survival” (26).

•  “A performance of Antigone has more in common with a football game than it does with a film of Antigone” (44).

I began by cheating, I suppose, by reading the epilogue, where Woodruff notes:  “I do not mean this book to be an answer to Plato and Rousseau…, because I think theatre in our time is not powerful enough to have real enemies.  Theatre does have false friends, however, and they would confine it to a precious realm in the fine arts.  We need to pull theatre away from its false friends, but we have a greater task.  We need to defend theatre against the idea that it is irrelevant, that it is an elitist and a dying art, kept alive by a few cranks in a culture attuned only to film and television.  I want to support the entire boldness of my title:  The Necessity of Theatre” (231).

Remembering Harold Pinter

Several of the obituaries for Harold Pinter, the Nobel Prize-winning playwright who died on Christmas Eve, see the puzzle of his life as centered on the question of how so happy a person could remain so consistently angry.  The sense of anger – or perhaps sullenness is the better word – arises mainly from the diffidence of his theatrical persona, the independence of his best characters even, as it were, from himself, and of course his increasingly assertive left-wing politics.  The image works, despite its limitations, because even as cancer left him gaunt in recent years he remained active and in public view, becoming something of a spectral figure.  And of course many who were not fans of his theatrical work (from the hugely controversial Birthday Party, to the critically acclaimed Caretaker, and then further forays into drama and film) mainly knew him through his forceful opposition to Bush and Blair, their Iraq policies, and the larger entanglements of American empire.

But Pinter, and this is true I think of all deeply intellectual figures, cannot be reduced to the terms provocateur or leftist.  In this case, to be sure, simple reductions are wholly inadequate to the task given his very methods of work:  one of his most abiding theatrical legacies is his insistence that dramatic characters are inevitably impenetrable – they owe us no “back story,” nor are their utterances ever finally comprehensible, any more than are our interactions in the real world of performed conversation.  And so Pinter set characters loose who even he could not predict or control, an exercise that often meant his productions were themselves angering as audiences struggled to talk sense into the unfolding stories. As the Economist put it, “his characters rose up randomly… and then began to play taunting games with him.  They resisted him, went their own way.  There was no true or false in them.  No certainty, no verifiable past…  Accordingly, in his plays, questions went unanswered.  Remarks were not risen to.”

So what does all this say about the ends of communication?  For Pinter they are not connected to metaphysical reflection or understanding (this was Beckett’s domain; it is somehow fitting that Pinter’s last performance was in Beckett’s Krapp’s Last Tape, played from a wheelchair), but to simple self-defense, a cover for the emptiness underneath (Pinter: “cover for nakedness”), a response to loneliness where silence often does just as well as words.  And so this is both a dramatic device (the trait that makes a play Pinteresque) and a potentially angering paradox:  “Despite the contentment of his life he felt exposed to all the winds, naked and shelterless.  Only lies would protect him, and as a writer he refused to lie.  That was politicians’ work, criminal Bush or supine Blair, or the work of his critics” (Economist).  Meanwhile, the audience steps into Pinter’s worlds as if into a subway conversation; as Cox puts it, “The strangers don’t give you any idea of their backgrounds, and it’s up to the eavesdropper to decide what their relationships are, who’s telling the truth, and what they’re talking about.”

The boundaries that lie between speaking and silence are policed by timing, and Pinter once said he learned the value of a good pause from watching Jack Benny perform at the Palladium in the early 1950’s.  One eulogist recalls the “legendary note” Pinter once sent to the actor Michael Hordern:  “Michael, I wrote dot, dot, dot, and you’re giving me dot, dot.”  As Siegel notes:  “It made perfect sense to Hordern.”  The shifting boundaries of communication, which in turn provide traces of the shifting relations of power in a relationship, can devolve into cruelty or competition where both players vie for one-up status even as all the rest disintegrates around them.  As his biographer, Michael Billington, put it, “Pinter has always been obsessed with the way we use language to mask primal urges.  The difference in the later plays is not simply that they move into the political arena, but they counterpoint the smokescreen of language with shocking and disturbing images of torture, punishment, and death.”  At the same time – and this because Pinter was himself an actor and knew how to write for actors – the written texts always seemed vastly simpler on paper than in performance, and this is not because simple language suggests symbolic meaning (Pinter always resisted readings of his work that found symbolic power in this or that gesture) but because the dance of pauses and stutters and speaking ends up enacting scenes of apparently endless complexity.

For scholars of communication who attend to his work, then, Pinter poses interesting puzzles, and even at their most cryptic his plays bump up against the possibilities and limits of language.  One such riddle, illuminated in an essay by Dirk Visser, is that while most Pinter critics see his plays as revealing the failures of communication, Pinter himself refused to endorse such a reading, which he said misapprehended his efforts.  And as one moves through his pieces, the realization slowly emerges (or, in some cases, arrives with the first line) that language is not finally representational of reality, nor even instrumental (where speakers say certain things to achieve certain outcomes).  Pinter helps one see how language can both stabilize and unmoor meaning, even in the same instant (this is the subject of an interesting analysis of Pinter’s drama written by Marc Silverstein), and his work both reflects and straddles the transition from modernism to postmodernism he was helping to write into existence (a point elaborated by Varun Begley).

His politics were similarly complicated, I think, a view that runs contrary to the propagandists who simply read him as a leftist traitor, and a fascist at that.  His attacks on Bush/Blair were often paired in the press with his defense of Milosevic, as if together they implied a sort of left-wing fascism in which established liberal power is always wrong.  But his intervention in the Milosevic trial was not to defend the war criminal but to argue for a fair and defensible due process, and this insistence on the truth of a thing was at the heart of his compelling Nobel address.  Critics saw his hyperbole as itself a laughable performative contradiction (here he is, talking about the truth, when he hopelessly exaggerates himself).  I saw a long interview done with Charlie Rose, replayed at Pinter’s death, where Rose’s impulse was to save Pinter from this contradiction, and from himself (paraphrasing Rose):  “Surely your criticism is not of all the people in America and Britain, but only made against particular leaders.”  “Surely you do not want to oversimplify things.”  Pinter agreed he was not accusing everyone of war crimes but also refused to offer broader absolution, since his criticism was of a culture that allowed and enabled lies as much as of leaders who perpetuated them without consequence.  Bantering with Rose, that is to say, he refused to take the bait, and the intentional contradictions persisted.  His Nobel speech (which was videotaped for delivery because he could not travel to Stockholm, and is thus available for viewing online) starts with this compelling paragraph:

In 1958 I wrote the following:  “There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false.  A thing is not necessarily either true or false; it can be both true and false.”  I believe that these assertions still make sense and do still apply to the exploration of reality through art.  So as a writer I stand by them but as a citizen I cannot.  As a citizen I must ask:  What is true?  What is false?

What was so angering for many was Pinter’s suggestion that the American leadership (and Blair too) had committed war crimes that had first to be recognized and tallied, and the perpetrators then held to account:

The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile.  The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven.  Hundreds of thousands of deaths took place throughout these countries.  Did they take place?  And are they in all cases attributable to US foreign policy?  The answer is yes they did take place and they are attributable to American foreign policy.  But you wouldn’t know it.  It never happened.  Nothing ever happened.  Even while it was happening it wasn’t happening.  It didn’t matter.  It was of no interest.  The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them.  You have to hand it to America.  It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good.  It’s a brilliant, even witty, highly successful act of hypnosis.

The argument is offensive to many (when the Nobel was announced, the conservative critic Roger Kimball said it was “not only ridiculous but repellent”), though for a playwright so attentive to obscuring masks and the sometimes savage operations of power they hide, it is all of a piece.  McNulty:  “But for all his vehemence and posturing, Pinter was too gifted with words and too astute a critic to be dismissed as an ideological crank.  He was also too deft a psychologist, understanding what the British psychoanalyst D. W. Winnicott meant when he wrote that ‘being weak is as aggressive as the attack of the strong on the weak’ and that the repressive denial of personal aggressiveness is perhaps even more dangerous than ranting and raving.”

As the tributes poured in, the tension between arrogance (a writer refuses to lie) and humility (he felt exposed to all the winds, naked and shelterless) arose again and again.  The London theatre critic John Peter gets at this when he passingly notes how Pinter “doesn’t like being asked how he is.”  And then, in back-to-back sentences:  “A big man, with a big heart, and one who had the rare virtue of being able to laugh at himself.  Harold could be difficult, oh yes.”  David Wheeler (at the ART in Cambridge, Massachusetts):  “What I enjoyed [of my personal meeting with him] was the humility of it, and his refusal to accept the adulation of us mere mortals.”  Michael Billington:  “Pinter’s politics were driven by a deep-seated moral disgust… But Harold’s anger was balanced by a rare appetite for life and an exceptional generosity to those he trusted.”  Ireland’s Sunday Independent:  “Pinter was awkward and cussed… It was the cussedness of massive intellect and a profound sense of outrage.”

Others were more unequivocal.  David Hare:  “Yesterday when you talked about Britain’s greatest living playwright, everyone knew who you meant.  Today they don’t.  That’s all I can say.”  Joe Penhall:  Pinter was “my alpha and beta…  I will miss him and mourn him like there’s no tomorrow.”  Frank Gillen (editor of the Pinter Review):  “He created a body of work that will be performed as long as there is theater.”  Sir Michael Gambon:  “He was our God, Harold Pinter, for actors.”

Pinter’s self-selected eulogy conveys, I think, the complication – a passage from No Man’s Land – “And so I say to you, tender the dead as you would yourself be tendered, now, in what you would describe as your life.”  Gentle.  Charitable.  But also a little mocking.  A little difficult.  And finally, inconclusive.

SOURCES:  Beyond Pinter’s own voluminous work, of course – Marc Silverstein, Harold Pinter and the Language of Cultural Power (Bucknell UP, 1993); Varun Begley, Harold Pinter and the Twilight of Modernism (U Toronto P, 2005); “Harold Pinter,” Economist, 3 January 2009, pg. 69; Ed Siegel, “Harold Pinter, Dramatist of Life’s Menace, Dies,” Boston Globe, 26 December 2008, pg. A1; John Peter, “Pinter:  A Difficult But (Pause) Lovely Man Who Knew How to Apologise,” Sunday Times (London), 28 December 2008, pgs. 2-3; Gordon Cox and Timothy Gray, “Harold Pinter, 1930-2008,” Daily Variety, 29 December 2008, pg. 2; Charles McNulty, “Stilled Voices, Sardonic, Sexy:  Harold Pinter Conveyed a World of Perplexing Menace with a Vocabulary All His Own,” Los Angeles Times, 27 December 2008, pg. E1; Dirk Visser, “Communicating Torture: The Dramatic Language of Harold Pinter,” Neophilologus 80 (1996): 327-340; Matt Schudel, “Harold Pinter, 78,” Washington Post, 26 December 2008, pg. B5; Michael Billington, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 15; Esther Addley, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 14; Frank Gillen, “Farewell to an Artist, Friend,” St. Petersburg Times (Florida), 4 January 2009, pg. 4E; “Unflagging in His Principles and Unrivalled in His Genius,” Sunday Independent (Ireland), 28 December 2008; Dominic Dromgoole, “In the Shadow of a Giant,” Sunday Times (London), 28 December 2008, pgs. 1-2; Mel Gussow and Ben Brantley, “Harold Pinter, Whose Silences Redefined Drama, Dies at 78,” New York Times, 26 December 2008, pg. A1.

The lessons derived from aging backward

I enjoyed seeing The Curious Case of Benjamin Button, not because the film finally coheres into a memorable totality but because the sum of the parts ends up actually greater than the whole, with vivid moments that linger after the grand narrative arc fades.

The premise on which the story is based, the idea of an anomalous child born physically old who dies decades later after complete disappearance into infancy, is constrained by several challenges, some of which are skillfully handled (the old boy grows up in a retirement home, and so attracts no special notice) and others of which strain credulity (including the fact that his former lover, having been essentially abandoned by him, ends up providing years of now-maternal attention without her new husband or daughter ever thinking to ask about the young child for whom the mother now cares as she takes up residence in the same rest home).  [I am still unpersuaded of the narrative plausibility of this turn of events – the film implies that the cognitively vacant infant (old) Benjamin reconnects because Daisy’s name is all over his diary, but everything in his prior behavior and his decision to leave makes implausible the idea that he would want the diary to serve as a child’s name tag, enabling a return to her or the imposition of his own care onto her later life.  And if, by the end, Benjamin can’t remember who he is or anything about his past, why should we believe that his journals and scrapbooks would have been so well preserved?]

His biological father follows Benjamin’s development from a distance but, oddly, when they reconnect at the dying father’s invitation, not much time is spent dwelling on the biological mysteries of the reverse aging.  The fact that Benjamin’s strange trajectory is never discovered (in contrast to the original story, where the Methuselah tale makes the papers) allows the terribly abbreviated end-of-life sequences a kind of melancholic privacy – the teen-then-boy-then-infant never raises anyone’s interest and no one apparently ever connects the dots, but the benefit of this is that the more mundane moments of early/late life take on an unexpected sadness, such as the quiet passing observation noting the moment when the boy loses his capacity for speech.  It hadn’t really occurred to me until that instant that this would be so haunting a moment.

The idea that old age is a sort of reversion to infancy is cruel, and apart from those whose physical or mental infirmities cause total end-of-life dependency on others, I always find myself repelled by and even afraid of the mentality that leads minimum-wage nursing home attendants to treat their clients as addled or stupid.  The idea of ending my life in a nursing home is less jarring to me, per se, than is the idea that, having lived a life of growth and experience and (hopefully ongoing) intellectual stimulation, I would be reduced finally to having some 20-something screaming at me to finish my oatmeal.  I am skeptical that any death is a good one, and I know many end-of-life professional caregivers are angels in disguise, but it is the possibility of old-age condescension as much as isolation that terrifies me.  But Button, despite his final senility, is able to die a good death, lovingly cared for to the end in a mode of caregiving that recalls what caring for someone with Alzheimer’s must entail:  is the final gleam in his infant eyes a sort of final cognitive reaching out or just the last biological gasp?  And are the senile child’s sad efforts to remember the piano he played for so long a failure or a final point of human contact, or both?

Button’s awful choice to abandon his Daisy and child at a time that seems far too early – after all, Pitt is in his prime, and would any child really think to notice that her father is getting younger all the time for several more years? – raises questions larger than the unique temporal disconnection haunting Benjamin’s relationship to the people he loves.  Evoked are the larger ways in which so much of human destiny is shaped by the randomness of timing and the disconnections that keep people apart.  The audience is rather beaten over the head with this theme, especially in the backwards-edited scene derailing the end of Daisy’s performing career, but it pops up everywhere.  And even with respect to Daisy the issues raised by disconnection are interesting – a scene where she and Benjamin have physically moved within the realm of sexual plausibility ends instead in a disappointing failure to connect, produced not by their respective ages or even by their sexual histories (which by this point in the midnight park have come into sync), but by the awkwardness of seeing a child-as-lover (for Benjamin in this moment an unbridgeable chasm).

The film is bracketed at both ends by disasters – World War I and Katrina – and their denouement, but their visual enactment is oblique and produces its own temporal reversals.  World War I, today remembered (if at all) as a war of horrifically widespread and anonymous slaughter, is reenacted through the very particular personal drama of a blind clockmaker and his wife who lose their beloved child to battle, and to the extent the war evokes mass drama it is the exuberance of its conclusion more than the horror of its killing machines that we witness.  And Katrina, which we remember in part for the indignities of the preventable deaths it caused, is here recalled within the confines of a hospital that, while impersonal (under threat of the advancing storm), is also a place of close and immediate care.  [Tangent:  Button is an example of how the twists and turns of film industry production can have significant consequences.  The movie only got made, according to an account in the New York Times, because Louisiana offers big movie tax breaks to production companies.  This, in turn, caused the story to shift to New Orleans, and this has yielded a film wholly unimaginable in its originally anticipated location of Baltimore, the setting of the original short story.]

Benjamin, born on the day of the Armistice, is raised and dies in a house wholly comfortable with frequent death, an upbringing at odds with a contemporary milieu where even adults are so often separated from end-of-life experiences that when those experiences finally start to happen to friends and family, their accompanying rituals and significance seem all the more jarring and derailing.  A baby in an old man’s body, physically Caucasian but raised by parents of color, made mightily richer by the manufacture of something as tiny as a button, a boy who attends a faith healing where the fake patter inspires him to walk without changing him physically (one might say the lie actually has healing power) but kills the minister, an American who works many years of the 20th century in Russia or on the water and who wins his own battle with the Nazi subs not on a carrier but on a tugboat, a man who for much of the story seems not self-reflective at all but (it is revealed) kept a detailed daily journal for most of his life – much more than time is narratively reversed.  The familiar is thus made strange.

The opening of the film, with its clock-that-runs-backward allegory, is intriguing too.  The idea of God as a kind of watchmaker who has set into motion a universe of logically connected causes-and-effects and who is the lord of time itself was already in circulation 50 years before Darwin published Origin of Species, now 150 years old, and provides a persistent commonsensical response elaborated today by Intelligent Design Theory.  Read this way, one might see Benjamin’s magical appearance on earth as a divine effort to awaken our sensibilities and unnerve our comfortable sense of time passing.

Or one might take an opposite tack.  It was Richard Dawkins back in the 1980’s who worked to turn the designer idea on its head, arguing for a Blind Watchmaker, which is to say the idea that a universe may be ordered in ways reflective not of a central intelligence but rather of a universally available process (here, natural selection).  In Benjamin Button the blind clockmaker is visually but not cognitively impaired, and his grand backward-running clock is not an error but a commemoration of possibilities lost.  Read this way, Benjamin’s case is more curious than compelling, evidence of the oddities produced by evolutionary caprice.

The F. Scott Fitzgerald short story (written in 1922) on which the film is very loosely based reads more like a fable on medicalization (part of the problem, it seems, may be that in 1860 the Buttons decide to have the birth in a hospital instead of at home) than the allegory of aging and dying that structures the film.  And in the story Benjamin is born talking, and with not just the body of an old man but his sensibilities too (“See here, if you think I’m going to walk home in this [baby] blanket, you’re entirely mistaken,” he says hours after birth).  It is inevitably mentioned that the film bears virtually no relationship to the Fitzgerald story; having just read it, I think this fact is to the credit of the film, whose melancholic aftertaste is far sweeter than the sense of absurdity and only occasional sadness induced by Fitzgerald’s original tale.

The death of the literary critic

I’ve just finished Rónán McDonald’s little book, The Death of the Critic (London: Continuum, 2007), the broad point of which is to decry the diminution of the literary critical role in society that was formerly occupied by well-trained readers like Lionel Trilling, Matthew Arnold, and F.R. Leavis, and by writers who also produced criticism, like T.S. Eliot and Virginia Woolf and Susan Sontag.  Criticism has been democratized by the blogosphere, mostly in ways McDonald sees as insidious; as he puts it, We Are All Critics Now (4).   And academic attention to literature, he argues, has been dominated by cultural studies perspectives that mostly insist on reading novels as symptoms of capitalism or patriarchy or racism, and in ways that have made criticism less linguistically accessible to a wider readership.  To those who might counter that criticism is more ubiquitous than ever, and who might immediately think of the New York and London book review publications and others, McDonald replies, but “how many books of literary criticism have made a substantial public impression in the last twenty years?”  “Academics in other subjects with a gift for popularizing their subject, like Richard Dawkins and Stephen Hawking, Simon Schama and A.C. Grayling, command large non-academic audiences and enjoy high media profiles.  However, there are very few literary critics who take on this role for English” (3).

McDonald sidesteps a lot of the traps characterizing other work critiquing academic literary studies.  He is not defending a return to a Great Books Canon or to the pure celebration of high culture.  His review of the historical debates over the value of criticism makes clear that he grasps the complexities in the longer tradition.  He is not hostile to Theory, but rather sees it as having made important contributions that can now be superseded, not because theory should be rejected but because its central insights have been mainly and rightly accepted.  McDonald sees the value in the proliferation of critical methods (genre, psychoanalytic, Marxist, formalist, semiotic, New Historicist) even as he argues that this expansion was mainly driven by the demands of 20th-century university culture to devise rigorous quasi-scientific perspectives.  He does not by and large (a notable exception is at pgs. 127-129) disparage cultural studies either substantively or by painting with too broad a brush (in fact, he spends some time defending Raymond Williams as doing the very kind of theoretically informed but also interesting work he would like to see more of).  And he is not finally a doomsayer about the culture; in fact the book closes with a sense of optimism that the attention to literary aesthetics he desires is making a sort of comeback.

Having said all this, McDonald still takes a pretty hard line, especially with respect to the culture war debates of the last half century, which in his view too readily dispatched even the merits of a long tradition of debate over the rightful role of criticism.  He thinks Matthew Arnold has been cartooned, at the expense of his insights about the way an intelligent culture of criticism can produce more interesting art.  Arnold’s defense of critical “disinterestedness,” he notes, has been almost absurdly distorted.  The quote most often used to beat Arnold over the head (that criticism’s role is “to make the best that has been thought and known in the world everywhere,” a sentiment that reads like pure colonialism) is usually cited without its introduction, which says that true culture “does not try to reach down to the level of inferior classes but rather seeks to do away with classes; to make the best…”  The correction obviously doesn’t let Arnold off the hook, but read against the grain of the broader prejudices of his time, his perspective elaborates a more compelling vision for criticism and the capacity of art to undo elitism than a reading that sees him as simply advocating snobbery.

The case against the blogs and the kind of “thumbs up” criticism that characterizes so much newspaper book reviewing and the Oprah Book Club is for McDonald situated in his recognition that the institutional practice of criticism arose under peculiar circumstances that are now being transformed.  As capitalism developed (and here he is following Habermas’ claims about the short-lived emergence of a bourgeois public sphere) and industrialization created new middle classes with leisure time and an interest in cultural elevation, a demand was created for sophisticated taste makers.  There is a tendency today to forget how radically democratic these impulses were:  “this early development was an intellectual movement from below, a way of appropriating and redistributing cultural authority from the aristocracy and land-owning classes” (54).

What is today at risk, in McDonald’s perspective, is the essential role critics can play in challenging popular preconceptions and making the world safe for difficult artworks as they defend or enact idiosyncratic perspectives and nudge or argue audiences toward controversial but potentially essential ways of seeing.  This role requires critics who are educated to the possibilities of literary and artistic generation and who are willing to make and defend evaluative judgments about what art is worthwhile or worthless.  His attack on the bloggers and academic critics is that they either insist on reading new work through existing prejudices or refuse to make evaluative claims at all, not wanting to seem elitist or read as disparaging popular culture.  Critical practice has thus been transformed from offering acts of thoughtful judgment into offering acts of clever insight, where the question implicitly answered is not so much what makes this work aesthetically rich and worth your time? and more did you notice such-and-such about this novel/TV show/film?  Skills of observation are thus elevated over skills of interpretation, and the outcomes of critical engagement are more likely to center on how interesting (or not) a text is, at the expense of how engagement with it might better educate its audience.  Taste has trumped judgment, and the demand for books is more than ever driven by the marketing of a dwindling number of books and the ever-tightening circle of I saw Ann Coulter on Fox and she was nasty and funny and so I think I’ll buy her nasty and funny new book.

McDonald does not do enough to specify exactly what sort of criticism he seeks.  He argues for criticism that makes aesthetic judgments and dismisses those who simply connect novels to the broader culture, but he seems to celebrate Virginia Woolf for doing the very thing he dislikes (in fairness to McDonald, he tries to defend Woolf as striking a sensitive balance between these tendencies).  He argues that criticism that takes an evaluative stand will attract readers, but the argument slides around a bit:  at pg. 130, where this claim is articulated, he starts by noting that boring academic writing turns readers off.  Then he says “those critics who examined popular culture alert to its pleasures found the wider public more ready to listen to what they had to say,” though that seems to imply that audiences are best found when one cheerleads (a position I take as antithetical to his larger purposes).  And then he shifts into a case for critics who write “about the value and delights of art” (note how evaluative judgment, which so far did not figure in his account of attracting readers, is now slipped back in).  But it isn’t clear how critics who defend judgments are supposed to attract audiences in a world where enthusiastic reviews are likely to be more contagious than briefs for the defense.

But even if the cure is underspecified, I found it hard not to be persuaded by McDonald’s broader diagnosis, and the case for more fully reconnecting academic and popular cultures.

Harvard’s Arts Task Force

This past Wednesday a Harvard task force appointed by president Drew Gilpin Faust released a report advocating an expanded role for the arts there.  The report is interesting in large part because it calls attention to a circumstance common on many campuses, where the arts are ubiquitous – theatrical productions and exhibitions running all the time – but also marginal to the work of the modern research university, relegated to the extracurricular and programmatic sidelines.  While Harvard’s circumstances are obviously not generalizable everywhere given its tremendous wealth and status as the nation’s leading private university, the Task Force led by the Shakespeare scholar Stephen Greenblatt makes a compelling claim for artistic centrality.

To those who regularly teach the creative arts none of the main arguments will seem new, but they are eloquently put and I think well suited to those who adjudicate claims on the collective resources of comprehensive universities and who tend, even if only subliminally, to discount the arts (and for that matter the humanities) as mainly doing peripheral or service work while the real useful knowledge emerging from college campuses is made in science laboratories and in the professional schools.  In addressing such a worldview, and it is pervasive, the report defends the practices of artistic invention as wholly necessary to intellectual work.  As the report argues:

The quarantining of arts practice in the sphere of the extra-curricular creates a false dichotomy.  It leads students (and, on occasion, their teachers) to assume falsely that the qualities of successful art-making – the breaking apart of the carapace of routine, the empowerment of the imagination, the careful selection and organization of elements that contribute to an overarching, coherent design, the rigorous elimination of all that does not contribute to this design, the achievement of a deepened intelligibility in the external and internal world – do not belong in the work they are assigned to undertake in the curriculum…  On the contrary, the forms of thinking inculcated in art training are valuable both in themselves and in what they help to enhance elsewhere in the curriculum.  These include the development of craft, the sharpening of focus and concentration, and the empowerment of the imagination.  Art-making is an expressive practice:  it nurtures intense alertness to the intellectual and emotional resources of the human means of communication, in all their complexity.  It requires both acute observation and critical reflection.  This self-reflection – the drive to interrogate conventions, displace genres, challenge inherited codes of meaning – encourages risk-taking and an ability to endure repeated failures.  It fosters both intelligent imitation and a desire to conceive and bring forth what has hitherto been unimaginable.

The report also evokes the increasingly accepted claims that the most demanding intellectual problems demand interdisciplinary approaches, and the pedagogical insistence that students learn best by making rather than by hearing, both arguments mobilized to make the case that training in the arts is not just a luxurious supplement but a necessary ingredient to serious scholarly endeavor.  Although the examples are of necessity anecdotal (and for obvious reason limited to Harvard alumni), cases are brought forward where distinguished work was enabled by exposure to the arts:  T.S. Eliot, W.E.B. Du Bois and others are mentioned as having challenged dominant paradigms because of their involvement with a range of disciplines including the arts.

When the arts are mainly championed as extracurricular events in which well rounded individuals will participate but not specialize, another danger is aroused, and “a quite specific view of the arts” is encouraged:  “Art, in this view, is a thing entirely bound up with pleasure.  Purely voluntary, it stands apart from the sphere of obligation, high seriousness, and professional training.”  And when the arts are “deemed… to be extracurricular, many students remain oblivious to the hard work – the careful training, perception, and intelligence – that the arts require.  They know that writing essays is a skilled and time-consuming labor.  They recognize that problem sets in math and science are meant to be difficult.  But ask them to photograph a landscape, compose a short story, or direct a scene rather than write an analytical essay and they will almost universally assume that the exercise will be quickly and easily dispatched.  The problem is not that they believe art-making is trivial but rather that they believe that talent alone, and not thought or diligence, will determine the outcome.”

And yet the report also makes a sustained case for the role the arts can play in nurturing happiness, by which is meant not the fleeting delight that comes from a moving sonata or an entertaining comedy but “something more than the acquisition of technical mastery, something beyond the amassing and exchange of information necessary for the advancement of human learning” – a happiness that “entails an intensified participation in the natural and human realms, a vital union of spirit and matter at once facilitated and symbolized by works of art.”

The report obviously moves rather quickly to make Harvard-specific recommendations, including a call for an expanded arts presence on the new Allston campus, concretized support for artists-in-residence, and new graduate programs in the arts.  Some of these will work elsewhere and some won’t.  But even at the level of specifics, it is hard to imagine that the Task Force’s calls – for using graduate arts programs (especially new MFA degree programs) as a way to leverage artistic excellence, for creating an interdisciplinary artistic Hothouse where new collaborations might be nurtured, and for thinking of all campus spaces as potential places for exhibition and attention to aesthetic practice – would not be well justified on any comprehensive research or liberal arts campus.

These arguments are made with some rhetorical sensitivity, offered in a way I think unlikely to offend either artists who might be inclined to see such a case as slighting their hard work or non-artists whose academic positions would less typically have them thinking seriously about the role art might play.  All the more reason that it should be widely read and its central claims broadly deployed.

Ranking the world’s best orchestras

Thinking I would become better informed about the world’s classical music scene, I’ve recently subscribed to the magazine Gramophone, and the very first issue I received is the December 2008 one whose cover announces a ranking of “the world’s greatest orchestras.”  The list has been deeply controversial, especially in Philadelphia, whose orchestra was omitted entirely.  Here is how they ranked them:

1.  Royal Concertgebouw
2.  Berlin Philharmonic
3.  Vienna Philharmonic
4.  London Symphony Orchestra
5.  Chicago Symphony Orchestra
6.  Bavarian Radio Symphony Orchestra
7.  Cleveland Orchestra
8.  Los Angeles Philharmonic
9.  Budapest Festival Orchestra
10.  Dresden Staatskapelle
11.  Boston Symphony Orchestra
12.  New York Philharmonic
13.  San Francisco Symphony Orchestra
14.  Mariinsky Theatre Orchestra
15.  Russian National Orchestra
16.  St. Petersburg Philharmonic
17.  Leipzig Gewandhaus
18.  Metropolitan Opera Orchestra
19.  Saito Kinen Orchestra
20.  Czech Philharmonic

The insult to Philadelphia is especially sharp given that the Orchestra plays in one of the most magnificent symphony halls in the world (the gorgeous cello-shaped Kimmel Center for the Performing Arts), and because Gramophone listed them in a box headlined “Past Glories,” along with the NBC Orchestra, which was disbanded decades ago.  (The Gramophone editor did email the Philadelphia Inquirer to note that they had made the top thirty.)  Some have wondered whether the critics are still mad that Philadelphia dumped Christoph Eschenbach.

On first opening the article, I thought (fleetingly) that I might see the Atlanta Symphony Orchestra listed, if not in the top twenty then maybe as an up-and-comer – as a novice it seems to me their playing is crisper and their programming more wide-ranging and interesting under Robert Spano – but no such luck.  And how could I defend such a judgment anyway, having never heard most of the orchestras that made the final ranking?

The methodology of the list, which no matter how devised would be impossible to defend, was to have a small number of critics (eleven all told) consider live performance, recorded output, community contributions, and “the ability to maintain iconic status in an increasingly competitive contemporary climate,” whatever that means.  As one might expect, this procedure has many commentators scratching their heads.  Some have articulated alternative criteria, such as the quality of the conductor (certainly the symbiotic relationship between conductor and ensemble cannot be easily reduced to a ranking of this sort).  One blogger has noted that, given how unlikely it is that any of the panel would have seen live performances of all these organizations, the better tack would have been to have a ranking done by conductors in wide circulation (with a presumably better comparative sense of talent).

Does the surprisingly high ranking for the LSO reflect the fact that Gramophone is produced in Great Britain?  Does it matter that the Mariinsky and Vienna and Metropolitan Opera ensembles mostly play in the pit?  Doesn’t critical buzz lag actual quality of playing, and shouldn’t that be considered?  Should the strong recording histories of some orchestras necessarily fortify their place on the list when others may be arbitrarily restrained?  One commentator noted that Berlin under Karajan got recorded every time they sneezed, whereas New York was (arguably) under-recorded in the same period for reasons beyond their control (union contracts, etc.).  Shouldn’t it matter that some groups will be fabulous playing Bruckner and lousy playing Wagner?  How should one weigh “best playing” against “most interesting programs”?  Don’t reviewing critics get ideas in their heads that sediment and are difficult to change even when the facts on the ground are fast-evolving?  And given the inevitable variability in repertoire, how can one plausibly make apples-and-oranges comparisons with any validity?  As the London Telegraph (November 26) put it,

The exercise is fundamentally preposterous, not because the placings might trigger controversy but because there are no such absolutes in music.  Asked to name the eighth “best” singer in the world or the 15th “best” violinist, we might all be hard pressed.  The same goes for orchestras, where there are considerably more variables to take into account.  Different traditions, different repertoires, halls and conductors all have an impact.  No matter how scientifically this poll was conducted, or who was involved in the voting, it bypasses the fact that the quality to celebrate in orchestras is not their top-twenty status but their diversity and the individual attributes they might bring to the performance of music.

Considering the number one orchestra, which is without doubt an incredible ensemble, brings some of these concerns into sharp relief:  Mariss Jansons has only been conducting in Amsterdam since 2004, having replaced Riccardo Chailly, who started out highly controversial (though the unanimous pick of the players, he was their first-ever non-Dutch conductor) and was criticized early on by some for wrecking the Concertgebouw sound.  So is this rating reflective of true world’s-best artistry, or of Jansons’ conducting honeymoon, or of the aftermath of the Chailly age, which gradually gained fans and won respect?

The classical music blogger for the Guardian (UK), Tom Service, asks:  “Is the Dresden Staatskapelle really almost twice as good as the Leipzig Gewandhaus?  Should the New York Philharmonic be more highly ranked than the San Francisco Symphony when Michael Tilson Thomas’s reign in San Francisco has been infinitely more interesting than Lorin Maazel’s at Lincoln Centre?”

And so all seem agreed that the ranking is an absurdity, yes?  But the impulse to rank (which will undoubtedly move a lot of Gramophone issues) is as hard to resist for editors as it is for those recognized – you can bet the Dutch orchestra put atop the list will forever refer to itself as “having been named the world’s greatest orchestra by the authoritative publication Gramophone.”

On guilty pleasures

In Atlanta some of the radio stations have started to play wall-to-wall Christmas music, which in general catches me too early, since thirty days of “Grandma Got Run Over By a Reindeer” can yank the Christmas spirit out of just about anyone.  But some of the songs I’ve heard clearly fall into the category of guilty pleasures, by which I mean those I’m usually ashamed to share with others.

Here is a wholly random example that I’m trying to sort out right now:  I am totally wrapped up in the Mormon Tabernacle Choir version of a Christmas hymn called “What Shall We Give to the Babe in the Manger.”  I heard it somewhere last week, did some research to figure out the song title, downloaded it from iTunes, and now can’t get it out of my head.  I’m moved by it, in part, I know, simply because of the early spirit of the season, and also because the lyrics, basic though they are, are gentle and rather wistful.  I guess I shouldn’t be the least bit embarrassed by this – millions of people around the world are taken with MTC music, not the least among them the seven million Mormons in the United States – but I am, if just a little.

I think it is true of everyone, especially those in the iTunes and MP3 and YouTube universe, but lately my popular musical habits have been driven by one kitschy song after the next.  Last year my Christmas music pathology was the Michael Crawford version of “O Holy Night,” which many people found creepy because the idea of the Phantom of the Opera singing about Jesus struck them as just, well, plain wrong, but I found it rather spectacular.  The year before it was the Century Men singing “Oh Beautiful Child of Bethlehem,” which dares to pair the all-male choral number with a hammered-dulcimer backdrop that makes it sound, gasp!, downright Appalachian.

More broadly, in the last six months alone I’ve worked through (and am now mainly over – see, there’s the denial part) fast obsessions with Shirley Bassey (love The Living Tree and that bad Moonraker theme), the baton-twirling kid who almost won the British version of American Idol (the YouTube video of him on stage, his little grandmother watching from the side stage – admittedly edited to achieve a reaction – choked me up every time), Ben Folds singing The Luckiest, Marvin Gaye doing the Star Spangled Banner at a basketball game twenty years ago, and the exuberant debut video of Riverdance at the Eurovision contest years ago (especially the very end, where the performers can’t seem to believe the reaction they have elicited).  But I can’t blame it all on new media or the YouTube video archive:  my guilty music pleasures – which include Lynn Anderson singing Rocky Top, the Bellamy Brothers’ Let Your Love Flow, Sniff ’n’ the Tears’ Driver’s Seat, Dolly Parton’s Hard Candy Christmas, the O’Jays doing Love Train, that sappy Andy Williams song Dear Heart, Nina Simone covering the George Harrison song Here Comes the Sun, Dionne Warwick (before she went all occult) doing almost anything – have been going on for years.

And how am I supposed to defend any of this?  Please help me.

Some I know mask their true guilty pleasures – the number of people who answer questions about their “favorite music” by waxing on about Mozart and Mahler surely exceeds the number who actually have their car radios set to the classical music station – or wear their shameful preferences right there on their sleeve.  But when millions of Americans trumpet their love for achy-breaky country music, and it has become the most popular radio format in America, can one really feel guilty about it?  Dollywood and NASCAR and Graceland and Desperate Housewives are at some level pure kitsch, but when millions watch or attend, and whole cottage industries exist to serve the needs of their fans, these institutions drop off the true guilty-pleasure list.

Some portion of anyone’s guilty pleasures derives from events that transport them back to childhood, and we’ve all seen people defending the indefensible (like Baby Got Back! or Toni Basil’s Mickey) with a shrug and a Sorry, high school favorite!  A recent entry in this genre was written into a newspaper column by Melissa Ruggieri:  “Not that I’ve ever hidden my devotion to Duran Duran and Bon Jovi.  Look, I grew up with them.  They were my teen crushes” (can you hear the defensiveness?).  I suspect this throwback logic explains my truly excessive love of Cat Stevens’ Morning Has Broken (it meant a lot to me in college), Neil Sedaka’s Laughter in the Rain (every time I hear it I remember what high school felt like, even though none of my high school peers would have been caught dead listening to it), and Supertramp’s School and Goodbye Stranger (when I hear those I instantly remember my senior year).  Even more than those:  Al Stewart’s Year of the Cat and Crosby/Stills/Nash/Young’s Teach Your Children (I think I was the only American who liked Walter Mondale more after he soundtracked a campaign-closing TV ad to that song), and, get ready to cringe, that Harpers Bizarre 59th Street Bridge Song (you know, “feelin’ groovy”).

And maybe most of all: Andy Williams’ Moon River, which I can’t hear without being instantly transported into the wistfulness of Breakfast at Tiffany’s.  I imagine millions of others must feel the same way:  why else would it have been featured as a key soundtrack moment in Sex and the City (in the episode where Big leaves NYC for wine country it plays in his empty apartment) or in that aching scene from the HBO version of Angels in America where a relationship traumatically ended too soon is remembered as an imagined dance to that song?  So perhaps I get a pass on that one.

But I can’t blame it all on my childhood, and neither can you.  Some guilty pleasures just have to be left alone for what they are:  pure moments of pathos that caught you short at the right (or wrong) time.  And this is why they are so hard to admit, since it is embarrassing to confess that one actually fell for the shameless manipulation of emotion in that movie or TV show or Obama (or, maybe for you, Palin) speech.  Or to admit that you secretly enjoyed that gruesome moment when Anthony Hopkins’ Hannibal dines on Ray Liotta’s brain.  As Bill Everhart has put it, “a guilty pleasure can be a movie that is so bad it’s good, so unapologetically maudlin, violent, shameless, or ridiculous that you can’t help but love it.”  For me those would include All That Jazz, Moulin Rouge, The Natural, Field of Dreams, and yes, Dr. Zhivago and The Music Man and The Sound of Music (confession:  I want to go to one of those sing-along Sound of Music screenings).

Some of the explanation for guilty pleasures is contextual:  in some neighborhoods a trip to Disney World is the dream vacation of a lifetime, while in others it signifies a secret shame.  One gets some sense of this from the responses volunteered by academics answering a “guilty pleasures” inquiry from the Chronicle of Higher Education:  professors named People magazine, karaoke, Texas hold ‘em, Jimmy Buffett, cheesy historical fiction, comic books, and Barry Manilow.  Imagine the shock in the faculty lounge!  But in Dallas or Philly or Oakland or Chicago, outside the ivy-covered walls, I’m not sure any of these would raise concerns.  Well, except maybe Barry Manilow.

Others are things from which we derive pleasure even though we know they are bad for us, like Krispy Kreme donuts or clove cigarettes or shopping on QVC.  An entire scholarly cottage industry has arisen to explain why so many women end up compulsively hooked on media images that create impossible weight and beauty standards, and on romance novels that even today celebrate the idea of being swept away by a Prince Charming bad-boy figure (an example of this research is a recent essay by Maxine Leeds Craig).  I know I’m not supposed to like the endless one-take “A Pretty Girl is Like a Melody” sequence in The Great Ziegfeld (1936), but I’m transfixed when I see it.  Maybe the parallel event for you is Titanic.  Henry Jenkins (who has just announced his intention to leave MIT for the University of Southern California, a fact I know because of the guilty pleasure I derive from academic gossip) has written a book, The Wow Climax: Tracing the Emotional Impact of Popular Culture (NYU Press, 2007), that spends considerable effort accounting for this impact, often in surprisingly sympathetic ways (especially given how often popular culture is disparaged even within cultural studies, where it is dismissed as reproducing capitalism and racism and sexism).

A good test of one’s genuine and unfabricated guilty pleasures is to answer these questions:  What things in the world, because they move you (perhaps even to tears), do you insist on repeatedly experiencing alone?  Or, since tears are not the only measure of strong reaction:  what things rev you up or inspire you (and thus bring you back to the experience time after time) that you would never confess to anyone else?

Go ahead, name them out loud:  Billy Idol’s Rebel Yell.  Anything by ELO, but especially Mr. Blue Sky.  The Carol Burnett Show, particularly at the end every week when she tugged her ear and you knew she was saying hi to her grandmother.  Meet Joe Black.  Shirley Temple movies.  Michael Jackson music made before he went crazy.  The old Andy Griffith Show, especially any episode with Barney or Goober.  Rip Taylor.  V for Vendetta.  College marching bands.  Showboat, especially Ol’ Man River or the New Year’s Eve moment when the singing prodigal daughter brings tears to her father’s eyes with her increasingly confident rendition of After the Ball.  The Jerry Lewis Muscular Dystrophy telethon.  Hush Hush Sweet Charlotte.  ABBA.  Any novel by Nora Roberts or Barbara Cartland.  Those old James Bond movies; yes, even the most absurd ones starring Roger Moore and featuring Jaws.  Xanadu (the movie, not the Broadway musical).  The Carpenters.  Brian’s Song.

There now:  don’t you feel better?

And if your memory needs to be jogged, you can consult The Encyclopedia of Guilty Pleasures (by Sam Stall, Lou Harry, and Julia Spalding, published by Quirk).  The book “celebrates the joys of cheesy pleasures such as Wayne Newton, Baywatch, motion-sensitive mounted fish that break into song, and those pestilential ‘collectible’ plates and figurines from the Franklin Mint” (Loeffler).  “Going into this,” says Harry, “I had no idea there was a real Chef Boyardee.”

Meanwhile, I’ll be sticking with my official public replies:  the second movement of Beethoven’s Emperor Concerto (which is, I must say, genuinely sublime), Hitchcock’s Vertigo, anything by Sondheim, Tchaikovsky’s Violin Concerto in D Major, Frontline and Charlie Rose and the Lehrer Newshour, Rachmaninoff’s 3rd Piano Concerto (the opening minutes provide the best bang for the buck in the repertoire)…  And don’t let me forget CSPAN and Shakespeare and any film by Godard or Bergman or Renoir or any theatrical production of Beckett…

SOURCES:  Bill Everhart, “Guilty Pleasures,” Berkshire Eagle (Pittsfield, Massachusetts), 7 March 2008, Sunday magazine; Melissa Ruggieri, “Guilty Pleasures Still Sound Sweet,” Richmond Times-Dispatch, 16 May 2008, pg. E-9; William Loeffler, “Guilty Pleasures:  Our Secret Shame,” Pittsburgh Tribune-Review, 20 March 2005; “Guilty Pleasures Revisited,” Chronicle of Higher Education, 6 June 2008, Chronicle Review, pg. 4; Maxine Leeds Craig, “Race, Beauty, and the Tangled Knot of a Guilty Pleasure,” Feminist Theory 7.2 (2006): 159-177.

How global warming imperils our history

C. Brian Rose, president of the Archaeological Institute of America, introduced the November/December 2008 issue of Archaeology with an editorial that begins as follows:

Global warming is real and it is one of the gravest threats facing our shared cultural heritage.  According to the National Oceanic and Atmospheric Administration, the ten warmest years have all occurred since 1995, and the UN’s Environment Program notes that the world’s glaciers are receding at a record pace.  This situation brings a cascade of problems that are having a catastrophic impact on archeological sites.  Melting of ice and permafrost endanger most frozen sites on the continents, while rising sea levels promote the erosion and submergence of others…  Examples in recent years include Ötzi, the late Neolithic herdsman discovered in the Italian Alps; the 550-year-old Native American hunter whose body was recovered from a melting glacier in British Columbia; and the Inca human sacrifices found on Andean peaks.  Similarly endangered are the frozen burials of Eurasian nomads… Remains of 5,000-year-old stone houses built by Neolithic farmers and hunters at Skara Brae, Orkney, may have to be dismantled and moved inland for protection.  Portions of the ruins of Nan Madol, an ancient political and religious center on the Pacific island of Pohnpei in Micronesia, may soon be submerged.

In the context of the larger consequences of global climate change, these effects on the historical record may seem incidental or modest, but the losses might be permanent and, as Rose notes, are not that difficult to document now.  He calls for a joint UNESCO-NASA-ESA program of rapid satellite imaging to map the glaciers, since ultraviolet readings can lead investigators to burial sites.

Every other year the World Monuments Fund releases a “world monuments watch list” to call attention to endangered sites.  For the first time, in 2008, global climate change was named as a cause of urgent concern, with the Fund noting that “several sites… are threatened right now by flooding, encroaching desert, and changing weather patterns.”  Two examples:  (a) Herschel Island, Canada, “home to ancient Inuit sites and a historic whaling town at the edge of the Yukon Territory that are being lost to the rising sea and melting permafrost in this fastest-warming part of the world”; and (b) Sonargaon-Panam City, Bangladesh, “a former medieval trading hub and crossroads of culture, whose long-neglected and deteriorating architecture is increasingly threatened by flooding in this low-lying country, one of the most vulnerable to the impacts of global warming.”  The dangers, because they are likely to approach gradually, are easy to ignore, and in the context of archeological sites, where the main evidence is already obscured and not in plain sight, awful losses might be occurring without anyone even knowing it.

Despite such warnings, there is little evidence of policy action to conserve historical preservation sites, perhaps not surprising given the lack of action on climate change’s broader consequences.  A recent study published in the journal Climatic Change notes that in Great Britain, where some emphasis has been placed on cataloging climate effects, the “lack of a widespread consideration of heritage has resulted in a relatively low profile more generally for the subject.”  A 2005 UK Environment Agency report organized to set “a science agenda… in light of climate change impacts and adaptation” never mentions heritage preservation.

The danger does not derive simply from changing water levels in oceans and rivers.  The “Heritage at Risk 2006/2007” report argues that climate change was partly responsible for the fires of summer 2007, which were among “the largest catastrophes in the Mediterranean in the last century.”  Warming was at fault because it made fires more common and more intense; research reported by the Athens Observatory notes that global warming also changes soil humidity levels, which further contributes to fire susceptibility.  While climate change is not the only cause of fires, their severity in 2007 raised alarms in the historical preservation community, especially given the damage to “our cultural heritage in the Peloponnese.  This included the Arcadian landscapes, Byzantine churches and monasteries, Apollo Epicurius at Bassae (a World Heritage Site), the Antiquities in Ilieia and especially the archeological site of Olympia (also a World Heritage Site).  There was damage to the area surrounding the Olympia archeological site.  The Kladios stream, a tributary of the Alpheios River, was burnt to a great extent, whereas the Kronios Hill was burnt entirely.  The park and the surroundings of the International Olympic Academy were destroyed.  Furthermore, some slopes near the ancient stadium were also burnt.”

The Centre for Sustainable Heritage at University College London released a major report on these issues in 2005, Climate Change and the Historic Environment, authored by May Cassar.  The document summarizes a comprehensive effort to catalog the risks, but its most compelling moment for me comes at the start, which quotes Titania’s “weather speech” from A Midsummer Night’s Dream (Act II, Scene I) – a passage that eerily anticipates the threat, and may even have been prompted by the “meteorologically turbulent time when Shakespeare was writing his play” (Cassar):

                …the spring, the summer,
                The childing autumn, angry winter, change
                Their wonted liveries, and the mazed world
                By their increase, now knows not which is which:
                And this same progeny of evils comes
                From our debate, from our dissension;
                We are their parents and original.

SOURCES:  A.J. Howard et al., “The Impact of Climate Change on Archeological Resources in Britain:  A Catchment Scale Assessment,” Climatic Change 91 (2008): 405-422; May Cassar, Climate Change and the Historic Environment (London:  English Heritage and the UK Climate Impacts Programme, 2005).

William Eggleston invented color

The Whitney in New York has just opened a major retrospective of William Eggleston’s long career as an innovator in photography (William Eggleston:  Democratic Camera, Photographs and Video, 1961-2008).  The show perhaps brings full circle a journey that has been mainly centered in the American South and the Mississippi Delta (Memphis most of all) but that, ever since a 1976 exhibit at the Museum of Modern Art (MOMA), has exerted galvanizing force on the wider arts.

Although the MOMA had exhibited color photography once before and had shown photographs in its galleries as far back as 1932, its decision to showcase Eggleston and his color-saturated pictures in 1976 was exceptionally controversial.  At the time the New York Times called it “the most hated show of the year.”  “Critics didn’t just dislike it; they were outraged.  Much the way viewers were aghast when Manet exhibited Olympia, a portrait of a prostitute, many in the art community couldn’t figure out why Eggleston was shooting in color” (Belcove).  Eggleston’s subjects can seem totally mundane, and his public refusal to illuminate how his works are staged proved infuriating (though, to the contrary, Eggleston has long insisted that he never poses his subjects, arguing, astonishingly, that these are in every case single-shot images:  either he gets the shot or he moves on to the next without regret).  Prior to Eggleston, art photography was most often black-and-white.  Thus, for students of the art and practice of photography, and given his enormous visual influence, one can say without complete hyperbole that William Eggleston invented color.

Well, maybe that is a little hyperbolic.  After all, those seeking the founding of color might better retreat to the period of the “Cambrian Explosion” 543 million years ago, when the diversification of species was sparked by the evolutionary development of vision; in that time, “color first arose to help determine who ate dinner and who ended up on the plate” (Finlay 389).  Or one might look to the late Cretaceous period – prior to that, “plants did not produce flowers and colored leaves.”  Further elaborating this perspective, Finlay (391) writes:

As primates gained superior color vision from the Paleocene to the Oligocene (65 to 38 million years ago), the world for the first time blossomed into a range of hues.  At the same time, other creatures and plants also evolved and settled into ecological niches.  Flowering plants (angiosperms) radiated, developing colored buds and fruits; vivid insects and birds colonized the plants, attracted by their tints and serving to disperse their pollen and seeds.  Plants, insects, birds, and primates evolved in tandem, with color playing a crucial role in the survival and proliferation of each.  The heart of these developments lay in upland tropical Africa, where lack of cloud cover and therefore greater luminance resulted in selective evolutionary pressure for intense coloration.

It states the obvious, but I’ll do it anyway:  colors, along with the human capacity to recognize and distinguish among them, transform human experience.  Part of the reason Aristotle so famously preferred drawing to color is that the latter can too easily overwhelm one’s critical capacities (for him this was evidenced by the fact that a viewer in the presence of rich color has to step back, since color blurs at close range, and this necessary distancing inevitably diverts audiences from attending to the artistic details present in good drawing).  Plato had disdained color too, thinking it merely an ornamental, ephemeral, and surface distraction, a view oddly recalled later by Augustine, who warned against the threat posed by the “queen of colors,” who “works by a seductive and dangerous sweetness to season the life of those who blindly love the world” (qtd. in Finlay, 400).  It was only in the 12th century that Christians came fully around to color, at about the time stained-glass technology was undergoing fast refinement; suddenly colored light was seen as evoking the Divine and True Light of God.

But for centuries color was dismissed as feminine and theoretically disparaged since it “is impossible to grasp and evanescent in thought; it transcends language, clouds the intellect, and evades categorization” (Finlay, 401).  Color was thus seen as radically irrational by the thinking and professing classes – Cato the Elder said that colores floridi (florid colors) were foreign to republican virtue – all of this an interesting contrast to the Egyptian kings, who saturated their tombs with gorgeous coloration, and to the Greeks, who ignored Aristotle’s warnings and painted their Parthenon bright blue and their heroic marble sculptures right down to the red pupils we would today prefer to digitize out, since they apparently evoke the idea of Satanic possession.

The history of color is regularly bifurcated by scholars into work emphasizing chromophilia (the love of color) and work emphasizing chromophobia, which by contrast has often reflected an elite view that color is garish and low-class.  Wittgenstein concluded that the radically subjective response to color could never be specified in a manner adequate to philosophy:  “there is merely an inability to bring the concepts into some kind of order.  We stand there like the ox in front of the newly-painted stall door” (qtd. in Finlay, pg. 383).

In the context of early film production and the industry’s emerging use of color and then Technicolor, colors were seen by some as a “threat to classical standards of legibility and coherence,” necessitating close control:

For instance, filmmakers monitored compositions for unwanted color contrasts, sometimes termed visual magnets, that might vie for attention with the narratively salient details of a scene.  Within a few years the body of conventions for regulating color’s function as a spatial cue had been widely adopted.  The most general guideline was that background information should be carried by cool colors of low saturation, leaving warm, saturated hues for the foreground.  Narrative interest should coincide with the point of greatest color contrast. (Higgins)

The ongoing power of such conventions has recently led Brian Price, a film scholar at Oklahoma State University, to argue that the imposition of saturated and abstracted color in recent films by Claire Denis and Hou Hsiao-Hsien exemplifies a resistive threat to globalized filmmaking and its industrial grip on the world’s imagination.

A paradox in Eggleston’s work is that although his subjects – Elvis’ Graceland, southern strip malls, the run-down architecture produced as often by the simple ravages of time and nature as by neglect – are dated and immediately evocative of a completely different though not wholly lost and variously tempoed time, his photographs seem timeless.  Like the man himself, described by one journalist as “out of place and out of time,” Eggleston captures elements of modern life that persist, and his attention to the formalistic properties of color and framing makes his work arresting even for those uninterested in or unimpressed by the odd assemblages of southern culture that constitute his most interesting subjects.  This paradox, in turn, can produce in the viewer a sense of vague dread, as if the contradictions inhabited by the idea of serendipitous composition reveal dangers of which we are customarily unaware.  At the same time, because Eggleston has never seemed interested in documentary reportage, defaulting instead to literal photographs that accentuate the commonplace, he “belongs to that rare and disappearing breed, the instinctive artist who seems to see into and beyond what we refer to as the ‘everyday’” (O’Hagan).

Technically speaking, Eggleston beat others to the punch because his personal wealth enabled him to produce very high-quality and expensive prints of his best work; another benefit of this wealth may be that, as Juergen Teller has put it, “he has never had the pressure of being commercial.”  The dye-transfer print process he has used since the 1960’s (Eggleston resists the shift to the digital camera and image manipulation, simply noting that it is an instrument he does not know how to play) was borrowed from high-end advertising.  And although rejected early on in some quarters – the conservative art critic Hilton Kramer notoriously described his 1976 New York exhibit as “perfectly banal” – he has been honored late in life as a prophet in his own time, with a lifetime achievement award from the International Center of Photography, another from Getty, and further honors from the National Arts Club, among others too numerous to mention.  Eggleston seems immune to the critiques, whether hostile or friendly, a fact reflected in the details of his mercurial and sometimes weird personal life but also in his refusal to talk talk talk about his work:  “A picture is what it is, and I’ve never noticed that it helps to talk about them, or answer specific questions about them, much less volunteer information in words.  It wouldn’t make any sense to explain them.  Kind of diminishes them.”

The distinctive Eggleston aesthetic has influenced David Lynch (readily evident in Blue Velvet), Gus Van Sant (e.g., Elephant, an explicit homage), Sofia Coppola (The Virgin Suicides; “it was the beauty of banal details that was inspirational”), the band Primal Scream (his “Troubled Waters” forms the cover art for Give Out But Don’t Give Up), and many others.  David Byrne is a friend and Eudora Welty was a fan.  Curiously, despite his influence on avant-garde cinema and his own efforts at videography, Eggleston professes faint interest in film, although he is said to like Hitchcock.

Finlay has noted that “Brilliant color was rare in the premodern world.  An individual watching color television, strolling through a supermarket, or examining a box of crayons sees a larger number of bright, saturated hues in a few moments than did most persons in a traditional society in a lifetime” (398).  What was true of premodernity was also true of photography wings in the world’s major art museums.  Until William Eggleston.

SOURCES:  Holland Cotter, “Old South Meets New, in Living Color,” New York Times, 6 November 2008; Sean O’Hagan, “Out of the Ordinary,” The (London) Observer, 25 July 2004; Rebecca Bengal, “Southern Gothic:  William Eggleston is Even More Colorful than His Groundbreaking Photographs,” New York Magazine, 2 November 2008; Julie Belcove, “William Eggleston,” W Magazine, November 2008; Scott Higgins, “Color Accents and Spatial Itineraries,” Velvet Light Trap, no. 62 (Fall 2008): 68-70; Brian Price, “Color, the Formless, and Cinematic Eros,” Framework 47.1 (Spring 2006): 22-35; Jacqueline Lichtenstein, The Eloquence of Color:  Rhetoric and Painting in the French Classical Age, trans. Emily McVarish (Berkeley:  University of California Press, 1993); Robert Finlay, “Weaving the Rainbow:  Visions of Color in World History,” Journal of World History 18.4 (2007): 383-431; Christopher Phillips, “The Judgment Seat of Photography,” October 22 (October 1982): 27-63.