Amateur Humanist


Category Archives: Books

Mark Noll and the potential contributions of Christian scholarship

Mark Noll’s The Scandal of the Evangelical Mind (1994), a book that scandalized the evangelical mind by noting that it wasn’t much in evidence (Noll then scandalized some further when he announced in 2006 that he was leaving Wheaton College after 27 years on the faculty for Notre Dame), received a sequel of sorts in 2011 with Jesus Christ and the Life of the Mind (Eerdmans).  Life of the Mind moves in a more hopeful direction by reconnecting with one of the most ancient of theological questions, often shorthanded as the distinction between Jerusalem and Athens:  how does one reconcile the life of faith with the life of the mind?

The very question can seem absurd.  Some Christian traditions have revered intellectual work as supplemental to or even constitutive of faith, and the world’s great centers of learning include many dedicated to propagation of the faith, but within the contours of profoundly thoughtful efforts to apprehend God’s creation through the registers of reason as well as the more affectively sensitive mechanisms of intuition or simple belief.  For advocates of those traditions – I have in mind the towering scholarly accomplishments of Catholicism and the Jesuits, or the Episcopalians with their metaphorical three-legged stool, but also the textually rigorous insistence that animates many Protestant and fundamentalist traditions and brings intellectual coherence to the “priesthood of the believer” (such as the originary impulse of the Churches of Christ/Disciples of Christ, founded by the Campbells and Barton Stone, to win converts through rigorous interdenominational debates) – a faith inconsistent with the dictates of rationality is a belief not worth having.  Why would one worship a God who cannot be apprehended, if only in part, by use of the very mental capacity that most fully distinguishes humans as God’s creation?

But the New Testament itself provides ammunition to those who see the Gospel as requiring a renunciation of the foolish dictates of reason.  The Apostle Paul thunders at the church in Corinth in a tone that taunts the ivory tower elites of his time:

For it is written:  “I will destroy the wisdom of the wise; the intelligence of the intelligent I will frustrate.”  Where is the wise man?  Where is the scholar?  Where is the philosopher of this age?  Has not God made foolish the wisdom of this world?  For since in the wisdom of God the world through its wisdom did not know him, God was pleased through the foolishness of what was preached to save those who believe.  Jews demand miraculous signs and Greeks look for wisdom, but we preach Christ crucified:  a stumbling block to Jews and foolishness to Gentiles, but to those whom God has called, both Jews and Greeks, Christ the power of God and the wisdom of God.  For the foolishness of God is wiser than man’s wisdom, and the weakness of God is stronger than man’s strength.  Brothers, think of what you were when you were called.  Not many of you were wise by human standards; not many were influential; not many were of noble birth.  But God chose the foolish things of the world to shame the wise; God chose the weak things of the world to shame the strong.  He chose the lowly things of this world and the despised things – and the things that are not – to nullify the things that are, so that no one may boast before him.  (1 Cor. 1:19-29).

There is much to say about this passage, and regarding related passages in the Book of Acts that describe moments of encounter between budding Christian doctrine and the worldly philosophers.  But to some, Paul is here recommending the abandonment of scholasticism and the deep methods of inquiry that can incline humans to hubris.  Augustine and others famously warned against confidence in academic inquiry – how might one have confidence that truth will emerge out of the exchanges conducted among fools? – all presumably to be renounced in preference for the interactions that, conducted in prayer, bring human frailty into contact with Divine perfection.  And yet the New Testament also recounts multiple scenes of attempted conversion predicated not on the performance of miracles or the enactment of loving care but on the incisive work of public argument (e.g., Acts 6:8-10; 9:28-30; 17:16-17; 18:27-28; 19:8-10).  The message regarding scholarship is thus often read as profoundly mixed:  helpful as a tactic of potential conversion but also dangerous, not only because of its possible inducement to hubris but because clever sophistry (of the type Satan practiced on Jesus as he wandered the wilderness for forty days and nights, or attempted in his jousting with God over Job) can lead the innocent astray.

When it comes to those Christians who have made professional commitments to the work of the public university, the issue is further complicated.  A life built on unwavering adherence to the Christian gospel can be understood as profoundly at odds with the spirit of skepticism and unending inquiry that underwrites the academy.  Only several short steps lead many believers to see secular institutions (like, for example, public universities) as inevitably hostile to Christian discipleship.  Meanwhile expressions of doubt, the very lifeblood of academic inquiry, are too easily read as heretical when articulated in religious settings.  Athens and Jerusalem are thus apprehended as two worlds completely divided and incommensurable one to the other.  (This, I think, is deeply unfortunate, and it has always seemed to me that faith traditions would be made stronger by welcoming and working through expressions of doubt.  There is support for this position in the New Testament gospels – in one case, recounted at Mark 9:24, the father of a demon-possessed boy comes to Jesus and asks that his son be healed.  Jesus says something like “Everything is possible for one who believes.”  The father replies by expressing a paradox often felt by even the most dedicated believers:  “I do believe; help me overcome my unbelief.”  Importantly, Jesus is not offended by the contradiction but heals the boy.  And when the famous doubting Thomas expresses his skepticism about the resurrection, Jesus does not throw him out; rather, as recounted in John’s gospel, Jesus replies “Put your finger here; see my hands.  Reach out your hand and put it into my side.  Stop doubting and believe.”  Doubt is heard as an invitation to fellowship and grace and not read as blasphemy).

The risk that the phrase “Christian intellectual” will be thought a contradiction in terms, and the related consequence that Christianity will, if seen as embracing anti-intellectualism, repel brilliant seekers, is what I take as the animating impulse of Noll’s recent work.  His project is to argue the consistency of scholarship with Christianity, and more than that, to assert that Christians who do scholarship importantly enrich academic work.

A common approach in taking up this issue is to cite scripture on the topic of noble work.  In a number of places believers are called, irrespective of the location or nature of their employment, to excellence in the workplace, and I have heard these admonitions cited to urge even professors to offer their dedicated and best work.  Some examples:  The OT Proverbs in several places advocate for diligence (12:24 – “diligent hands will rule”; 14:23 – “all hard work brings a profit”).  Or from the Acts of the Apostles (5:38-39), a test that much academic work seems easily to pass:  “For if their purpose or activity is of human origin, it will fail.  But if it is from God, you will not be able to stop these men; you will only find yourself fighting against God.”  Or, alternatively, the commendation made in the letter to the Colossian church at 3:17: “Do it all in the name of the Lord Jesus” (which one might read as a command to dedicate all work, especially the work of the mind, to God’s honor); later (3:23), “Whatever you do, work at it with all your heart…”  Or, from the first letter to the Corinthian Christians, an injunction essentially to “bloom where you are planted”:  “Nevertheless, each one should retain the place in life that the Lord assigned to him and to which God has called him.”  Versions of the same idea are repeated three times in that one chapter (7:17, 7:20, 7:24) alone.  In the letter to the church in Ephesus, Paul writes (6:5-8) “Obey earthly masters with respect and fear and with sincerity of heart, just as you would obey Christ…  like slaves of Christ doing the will of God from your heart.  Serve wholeheartedly, as if you were serving the Lord, not men.”

But this is not the path laid out by Prof. Noll.  Instead, Life of the Mind searches scripture for those places where insights into intellectualism can be abstracted into a philosophy of Christian scholarship.  What Noll finds everywhere are invitations to closer scrutiny and deeper inquiry.  In the Christian creeds and in the major doctrinal statements found in the New Testament, such as the first chapter of Colossians (1:16-17), he finds claims about the created world that he reads as inviting Christians to respond to creation with the impulse to explore and learn further.  In the statements of Jesus to which I’ve alluded already (especially for Noll: “Come, and see!”), Noll apprehends a scholarly impulse which one can credit by faith with always rewarding closer scrutiny.  What Noll advocates is a faithful confidence that deeper engagement with the protocols of learning will lead thinkers closer to God and not further away:

The specific requirements for Christian scholarship all grow naturally from Christian worship inspired by love:  confidence in the ability to gain knowledge about the world because the world was brought into being through Jesus Christ; commitment to careful examination of the objects of study through “coming and seeing”; trust that good scholarship and faithful discipleship cannot ultimately conflict; humility from realizing that learning depends at every step on a merciful God; and gratitude in acknowledging that all good gifts come from above.  If, as Christians believe, “all the treasures of wisdom and knowledge” are hid in Christ (Col 2:3), the time is always past for talking about treasure hunting.  The time is always now to unearth treasure, offer it to others for critique or affirmation, and above all find in it new occasions to glorify the one who gives the treasure and is the treasure himself.  (p. 149).

Shortly after its publication, the great Yale philosopher Nicholas Wolterstorff wrote a positive review that nonetheless wondered whether Noll’s three-chapter, discipline-by-discipline case studies were rich enough to make a compelling case for Christian contributions to scholarship.  He wrote:

Let me add that whereas the Christological case that Noll makes for Christians engaging in serious learning seems to me both compelling and rich, the guidelines that he teases out of classic Christology for how we actually engage in learning strike me as rather thin by comparison.  Christians, he says, will affirm contingency.  They will affirm particularity.  With the Incarnation in mind they will insist, by analogy, that ascribing a natural cause to some event is compatible with ascribing it to God as well.  They will resist the pride characteristic of intellectuals.  All true; but very general and abstract.

That point is well taken, although given the common radical separation of secular and sacred intellectual inquiry, it may be that the simple articulation of a Christian alternative might itself provoke deeper thinking.

For me, the trickier question is whether, despite the intellectual payoffs to be found in the great faith traditions, they should ever be strongly asserted in the public university.  One need not condemn Christians to silence in the public square to recognize that in an institution aiming to welcome and encourage thinkers from all backgrounds and perspectives, the forceful articulation of Christian theological imperatives risks doing the open spirit of inquiry as much harm as good, by silencing those who will wonder whether they can freely disagree with faith commitments so deeply held.  I wonder.  It may be that the scholarly work Noll advocates is best undertaken in explicitly religious institutions, from which its findings and main claims can be disseminated more widely as an implicit corrective to the narrower work of a public educational system that must, of rightful necessity, orient its efforts to a broader constituency.

For me, then, Noll’s work finally raises this question:  Even conceding the strongest case for Christian scholarship (which is to say, the case that an articulated Christian worldview can enliven any disciplinary conversation), does it then follow that Christian commitments should be always and everywhere articulated?  Or, put a bit differently, does every workplace obligate the believer to proselytization?  Is it possible that, as Paul made tents to raise money for his missionary journeys, there were days when he simply and quietly engaged in craftsmanship without preaching to his colleagues?  As the New Testament figure Lydia dealt in purple cloth, a trade that evidently helped support the work of the church, did she try to determine how this or that biblical verse might better inform her craft?  Or were these believers content simply to segment their good work, willing to concentrate their evangelism in other locales where Christian testimony would be more gratefully received than the tent or cloth workshops?

The importance of watching

I’m not quite finished with it yet, but Paul Woodruff’s recent The Necessity of Theatre: The Art of Watching and Being Watched (Oxford:  Oxford University Press, 2008) makes a compelling case for treating theatre as central to the human experience.  Woodruff’s point is not to reiterate the now-familiar claim that theatrical drama importantly mirrors human experience, although I assume he would agree with thinkers like Kenneth Burke (who insisted in his own work that theatricality was not a metaphor for human life, but that our interactions are fundamentally dramatically charged).  Rather, theatre, which he defines (repeatedly) as “the art by which human beings make or find human action worth watching, in a measured time and place” (18), enacts much of what is basic to human sociability.

Theatre and life are about watching and the maintenance of appropriate distance, and the way in which collective observation provides validation for human interaction (such as in the ways public witness validates a marriage ceremony or makes justice, itself animated by witnesses, collectively persuasive).

The book is a little frustrating – Woodruff is a philosopher, and the book starts by discovering the river (and making its boldest claim up front) and then guiding the reader through all the connected tributaries, which can be a little tedious when the journey starts to feel less like a riverboat cruise and more like navigating sandbars.  That is, the project proceeds too fully as a definitional typology of theatre, an approach that performatively contradicts the most important thing about theatre itself:  finding audiences and keeping them interested.  Woodruff also has a tendency to keep announcing how important his claims are:  “Formally, however, I can point out already that [my] definition has an elegance that should delight philosophers trained in the classics” (39).  “This is bold” (67).  “My proposed definition of theatre is loaded” (68).  And so on.

But along the way Woodruff says a lot of interesting things.  Some examples:

•  “Justice needs a witness.  Wherever justice is done in the public eye, there is theatre, and the theatre helps make the justice real” (9).  

•  “People need theatre.  They need it the way they need each other – the way they need to gather, to talk things over, to have stories in common, to share friends and enemies.  They need to watch, together, something human.  Without this…, well, without this we would be a different sort of species.  Theatre is as distinctive of human beings, in my view, as language itself” (11).

•  “Politics needs all of us to be witnesses, if we are to be a democracy and if we are to believe that our politics embody justice.  In democracy, the people hold their leaders accountable, but the people cannot do this if they are kept in the dark.  Leaders who work in closed meetings are darkening the stage of public life and they are threatening justice” (23).

•  “The whole art of theatre is the one we must be able to practice in order to secure our bare, naked cultural survival” (26).

•  “A performance of Antigone has more in common with a football game than it does with a film of Antigone” (44).

I began by cheating, I suppose, by reading the epilogue, where Woodruff notes:  “I do not mean this book to be an answer to Plato and Rousseau…, because I think theatre in our time is not powerful enough to have real enemies.  Theatre does have false friends, however, and they would confine it to a precious realm in the fine arts.  We need to pull theatre away from its false friends, but we have a greater task.  We need to defend theatre against the idea that it is irrelevant, that it is an elitist and a dying art, kept alive by a few cranks in a culture attuned only to film and television.  I want to support the entire boldness of my title:  The Necessity of Theatre” (231).

The death of the literary critic

I’ve just finished Rónán McDonald’s little book, The Death of the Critic (London: Continuum, 2007), the broad point of which is to decry the diminution of the literary critical role in society that was formerly occupied by well-trained readers like Lionel Trilling, Matthew Arnold, F.R. Leavis, and writers who also produced criticism, like T.S. Eliot and Virginia Woolf and Susan Sontag.  Criticism has been democratized by the blogosphere, mostly in ways McDonald sees as insidious; as he puts it, We Are All Critics Now (4).   And academic attention to literature, he argues, has been dominated by cultural studies perspectives that mostly insist on reading novels as symptoms of capitalism or patriarchy or racism, and in ways that have made criticism less linguistically accessible to a wider readership.  To those who might counter that criticism is more ubiquitous than ever, and who might immediately think of the New York and London book review publications and others, McDonald replies by asking “how many books of literary criticism have made a substantial public impression in the last twenty years?”  “Academics in other subjects with a gift for popularizing their subject, like Richard Dawkins and Stephen Hawking, Simon Schama and A.C. Grayling, command large non-academic audiences and enjoy high media profiles.  However, there are very few literary critics who take on this role for English” (3).

McDonald sidesteps a lot of the traps characterizing other work critiquing academic literary studies.  He is not defending a return to a Great Books Canon or to the pure celebration of high culture.  His review of the historical debates over the value of criticism makes clear that he grasps the complexities in the longer tradition.  He is not hostile to Theory, but rather sees it as having made important contributions that now can be superseded, not because theory should be rejected but because its central insights have been mainly and rightly accepted.  McDonald sees the value in the proliferation of critical methods (genre, psychoanalytic, Marxist, formalist, semiotic, New Historicist) even as he argues that this expansion was mainly driven by the demands of 20th-century university culture to devise rigorous quasi-scientific perspectives.  He does not by and large (a notable exception is at pgs. 127-129) disparage cultural studies either substantively or by painting with too broad a brush (in fact, he spends some time defending Raymond Williams as doing the very kind of theoretically informed but also interesting work he would like to see more of).  And he is not finally a doomsayer about the culture; in fact the book closes with a sense of optimism that the attention to literary aesthetics he desires is making a sort of comeback.

Having said all this, McDonald still takes a pretty hard line, especially with respect to the culture war debates of the last half century, which in his view too readily dispatched even the merits of a long tradition of debate over the rightful role of criticism.  He thinks Matthew Arnold has been cartooned, at the expense of his insights about the way an intelligent culture of criticism can produce more interesting art.  Arnold’s defense of critical “disinterestedness,” he notes, has been almost absurdly distorted.  The quote most often used to beat Arnold over the head (that criticism’s role is “to make the best that has been thought and known in the world current everywhere,” a sentiment that reads like pure colonialism) is usually cited without its introduction, which says that true culture “does not try to reach down to the level of inferior classes” but rather “seeks to do away with classes; to make the best…”  The correction obviously doesn’t let Arnold off the hook, but read against the grain of the broader prejudices of his time, his perspective elaborates a more compelling vision for criticism and the capacity of art to undo elitism than a reading that sees him as simply advocating snobbery.

The case against the blogs and the kind of “thumbs up” criticism that characterize so much newspaper book reviewing and the Oprah Book Club is for McDonald situated in his recognition that the institutional practice of criticism arose under peculiar circumstances that are now being transformed.  As capitalism developed (and here he is following Habermas’ claims about the short-lived emergence of a bourgeois public sphere) and industrialization created new middle classes with leisure time and an interest in cultural elevation, a demand was created for sophisticated taste makers.  There is a tendency today to forget how radically democratic these impulses were:  “this early development was an intellectual movement from below, a way of appropriating and redistributing cultural authority from the aristocracy and land-owning classes” (54).

What is today at risk, in McDonald’s perspective, is the essential role critics can play in challenging popular preconceptions and making the world safe for difficult artworks as they defend or enact idiosyncratic perspectives and nudge or argue audiences toward controversial but potentially essential ways of seeing.  This role requires critics who are educated to the possibilities of literary and artistic generation and who are willing to make and defend evaluative judgments about what art is worthwhile or worthless.  His attack on the bloggers and academic critics is that they either insist on reading new work through existing prejudices or refuse to make evaluative claims at all, not wanting to seem elitist or to be read as disparaging popular culture.  Critical practice has thus been transformed from offering acts of thoughtful judgment into offering acts of clever insight, where the question implicitly answered is not so much what makes this work aesthetically rich and worth your time? as did you notice such-and-such about this novel/TV show/film?  Skills of observation are thus elevated over skills of interpretation, and the outcomes of critical engagement are more likely to center on how interesting (or not) a text is, at the expense of how engagement with it might better educate its audience.  Taste has trumped judgment, and the demand for books is more than ever driven by the marketing of a dwindling number of titles and the ever-tightening circle of I saw Ann Coulter on Fox and she was nasty and funny and so I think I’ll buy her nasty and funny new book.

McDonald does not do enough to specify exactly what sort of criticism he seeks.  He argues for criticism that makes aesthetic judgments and dismisses those who simply connect novels to the broader culture, but he seems to celebrate Virginia Woolf for doing the very thing he dislikes (in fairness to McDonald, he tries to defend Woolf as striking a sensitive balance between these tendencies).  He argues that criticism that takes an evaluative stand will attract readers, but the argument slides around a bit:  at pg. 130, where this claim is articulated, he starts by noting that boring academic writing turns readers off.  Then he says “those critics who examined popular culture alert to its pleasures found the wider public more ready to listen to what they had to say,” though that seems to imply that audiences are best found when one cheerleads (a position I take as antithetical to his larger purposes).  And then he shifts into a case for critics who write “about the value and delights of art” (note how evaluative judgment, which so far did not play in his perspective on attracting readers, is now slipped back in).  But it isn’t clear how critics who defend judgments are supposed to attract audiences in a world where enthusiastic reviews are likely to be more contagious than briefs for the defense.

But even if the cure is underspecified, I found it hard not to be persuaded by McDonald’s broader diagnosis, and the case for more fully reconnecting academic and popular cultures.

Christianity: Undergoing a Great Emergence?

Phyllis Tickle is the founding editor of the religion section of Publishers Weekly, a position created when the market for spiritual books exploded in the late 1980s (she started in the early 1990s).  From that vantage point, and given her own theological predispositions, she has had a unique perspective on the unfolding debates within Christendom that are both dividing denominations and arguably creating what she, in a recent book, terms a Great Emergence (Tickle, The Great Emergence: How Christianity is Changing and Why, Baker Books, 2008).

The book starts with an intriguing premise whose promise is, I think, unfulfilled as Tickle works through the argument.  The idea is that Christianity (she is also willing to concede this may be true of the Islamic and Jewish traditions; pgs. 29-30) moves in roughly 500-year cycles, each concluded by significant ideological upheaval, schism, and regeneration.  Thus roughly 500 years ago came the Protestant Reformation (dated to 1517, when Luther nailed his 95 Theses to the Wittenberg church door), roughly 500 years before that the Great Schism, and another 500 years earlier the work and aftermath of the Council of Chalcedon.  Following the standard accounts, the Great Schism divided Christendom into competing regional institutions, while the debates settled or papered over at Chalcedon in 451 led in turn, in no small measure under the example of Gregory the Great, to a monastic culture that preserved literacy and learning through the aftermath of the collapse of the Roman Empire.  By this historical reckoning, we are roughly due for another rebooting of the Christian faith, or, as Tickle puts it, following the Anglican bishop Mark Dyer, a “giant rummage sale” – all of which will induce Christianity 5.0, as it were.

The term Great Emergence references the phenomenon of religious uncertainty and a crisis of spiritual authority in the modern world, and also broader cultural transformations, such as globalization (15), information overload (15), and the World Wide Web (53).

While one can never be certain of the outcome, Tickle takes comfort from the historical fact that “there are always at least three consistent results or corollary events.  First, a new, more vital form of Christianity does indeed emerge.  Second, the organized expression of Christianity which up until then had been the dominant one is reconstituted into a more pure and less ossified expression of its former self…  The third result is of equal, if not greater significance”:  “…every time the incrustations of an overly established Christianity have been broken up, the faith has spread – and been spread – dramatically into new geographic and demographic areas…” (17).  This leads her to a repeated expression of optimism, even when (as follows) she is recounting the worst aspects of Christian history (here, colonialism):

…the more or less colonialized Church that Reformation Protestantism and Catholicism managed to plant was, obviously more or less colonialized, with all the demeaning psychological, political, cultural, and social overtones and resentments which that term brings with it.  One does not have to be particularly gifted as a seer these days, however, to perceive the Great Emergence already swirling like balm across that wound, bandaging it with genuinely egalitarian conversation and with an undergirding assumption of shared brotherhood and sisterhood in a world being redeemed. (29).

The ferment in the Christian world today is, depending on one’s perspective, evidence of the End of the Age and a coming Rapture/Apocalypse, evidence that rationalism has finally ushered religious superstition into the final death throes announced almost fifty years ago with the phrase God is Dead, evidence of a long overdue urgent need for Christian revival, or, as is argued here, the birth pangs of a reconfigured and stronger faith tradition.  One problem in Tickle’s argument is that she starts by asserting a case that needs to be proved:  Why derive confidence from the Episcopal or Anglican schisms, or the increasing divide between mainstream Christianity as understood in, say, North America and Africa?  Why believe that the denominational spasms opened by the debates over gay marriage and Terri Schiavo are the happy start of a revitalized faith as opposed to signifying irreparable breaches in the Body of Christ?  One cannot simply point to prior reformations as establishing the case for optimism.

The book goes downhill, not because the author lacks insight, but because the issues it engages are inevitably too complicated to be reduced to the metaphorical images Tickle offers as roadmaps to an ever more fragmented religious scene.  Those maps are just complicated enough to seem awkward (religious signification is like a cable connecting a boat to a dock, where the cable has an outer waterproof covering that is the story of community, an internal mesh sleeve which is the common imagination, and internal woven strands signifying spirituality, corporeality, and morality:  get it?) but not complex enough to do justice to the worlds of faith.  And all this is worsened in the final pages, where a 2-by-2 grid is made more and more complex, such that by the end the picture has been made into an unholy mess.  The grids that organize the book thus give rise to sentences that make no sense:  “Corporeality’s active presence in religion is also the reason why doctrinal differences like those surrounding homosexuality, for example, are more honestly and effectively dealt with as corporeal rather than as moral issues” (39).  Huh?

The book’s middle section, which aims to enumerate the factors that have brought us to this juncture, is the weakest.  While naming all the usual suspects (Darwin, Freud, the pill, industrial transformation, science, Marxism, recreational drug use, women’s rights organizing that changed the family, and others), the argument sometimes veers into weird territory.  Alcoholics Anonymous is blamed for making God generic.  The automobile is accused of weakening grandma’s Sunday afternoon hegemony over religious training (instead of interrogating the kids about that morning’s Sunday School lesson, the kids took the car on a fast Sunday drive; pg. 87).  The Sony Walkman and the iPod are blamed for ruining worship services (105).  Generation X disenchantment with organized religion is ironically blamed on efforts by the church to extend programs into communities, like after-school basketball (91).

Joseph Campbell (the Hero With a Thousand Faces, Bill Moyers guy) is named the leading suspect in the collapse of Christian authority, a claim that seems wildly exaggerated (Tickle:  “It would be very difficult, in speaking of the coming of the Great Emergence, to overestimate the power of Campbell in the disestablishment of what is called ‘the Christian doctrine of particularity’ and ‘Christian exclusivity,’” pg. 67).  The central claims of Marx’s Das Kapital are significantly caricatured (89).  A couple of pages later (90) Tickle implies the Great Society was a communist plot (judge for yourself:  “Twentieth century Christianity in this country met the statism and atheism in communist theory head on, and American political theory militated from the beginning against the heinous brutality inherent in unfettered power.  Nonetheless, we voted in Roosevelt’s New Deal and Johnson’s Great Society.”)  Left out altogether or only passingly mentioned are other events that seem to me a lot more theologically decisive:  the Bomb, the Holocaust, the world wars, Vietnam, the Cold War.  The case starts to feel sloppy, too quickly written.

I regret this because the book raises important questions:  Are we living in a time of religious transformation or evisceration?  Are there resources in the Christian faith sufficient to reconstitute doctrinal authority in an age that resists authority wherever asserted?  To what extent is the cultural elite rejection (sometimes articulated as postmodernism) of capitalism, middle class values, the nuclear family, and the nation-state also evidence of the collapse of institutional religion (or is religion the potential cure)?  Are current upheavals (economic, political, security) more likely to rekindle religious faith or to weaken denominations further by arousing skepticism?

Perhaps a Great Emergence lies close at hand.  Or maybe not.

Alexander Solzhenitsyn’s complex legacy

The death Sunday in Russia of Alexander Solzhenitsyn, the 1970 Nobel Prize winner, and the reaction it has apparently elicited there so far signal the inevitable response to a figure whose life’s work was to bear close witness to an age that is now long gone.  Rightly revered as a man of letters, Solzhenitsyn seems more forgotten and respected by today’s Russians than loved or judged relevant to the popular semi-authoritarianism of the Putin/Medvedev government.  This ambivalence reflects my own reaction to his life’s work:  while I have profound admiration for his literary gifts and for his personal courage in unmasking the absurd and ugly tyrannies of the Soviet system, I disagree with his simplistic diagnosis of the West, his hostility to the Enlightenment, and his distaste for the impulses of European humanism that he seemed to see as a kind of blasphemy (he believed that Bolshevism itself was a natural outgrowth of Enlightenment hubris).  By contrast, I see the Enlightenment’s legacy, as complex as it is, as a potential ongoing resource for social uplift and the broader expansion of human freedom.

Part of my conflicted response undoubtedly reflects the use made of Solzhenitsyn by Russian conservatives (Pamiat and other sometimes anti-Semitic conservative groups have regularly made appeals resting on Solzhenitsyn’s ethical authority), but it may also reflect the extent to which Solzhenitsyn’s religiosity (and his profound detachment from the wider themes of European humanist thought) produced an eviscerating and in my view largely incorrect diagnosis of the West, which he dismissed as animated by atheism and in need of religious revival.  Solzhenitsyn’s case partly resided in his view that the Enlightenment substituted man as a false idol for God:  “everything beyond physical well-being and the accumulation of material goods, all other human requirements and characteristics of a subtler and higher nature, were left outside the area of attention of state and social systems, as if human life did not have any higher meaning” (qtd. in Confino, pg. 613).  This, in turn, led him in fundamentally anti-democratic directions, since as he put it in a 1980 Foreign Affairs essay, “the truth cannot be determined by voting, since the majority does not necessarily have any deeper insight into the truth.”  Such views have been easy to convert into an often strident defense of the need for the return of Holy Russia, with all its attendant dangers of dictatorship or theocracy (one critic referred to Solzhenitsyn as “the Russian ayatollah”).

My first encounter with Solzhenitsyn’s thought came not by reading his Gulag Archipelago or One Day in the Life of Ivan Denisovich but by hearing about the controversial lecture he delivered at the Harvard Class Day exercises in June 1978.  The theme the address lays out, that Western and Russian civilizations pursue largely distinct but dangerously parallel trajectories, prefigured both Solzhenitsyn’s increasingly emphatic defense of strong Russian nationalism and what many commentators in today’s eulogies described as a perhaps inadvertent attack on American society.  The passage from the Harvard speech where this indictment is specified is brief, but as I recall it received almost all the attention from the American press at the time.

Here is what Solzhenitsyn said:

A decline in courage may be the most striking feature which an outside observer notices in the West in our days.  The Western world has lost its civil courage, both as a whole and separately, in each country, each government, each political party and of course in the United Nations.  Such a decline in courage is particularly noticeable among the ruling groups and the intellectual elite, causing an impression of loss of courage by the entire society.  Of course there are many courageous individuals but they have no determining influence on public life.  Political and intellectual bureaucrats show depression, passivity and perplexity in their actions and in their statements and even more so in theoretical reflections to explain how realistic, reasonable as well as intellectually and even morally warranted it is to base state policies on weakness and cowardice.

For the American right, this talk of Western weakness was red meat, and it was eagerly devoured:  these were the Carter years and the period of Ronald Reagan’s national political ascendancy, when the case against Carter rested on this theme of American decay and weakness from within; indeed, Solzhenitsyn is arguably most valorized by conservative think tanks and advocates of an assertive (one might say masculine) foreign policy.  A short section devoted to Solzhenitsyn in one of John McCain’s books was reprinted as a eulogy in a New York newspaper today; McCain focuses on the personal courage it took for Solzhenitsyn to write Gulag and then deliver it to non-Soviet publishers, but it is hard to miss his sympathies for the assertive moralism of Solzhenitsyn’s policy perspective as well.

Although Ronald Reagan seems to have been more influenced by his reading of Whittaker Chambers (he was able to recite lines from Witness from memory), he also read Solzhenitsyn in the late 1970s.  Reagan was offended when news leaked that then-Secretary of State Henry Kissinger had persuaded President Ford not to meet Solzhenitsyn at the White House (Kissinger to Ford:  “Solzhenitsyn is a notable writer, but his political views are an embarrassment even to his fellow dissidents”).  In Dinesh D’Souza’s hagiography of Reagan, D’Souza argues that Reagan “liked to cite a point that both Chambers and Solzhenitsyn made in different ways: communism is a false religion that seeks to destroy the family, private property, and genuine religious faith in order to achieve a kind of earthly paradise” (75).  These are sentiments which finally made their way into Reagan’s “Evil Empire” speech (Edmund Morris:  “Two foreigners with direct experience of totalitarianism had touched on it before, in ways that seem to have gotten Reagan’s attention.  One was Alexander Solzhenitsyn, who told the AFL-CIO in 1975 that the Soviet Union was ‘the concentration of World Evil.’  The other was Alexandre de Marenches, the chief of French intelligence who flew all the way to Los Angeles in December 1980 to warn Reagan against ‘l’empire du mal’”  [472]).

In a July 1978 radio address, Reagan directly relied on Solzhenitsyn at length to make a larger thematic point:

Remembering the anti-Vietnam war sentiment of the late 60s and 70s, some might find a bit of irony in the fact that Alexander Solzhenitsyn was this June’s Harvard University graduation speaker…  For those who think hopefully that Angola might become the Soviet Union’s Vietnam or that Cuba’s adventuring in Africa can be stopped by being polite to Castro, he has an answer.  He describes their failure to understand the Vietnam War as “the most crucial mistake.  Members of the U.S. antiwar movement wound up being involved in the betrayal of Far Eastern nations in a genocide and in suffering today imposed on 30 million people.”   …If the West doesn’t have the will to stand firm, Solzhenitsyn says, nothing is left then but concessions and betrayal to gain time…  Then he said that while the next world war would probably not be an atomic one, still it might very well bury western civilization forever…  Solzhenitsyn told the Harvard graduating class that since our bodies are all doomed to die, our task while on earth must be of a more spiritual nature.  [quoted in Dugger, pg. 515]

Today the extent to which Solzhenitsyn was deployed by American conservatives to bolster the case against communism is sometimes downplayed by scholars interested in Solzhenitsyn-the-Russian-dissident; one scholar interviewed today on the Lehrer Newshour said he thought Solzhenitsyn’s comments at Harvard were more about Russia than America, and that the attention accorded his comments about the United States was overly hyped by both the right and the media.  This reaction conveys some of the ongoing ambivalence in the reaction to Solzhenitsyn, which valorizes his nonfictional fiction (it is a common reaction to say that Gulag actually created a wholly new genre that blurred these categories) even while quietly expressing concern about his lapses into jingoism and arguably worse (a long-running dispute over Solzhenitsyn’s writing centers on whether, for example, despite a lack of explicit evidence, Solzhenitsyn himself trafficked in anti-Semitism).

But the reason Solzhenitsyn was so influential for the right also reflects the broader and special credibility of the witness-from-within, the whistleblower who with meticulous care documents the corruption and decay from the inside.  Because he was so compellingly careful, he was also persuasive to the people of the Soviet Union, who sometimes hid his outlawed books in empty detergent boxes, since to read his indictment of the Soviet system was to risk arrest for treason.

What made Gulag so compelling was its unrelenting detail, particularly the manner in which, as the economic historian Steven Rosefielde argues, it forced a wholesale historical revision of Stalin’s First Five-Year Plan:  “In his thoroughness, in the uncompromising way he exposes all the rationalizations and lies used to conceal and mitigate the significance of Soviet forced labor, Solzhenitsyn conveys a sense of authenticity that cannot be gainsaid even by those who find fault with his work on other grounds” (559).  Solzhenitsyn’s systematic cataloguing of Soviet bookkeeping distortions, which showed how production managers hugely exaggerated industrial output, sometimes revealed how laughable fictions were passed off as truth (in the second volume of Gulag, he describes how the managers of a lumber plant reported that 1500 cubic yards of timber had been harvested but then said it had to be destroyed because no transportation was available to move it out of the production facility; later, masses of timber were carried over from one annual report to the next until someone got the bright idea to say that it had “spoiled,” which meant all of it could be written off without subverting absurdly high national production quotas).

After Solzhenitsyn returned to Russia from exile, he concentrated his energies on his gigantic life’s work, the Red Wheel project, and its magnitude yielded significant publication but at the expense of a diminishing involvement with Russian public affairs as he withdrew to get the writing done.  His public speeches were infrequent near the end and his media persona often hectoring (soon after returning to Russia he was invited to address the parliament; he railed on for roughly an hour and received a muted response from those who stayed for the whole thing).  But his prophetic role, which because of its strong embrace of Russian nationalism today drew the endorsement of Putin himself, lingers not only for American conservatives seeking a more muscular and moralistic foreign policy, but also for those who despise the petty tyrannies of authoritarian bureaucracies and will forever look to Solzhenitsyn as proof positive that even near-total mechanisms of state control can be countered when ethical and eloquent writers tell the truth as they see it.

SOURCES:  Michael Confino, “Solzhenitsyn, the West, and the New Russian Nationalism,” Journal of Contemporary History 26 (1991): 611-636;  Sidney Monas, “Solzhenitsyn’s Life,” Russian Review 44 (1985): 397-402; Steven Rosefielde, “The First ‘Great Leap Forward’ Reconsidered: Lessons of Solzhenitsyn’s Gulag Archipelago,” Slavic Review 39.4 (December 1980): 559-587; Ronnie Dugger, On Reagan:  The Man and His Presidency (New York: McGraw-Hill, 1983); Dinesh D’Souza, Ronald Reagan:  How an Ordinary Man Became an Extraordinary Leader (New York:  Simon and Schuster, 1997); Edmund Morris, Dutch: A Memoir of Ronald Reagan (New York: Random House, 1999); Alexander Solzhenitsyn, “Address at Harvard Class Day Afternoon Exercises,” 8 June 1978; Harvey Fireside, “Dissident Visions of the USSR:  Medvedev, Sakharov, and Solzhenitsyn,” Polity 22.2 (Winter 1989): 213-229.

The visual iconic

A long-standing argument has been conducted by scholars of public argumentation over the specific power exerted in public life by visual images.  No one denies that the brain’s responses are viscerally shaped by visual stimuli in ways that differ from and sometimes exceed the workings of rational thought or emotional reaction.  But there has been a tendency to disparage the role of the image in public life as always subverting critical thought and deliberation, as when the evocative appeal of a picture leads viewers to rush to judgment.

These arguably pernicious consequences for careful political reaction may be at work even when it otherwise seems like powerful images (of, say, Katrina or tsunami aftermath, or of protesters being firehosed or beaten) evoke positive responses, like a rush to donate blood or money or food, or a sense of righteous indignation (as occurred when the Abu Ghraib photos were first circulated worldwide), since even in those cases a disproportionate reaction may be set in motion.  Audiences are thus moved to allocate resources in finally indefensible ways, expending an exaggerated share of their psychic care to erase injustices that can be seen, at the cost of relatively ignoring broader structural inequities that cannot be so easily visualized.  Given such a view, which I think is pervasive not just among scholars of argumentation but in the wider commentariat, pictures are seen as a shallow and shabby substitute for deliberation, truncating and oversimplifying as they often do more complicated issues.

Countering this perspective is one of the central tasks of Robert Hariman and John Lucaites’ No Caption Needed:  Iconic Photographs, Public Culture, and Liberal Democracy (University of Chicago Press, 2007).  Previewing their argument, they note that “Instead of seeing visual practices as threats to practical reasoning or as ornamental devices that may be a necessary concession to holding the attention of a mass audience, we believe they provide crucial social, emotional, and mnemonic materials for political identity and action.”  Certain key images which achieve iconic status shape and reflect the larger motifs of democratic life; the case studies include images such as “Migrant Mother,” “Times Square Kiss,” “Accidental Napalm,” “Raising the Flag on Mount Suribachi” (the Iwo Jima flag photo) and others whose short titles immediately call to mind unforgettable markers of contemporary culture.

The potentially negative repercussions of the wide circulation of such images might be undone, in part, they note, by the capacity of new digital technologies to yield parody and cruder mockeries likely to stymie the aims of propagandists (“The good news [and they also note the bad news side as well] is that the dreaded political spectacle is broken up, fenced in, or otherwise democratically interdicted by digital media.  Even if an icon became the leading image of a propaganda campaign, it would at the same time be pulled through circuits of appropriation that quickly distort and criticize its intended effect” [304-305]).  Interestingly, a review essay in the new Columbia Journalism Review interrogates such potential optimism, by asking (as does its subtitle), “what will become of photojournalism in an age of bytes and amateurs”?  The implication of the essay, which admits of no easy conclusion, is that the same PhotoShop world that can debunk dictators and their propaganda machines also risks undoing the powerful benefits of investigative photojournalism (Alissa Quart, “Flickring Out,” CJR, July/August 2008, pgs. 14-17).

That certain iconic images acquire so many layers of signification as to finally reflect and then shape a wider polity’s sensibilities does not, of course, vindicate the sea of more banal images that course through a culture and which may alternatively distract (if only by their novelty and bedazzlement) or interrupt practices of thoughtful deliberation.  I wonder whether the potentially tighter nexus of socially accumulating signification between iconic images and clearly explicable public arguments is exceptional in the larger context of publicly mass circulating images.

In a lecture that builds on the book, where significant but more mundane images are scrutinized (the version I heard centered on images of boots and hands), a sort of Miller Analogy is posed, where gesture is to oration as photojournalistic image is to ??? (deliberation, liberalism, politics, body politic).  The analogy does important argumentative work by suggesting that the integral relationship between gesture and public speaking is duplicated in the second pair.  In some ways the analogy works perfectly (helped along, of course, by the fact that in the boots/hands talk the images are of gestures) – both gestures and images are mediated, constitutive, stylistically supplemental, relatively open-to-interpretation signifiers, performative — but the potential limits are also interesting, I think, since it strikes me that the disjunction between the two halves of the analogy might justify a tilt in a direction more skeptical of these images as democratically productive or enabling or even necessary.

I say this for two reasons:

It seems to me that a gestural economy is oriented around emphasis, whereas a photojournalistic/image economy is oriented around attention – emphasis and attention are overlapping functions, of course, but are also, I think, importantly distinct.  When editors pick photos to accompany an essay, scanning perhaps thousands in the process to find the right one, they are arguably making their choices not because a photo emphasizes some aspect of the reportage’s argument or information but, I suspect, because it grabs the editor’s attention (and ours) by the collar.  Gestures are thus invariably more closely attuned to the content of (say) a speech than images are to issue coverage.  The main graphic used here in Atlanta to introduce news about our ongoing drought is a good example, I think – the local ABC affiliate is using an image of a parched African desert, which has at best a slight substantive connection to the contours of the debate over the water shortage here, except that it grabs attention by suggesting that Atlanta might become a wholly parched wasteland.

Another potentially important distinction between a gestural and imagistic economy is that a speaker’s gestures can be directly connected to considerations of that person’s ethos and character, and are thus able to provide information pertinent to judgment.  Even when we’ve never seen the speaker before, how she gestures (if only because we know or think we know that person controls her own body) can signify character within the cultural code useful to making broader judgments about authenticity and eloquence.  That connection seems wholly severed by more fragmentary images – we not only have no idea who took (and framed) the picture, but obviously someone photographing, say, the President can choose to make him tiny or huge, titanic or irrelevant – so we are presented with exceptionally powerful images but almost by definition in ways that are radically decontextualized from aspects of judging “standpoint” or “perspective” that we would normally and almost unthinkingly access were we watching a politician orating or answering questions.

These issues have a widening significance in the humanities, a point forcefully put in a recent essay by Keith Moxey (“Visual Studies and the Iconic Turn,” Journal of Visual Culture 7.2 [2008]: 131-146).  Professor Moxey, the Olin Professor of Art History at Columbia University, writes to counter what has been identified as an iconic turn (in the work of WJT Mitchell this is referred to as a “pictorial turn”).  Referring to arguments advanced by thinkers like Hans Ulrich Gumbrecht, and which Moxey sees as also reflected in the wider work of Badiou and Deleuze and others, he writes:

Works of art are objects now regarded as more appropriately encountered than interpreted.  This new breed of scholars attends to the ways in which images grab attention and shape reactions for they believe that the physical properties of images are as important as their social function.  In art history and visual studies, the disciplines that study visual culture, the terms ‘pictorial’ and ‘iconic turn’ currently refer to an approach to visual artifacts that recognizes these ontological demands.  Paying heed to that which cannot be read, to that which exceeds the possibilities of a semiotic interpretation, to that which defies understanding on the basis of convention, and to that which we can never define, offers a striking contrast to the dominant disciplinary paradigms of the recent past:  social history in the case of art history and identity politics, and cultural studies in the case of visual studies.  At their most radical, theories that claim access to the ‘real’ argue that perception allows us to ‘know’ the world in a way that may side-step the function of language. (132)

Moxey sees this move as productive but also problematic – productive because the iconic move (which I think one should note is not being defended by Hariman and Lucaites; one might actually read No Caption as a rebuke to such work) is a corrective against the tendency to read all images as representations of other (often textual) ideologies.  “By contrast, the contemporary focus on the presence of the visual object, how it engages with the viewer in ways that stray from the cultural agendas for which it was conceived and which may indeed affect us in a manner that sign systems fail to regulate, asks us to attend to the status of the image as a presentation” (133).

The implications of such work reach ever more deeply into communication scholarship, sometimes directly inspired by Hariman and Lucaites’ deeply impressive book (not to mention their longer term work on these issues) and sometimes shaped by other intellectual currents, such as interest in the relationship between rhetoric and aesthetics or the now wide-ranging influence of research on visual culture.

I was reminded of this in reading an impressive new essay written by one of my Georgia State colleagues, on the film Brokeback Mountain.  Davin Grindstaff, in the lead essay in the new Communication and Critical/Cultural Studies (“The Fist and the Corpse,” CCCS 5.3 [September 2008]: 223-244), connecting the idea of the image whose significance cannot be put into words to the philosophical/aesthetic category of the sublime, argues that a key potential contribution of queer theory for broader communication scholarship is that outlier expressions (discursive and imagistic) of sexual and orientational difference do more than challenge dominant heteronormative ideologies.  Rather, because they express identities that cannot be finally assimilated to the dominant culture (they thus can never be fully translated), such evocations of alternative orientational attraction actually help form constitutive conditions of possibility for the wider culture.  Fully conceptualizing the relationship of such alternative images (depicting as they seem to do radically different modes of being) to the dominant structures of public argument (do they simply undo or reveal the limits of rationality traditionally understood?  when do such sublime or terrifying alternatives constitute the limits of dominant culture, and when do they lapse into mirror-image caricature? etc.) provides an important supplement to the attempt to specify the conditions under which circulating images enable or derail social controversy.

Loving the library

A great library serves many functions:  research laboratory of the humanities, archival database of human accomplishment, repository of a culture’s stories.  For me the library is also a place of escape, a sacred space where quiet enables clear thinking, and sometimes an overwhelming place.  This last is a little hard to explain, except by noting that the possibilities for mental over-stimulation are ever present in well stocked reading rooms; in this sense Matthew Battles’ Library: An Unquiet History (New York: W.W. Norton, 2003) rightly starts by evoking the library’s capacity to induce vertigo:

As the reader gropes the stacks – lifting books and testing their heft, appraising the fall of letterforms on the title page, scrutinizing marks left by other readers – the more elusive knowledge itself becomes.  All that remains unknown seems to beckon from among the covers, between the lines.  In the library, the reader is wakened from the dream of communion with a single book, startled into a recognition of the word’s materiality by the sheer number of bound volumes; by the sound of pages turning, covers rubbing; by the rank smell of books gathered together in vast numbers.  Of course, the experience of the physicality of the book is strongest in the large libraries, where the accumulated weight of written words seems to exert a gravity all its own.

When I was growing up, during the summer school breaks I would ride my bike down to the public library in West Lafayette, Indiana, and pass the afternoons in the cool air conditioning reading books.  My tastes were pretty lowbrow – most of one summer I spent reading one Perry Mason mystery after another, another I passed reading Lincoln biographies – but I loved the place, and still find I can lose hours in libraries and their for-profit equivalents (Borders, Barnes & Noble, or any good used book store).  I consider both a nightmare and an inspiration the story retold by Battles (19) about Jorge Luis Borges, whose slowly deteriorating eyesight failed altogether at about the same time he was appointed to direct Argentina’s National Library:

No one should read self-pity or reproach
            Into this statement of the majesty
            Of God, who with such splendid irony
            Granted me books and blindness in one touch.

Unlike the small library kept by Seneca, whose collection was supposedly limited to only those volumes summarizing the best of human wisdom, the contemporary library is a much vaster archive that often grows by more volumes in a year than any single reader could absorb in a lifetime.  Today the books form a “collection” to be catalogued and counted and tallied, and whose consumption is measured not by the number of actual readers but by metrics that document checkout and reshelving.  Battles (a Harvard University rare books librarian) quotes Thomas Jefferson’s articulation of an alternative creed, where “a library book… is not, then, an article of mere consumption but fairly of capital.”  Jefferson is defending the Enlightenment ideal reflected in the impulse of his age to catalog everything, to accumulate knowledge, and to create encyclopedias and universities organized around the systematic investigation of all the world, social and natural.

Such an aspiration is subverted by the realization of how many bad, mundane, poorly written, and now irrelevant works are forever stored on hard-copy compact shelving or in digital databases.  Battles notes the irony that shelved next to the original Shakespeare Folios are whole shelves of conspiracy theories, alternative-authorship treatises, little books translating the plays into limerick form, works of mediocre criticism, and so on.  Speaking only of those works that aspire to faithfully reflect or tell the truth (i.e., those self-identified as nonfiction), the irony, of course, is that the larger the library the greater the odds that lies outnumber truths within the collection.  This is irrelevant, of course, to the logic that justifies libraries in the first place – books are worth cherishing because they reflect the most careful intellectual labors of any given generation, whether ultimately right or wrong, and serve as our best roadmap to the history of scholarship.  And putting the point into the language of truth versus lie makes it too strong, of course.  But even within the registers of pragmatism it is most likely the case that the vast majority of books in an ambitiously large collection do not contain useful knowledge, and more often than not make claims specifically rejected or subsequently ignored.

This fact doesn’t diminish the extraordinary accomplishment of the contemporary research library, of course.  But it does emphasize both its paradoxically relevant and anachronistic character and the extent to which the work of the library collection is mainly to form a parallel world of representations and theories and information.  Battles writes:  “For any question, the library offers no hope of a definitive answer:  though it necessarily contains prophesies of the lives of everyone who has lived or will live, as well as theories explaining the origins and workings of the universe itself, it must also contain unimaginable numbers of spurious accounts, with no means of sorting the true and immanent from the fallacious and misleading” (18).

I love libraries anyway.

Common wealth

Jeffrey Sachs’ book, Common Wealth:  Economics for a Crowded Planet (New York: Penguin, 2008), lays out the case for strategies that might both temper the inequities of globalization and generate more wealth for everyone.  The urgency of the argument rests on three claims: (a) more than a billion people on the planet live in abject poverty; (b) sustainable and environmentally sound approaches can no longer be dismissed as soft or squishy leftist solutions, since ignoring them will convert emerging inconveniences into ugly geopolitical crises (expanded disease vectors, water shortages, dirty air, no fossil fuels); and (c) the old dynamics of international rivalry, predicated on nations competing with each other for prized but scarce resources, must give way to strategies predicated more on cooperation than competition.

I confess that on first picking it up, I wondered whether Sachs had anything more to say than was said by circa-1970s eco-crisis figures like Paul Ehrlich and the Club of Rome types who resurrected neo-Malthusianism.  Although Sachs’ book is suffused with a rhetoric of optimism and updated by the discourses of globalization, one might say the argument is a familiar one nonetheless:  the world’s population is booming, closer contact brings both benefits and peril, the ascendancy into middle-class culture of two national populations (China, India) risks overwhelming systems of sustenance, there isn’t enough food or fuel to go around – all this sounds familiar.  The green technologies in which so much hope has been invested (solar power and all the rest) have shifted in the national consciousness, it seems, from reflecting a sort of vague crunchy utopianism to now sounding more hard-headed and urgently pragmatic, but the basic contours feel the same.

A certain vaguely apocalyptic mindset is at work, reflective I imagine of a whole range of lurking dangers, such as economic unease and climate change and six years of the war, combined with a sense that the nation’s leadership is either too oblivious or gridlocked to do anything while the dangers grow.  One of the science channels I sometimes watch shows documentary after documentary tallying the dangers.  First we were asked to imagine a world without humans, which made for a tantalizing premise but was also unnerving since the production visuals implied that some alien spaceship had airlifted everyone to Venus.  Now it’s the world without oil, a program that enacts a spirit of accumulating anger and paranoia by returning us repeatedly to a nice working-class couple victimized by the oil shortages – as the gas station lines get longer they start to snap at each other and some hooded thief siphons gas out of the truck and it gets worse from there.  Then, after that, a show describing a world without water, which may already be here – it turns out that in Spain they have to build water desalination plants and pipelines that run down the middle of interstate highways to make sure there’s enough drinking water.  And even if these scenarios fade from view it will be hard to forget Al Gore’s awful mantra:  soon there will be no more Snows of Kilimanjaro.

What makes Sachs’ argument more audacious is his sense of optimism that poverty can be remedied in ways that will not bankrupt the rich, and in so arguing he ends up directly confronting the often unspoken premise (based on the Biblical saying) that “the poor you will have with you always.”  Sachs claims that only modest investments in technological infrastructure are needed to cure poverty:  “For less than one percent of annual income of the high-income countries – the U.S., Europe, Japan, and a few others – we could end poverty once and for all.”

This point, which is also reflected in Al Gore’s optimism about green technology, is increasingly echoed by business school curricula teaching sustainable business practices.  Stanford, which was recently ranked Number One by a 2007 Aspen Institute report (“Beyond Grey Pinstripes”) for its programs in environmental responsibility, is only one of many now teaching these topics (e.g., the Harvard Business School case studies include a Nestle-based one on sustainable agriculture).  At the Presidio School of Management in San Francisco the whole curriculum is sustainability based.

The argument is increasingly made by Sachs and others that corporations that fail to consider the effects of their operations on the environment are not just damaging the planet but also shortchanging their own bottom lines.  Whether the combination of ecological unease and educational optimism can move the issue forward in a fundamental way remains, of course, to be seen.  But the combination of stressing the low cost of the investments needed to protect the environment and eradicate poverty and the high profits to be made by those who get there first can’t hurt.

SOURCES:  Jeffrey Sachs, Common Wealth: Economics for a Crowded Planet (New York: Penguin, 2008); Martha Brant and Miyoko Ohtake, “A Growth Industry: Business Schools are Teaching Entrepreneurs How to Get Rich Helping to Save the Environment,” Newsweek, 14 April 2008, p. 64; Kirk Shinkle, “Where Markets Don’t Work:  Economist Says Beating Global Poverty Would be a Bargain,” US News & World Report, 21 April 2008, p. 78.

Reading’s future

I’m staying in a lovely Savannah, Georgia hotel this weekend – here for the Southern Speech Communication Association annual meeting – where the table lamp in my room was constructed by drilling the main support through a stack of books.  As a book addict (one of the few addictions enjoying social approval), the sight caught me off guard, as if the old books, repurposed from edification to decoration, had been speared to death.

The decision just announced by the Borders bookstore chain to downsize its inventory by as much as 15% per store so that books can increasingly be displayed face out (as opposed to shelving them so that only the spine remains visible) is especially egregious, a saddening concession at the very heart of reading culture to the forces of spectacle.  Yes, I know they have to make money to stay open and times have been getting tougher for the chain.  But the strategy of offering fewer titles in order to sell more is likely only to further the endangered status of serious and difficult books.  When I complained to a worker at the mega-Borders in Atlanta, knowing it wasn’t her decision but hoping complaints might trickle up the chain, I noted that even if such a change was needed in their smaller shops because of the bottom line, surely it could be avoided in the better stocked stores frequented by heavy readers.  She was sympathetic but just shrugged and said it came down as a directive from “central.”  They always do.

The question of whether the death of the book and of traditional reading practices is to be mourned or simply noticed as another phase in the ever-evolving practices of information consumption has been eagerly discussed of late.  Reading scores are, depending on the measure, either stubbornly flat or falling.  After years of evidence that newspaper readership rates are in free fall, and recent attention noting with sadness the decline of the full-length book review as a widely disseminated essay form, the attention received by the November 2007 National Endowment for the Arts report was especially noteworthy.  The NEA report (“To Read or Not to Read”) argues that declining reading rates subvert citizenship and employability.

I work with colleagues who regularly express with serious conviction and intelligence the merits of all forms of knowledge circulation from novels to philosophical works to middlebrow television hits to podcasts and many others, and my point isn’t to say the sky is falling.  Video games activate cognitive response in important ways, for example.

Ursula LeGuin’s complex reaction to recent reporting on these issues rightly wondered at how sanguine the responses to the reading alarm bells have so often been; she cited an AP report that asked people about their reading habits and commiserated sympathetically with the Texas guy who shrugged and noted how reading made him sleepy.  But she also expressed the view that the concerns are overrated – “I think…[books] are here to stay.  It’s just that not all those many people ever did read them.  Why should we think everybody ought to now?”  The century of the book, which she identifies as running from 1850 to 1950, was a high-water mark, an aberration in the context of wider illiteracy and inattention.  The idea that book publishing should produce indefinitely expanding profits is a corporate model that LeGuin says makes no sense, since you can’t induce artificial demand for books in the same way you can rev up candy sales; at some point intelligent readers get bored and stop buying the sequels.

Others are OK with the end of printed texts for reasons that have more to do with celebration of the alternatives, and express concerns more closely connected to the book seen as technology than content.  Matthew Kirschenbaum has argued that reports like those from the NEA shortchange new media options, and to some extent the criticism is certainly well founded since rich data is just coalescing around the topic and so screen-based literacy is still relatively hard to defend in comparison to the hundreds of reports on more traditional modes of reading.  Eager to shift to portable technologies more environmentally friendly than the book (like Kindle), Jeff Bezos has been quoted as disparagingly referring to books as “the last bastion of analog.”

And even those nostalgic for books have sometimes admitted the transition to screen reading is not necessarily bad.  As the editors of the New Republic put it in December (in the context of a broader defense of the book), “the e-book is not the end of civilization.  If readers kindle to the Kindle, splendid:  Any reading is better than no reading.  Nothing valuable was ever preserved solely on Luddite grounds.”  As they put it, “Let us see how many conversions to literacy’s pleasures these gadgets make, and let us be grateful for them; but let us also recognize that we toy with the obsolescence of the book at our mental peril.”

The question relates, of course, to larger questions about the nature of literacy itself.  Modes of comprehension are reflected, to some extent, by the manner in which one consumes information.  When small screens result in bite-sized informational nuggets, does this mean that our broader cognitive capacity to handle more complicated information will be degraded?  To what extent are our critical capacities eroded when we less frequently interrogate texts and increasingly, simply, search them?  Do increasingly verbal and visual modes of communication (swapped video files, the pervasiveness of TV and iPods) undermine our intelligence?  Is viewing less praiseworthy than reading?  To some extent these are impossible questions, and timeless – debates over the relative merits of orality versus literacy are as old as Plato, and although Adorno was closely connected to the film industry, he still could not resist the comment that he was made to feel stupid by watching movies.

The argument was well summarized by the juxtaposed responses given to a New Yorker essay on the same topic.  In one corner, Maryanne Wolf (a distinguished professor at Tufts):  “My primary concern for the future of reading is that [the brain’s adaptive complexity] will be short-circuited in the next generation of readers, whose formative years may be immersed too early in the digitally driven media.  The addictive immediacy and the overwhelming volume of information available in the ‘Googled world’ of novice readers invite neither time for concentrated analysis and inference nor the motivation for them to think beyond all the information given.  Despite its extraordinary contributions, the digital world may be the greatest threat yet to the endangered reading brain as it has developed over the past five thousand years.”  And in the other, Edwin Battistella, from Southern Oregon University:  “One might argue that, beyond brain chemistry, the ways we engage with old and new media have great similarities.  Understanding a story, joke, film, or cartoon depends on a familiarity with conventions.  This familiarity allows the listener or viewer to be literate in those forms – to analyze, critique, evaluate, and extend them.  It may well be that animation is the next generation’s poetry and films its novels.  But, if that is so, reading will have just changed media, not been lost.”

I do appreciate, by the way, the considerable irony of blogging at such length about this topic.  Perhaps best to close, then, by simply saying:  thanks for reading…

SOURCES:  “The Battle of the Book,” New Republic, 10 December 2007, pp. 1-2; Ursula LeGuin, “Staying Awake: Notes on the Alleged Decline of Reading,” Harper’s, February 2008, pp. 33-38.

Suicide and the dissertation

On October 16, 1910, Carlo Michelstaedter mailed his just-completed dissertation on rhetoric and persuasion to his adviser at the University of Florence.  The next day he killed himself.  Michelstaedter was only 23.  The project, never defended, has circulated in something of a spectral afterlife ever since, with a translated edition published in 2004 by Yale University Press.

The phenomenon is rare but obviously not unique, and the relationship between artistic and intellectual production and suicide has been a subject of continuing interest.  But where many such accounts home in on the mania that might simultaneously yield interesting or even brilliant work while spiraling into madness, I’m more interested in cases where suicide comes across as an analytical or, if romantic, at least an intellectualized romantic gesture.  These, while less frequent, are not as isolated as one might imagine.

Consider the young philosopher Otto Weininger, described in Alex Ross’ recent book, who in 1903 (at the age of 23) shot himself after writing his dissertation, Sex & Character.  The incident made Weininger something of a celebrity since he chose as the location for his suicide the house where Beethoven had died.  Sales of the dissertation book soared.  Alban Berg devoured it, even annotating such apparent non sequiturs as “Everything purely aesthetic has no cultural value.”  And when Wittgenstein later wrote that “ethics and aesthetics are one,” he was quoting Weininger.

One might easily expand the reach of these examples beyond philosophy, and into the wider domain of the arts, though this quickly leads to more ambiguous examples.  The circumstances of Hart Crane’s suicide at the young age of 33 remain murky, since we know that he killed himself (by jumping off the deck of the ship returning him to the United States from Mexico), but also know that his judgment was likely clouded by alcohol.  Still, as Tóibín recently put it, “His myth as the poète maudit, the doomed, wild homosexual genius, America’s Rimbaud, had begun: his very name was a warning to the young about the dangers and delights of poetry.  It was a myth that even the seriousness and slow force of his poems and the studious tone of many of his letters would do little to dispel.”

These suicidal episodes, whose logic has always seemed to me desperately unconvincing, are thought-provoking nonetheless because they so radically challenge our sense of the book or painting or symphony as opening the space for conversation, or as gestures of invitation.  Suicide converts the work into an act of final closure, a self-extinguishing gift even when the text or artwork survives.  Guided by the Christian tradition, I tend to see such gestures as arrogant and grandiose and to agree with Alex Ross, who (speaking of this early twentieth-century period and of Weininger in particular) has written that “The bourgeois worship of art had implanted in artists’ minds an attitude of infallibility, according to which the imagination made its own laws.  That mentality made possible the extremes of modern art.”  Taking such artworks seriously is made yet more difficult since they tend to be, if not juvenilia, then at least the products of young and still not fully formed intellects, and so the texts that survive can be easy to pick apart.

But one might more charitably try to work through the logic of the ‘final gift,’ perhaps reading the suicide as enacting the work rather than nullifying, erasing, or overshadowing it.  This is, I admit, something of a hard case to make in the context of Weininger, whose Sex & Character indicted Europe as morally degenerate (he saw women and gays and Jews as markers of a fatal feminization of culture), where the cure was to come in the form of a redeeming Genius (who of course would be a macho man).  The argument is riddled with racism and sexism and homophobia, but beyond all this it is hard to imagine how Otto’s suicide enacts or anticipates or prophetically announces (as did John the Baptist) or prompts the arrival of this savior.

Michelstaedter is perhaps a better example, since his dissertation’s indictment of contemporary culture might be read as leaving only suicide as a rational response.  The thesis rereads (and I would say upends) the classical rhetorical tradition by redefining persuasion and rhetoric in opposition to each other, concepts more often understood as synonyms than as opposites.  Persuasion, the favored term in Michelstaedter’s new binary, refers to the sense of deeply settled and authentic conviction that rests at the root of the genuinely human being.  Rhetoric, the devil term, names the surface talk that whips audiences into behaviors that, if not dangerous, nonetheless fail to reflect their own true sense of themselves.  Contemporary culture has been overtaken by the rhetoricians.  Read against this position, and given that the characteristics of a life grounded in persuasion are by definition hard to make public or proselytize, perhaps only suicide makes authentic sense as a way of leading by example.

Here following Derrida, we might also speculate that perhaps only the gift of death (or to specify the point, the gift accompanied by death) can ever be finally authentic, for only the gift given under such circumstances renounces or forecloses even the implicit expectation of a response, refusing the imposition of a debt to be repaid by the ‘gift in return.’  The gift of death lies at the heart of Christian theology (and this was the starting point for Derrida’s lectures), because in the presence of God’s overpowering mystery only the act of self-immolation makes sense as a rational response (thus when one is baptized, as the New Testament writer Paul puts it, the Old Creation dies and a New Creation is born).  And likewise, given the infinite insult of humanity’s accumulated sin, for Christians only the infinite gift of Jesus’ death is able to make full atonement.

Derrida, elaborating Levinas and Heidegger, sees the gift of dying for someone else as significant not because it achieves a utilitarian gain (after all, by killing myself I do not avert your eventual death) but rather because it embodies and conveys an act of uniquely individual goodness (unique because it is a gift only I can give, and an act of goodness because it is offered in the necessary absence of any knowledge that its recipient will provide recompense or make an appropriate response).  Read this way, the gesture of suicide is transposed into an act of humility instead of arrogance.

Either way, the gift of death finally exceeds the capacity of rationalization, which of course is the very reason it is so conceptually central to the project of deconstruction.  As Abraham is finally unable to make sense of God’s demand that he offer the gift of his son Isaac, we too are only finally able to tremble in inarticulate response when facing the suicidal gesture.  A constituting paradox of New Testament (more precisely, Pauline) theology is the fact that the sacrificial gift that can never be repaid and which may never have expected a response is translated by Paul into a gift requiring endless and infinite restitution in the form of an impossibly pure response.  Struggling to work through this finally impossible paradox in his own discipleship Paul can only despair and then surrender:  “I do not understand my own actions.  For I do not do what I want, but I do the very thing I hate” (Romans 7:15, NRSV).  Paul’s surrender, his concession that he cannot resolve his own behavior is, finally, a giving way to the impossible mystery of grace.

However one resolves the suicidal problematic, one returns nonetheless to what seems to me the finally failed gesture of attaching a note (a poem, a dissertation) to the unspeakable act, for such a ‘note’ can only fail as explanation or justification or gift card.  And in the meantime, the unceasing collective impulse to talk through these impossible constraints on the human condition is deprived of its most sensitive and insightful contributing voices, silenced too soon.

SOURCES:  The information on Weininger comes from Alex Ross’ wonderful book, The Rest is Noise: Listening to the Twentieth Century (New York: Farrar, Straus and Giroux, 2007), pp. 38-39.  The Hart Crane example was suggested by Colm Tóibín’s review of the Library of America edition Hart Crane: Complete Poems and Selected Letters: Tóibín, “A Great American Visionary,” New York Review of Books, 17 April 2008, pp. 36-40.  Cf. Jacques Derrida, The Gift of Death, trans. David Wills (Chicago: University of Chicago Press, 1995); Carlo Michelstaedter, Persuasion and Rhetoric, trans. Russell Scott Valentino, Cinzia Blum, and David Depew (New Haven: Yale University Press, 2004).