Mark Noll’s The Scandal of the Evangelical Mind (1994), a book that scandalized the evangelical mind by noting that it wasn’t much in evidence (Noll scandalized some further when he announced in 2006 that he was leaving Wheaton College, after 27 years on the faculty, for Notre Dame), received a sequel of sorts in 2011 with Jesus Christ and the Life of the Mind (Eerdmans). Life of the Mind moves in a more hopeful direction by reconnecting with one of the most ancient of theological questions, often shorthanded as the distinction between Jerusalem and Athens: how does one reconcile the life of faith with the life of the mind?
The very question can seem absurd. Some Christian traditions have revered intellectual work as supplemental to or even constitutive of faith, and the world’s great centers of learning include many dedicated to propagating the faith, but within the contours of profoundly thoughtful efforts to apprehend God’s creation through the registers of reason as well as the more affectively sensitive mechanisms of intuition or simple unquestioning belief. For advocates of those traditions – I have in mind the towering scholarly accomplishments of Catholicism and of the Jesuits, or the Episcopalians with their metaphorical three-legged stool, but also the textually rigorous insistence that animates many Protestant and fundamentalist traditions and brings intellectual coherence to the “priesthood of the believer” (such as the originary impulse of the Churches/Disciples of Christ, founded by the Campbells and Barton Stone, to win converts by way of rigorous interdenominational debates) – a faith inconsistent with the dictates of rationality is a belief not worth having. Why would one worship a God who cannot be apprehended, if only in part, by use of the very mental capacity that most fully distinguishes humans as God’s creation?
But the New Testament itself provides ammunition to those who see the Gospel as requiring a renunciation of the foolish dictates of reason. The Apostle Paul thunders at the church in Corinth in a tone that taunts the ivory tower elites of his time:
For it is written: “I will destroy the wisdom of the wise; the intelligence of the intelligent I will frustrate.” Where is the wise man? Where is the scholar? Where is the philosopher of this age? Has not God made foolish the wisdom of this world? For since in the wisdom of God the world through its wisdom did not know him, God was pleased through the foolishness of what was preached to save those who believe. Jews demand miraculous signs and Greeks look for wisdom, but we preach Christ crucified: a stumbling block to Jews and foolishness to Gentiles, but to those whom God has called, both Jews and Greeks, Christ the power of God and the wisdom of God. For the foolishness of God is wiser than man’s wisdom, and the weakness of God is stronger than man’s strength. Brothers, think of what you were when you were called. Not many of you were wise by human standards; not many were influential; not many were of noble birth. But God chose the foolish things of the world to shame the wise; God chose the weak things of the world to shame the strong. He chose the lowly things of this world and the despised things – and the things that are not – to nullify the things that are, so that no one may boast before him. (1 Cor. 1:19-29).
There is much to say about this passage, and about related passages in the Book of Acts that describe moments of encounter between budding Christian doctrine and the worldly philosophers. But to some, Paul is here recommending the abandonment of scholasticism and the deep methods of inquiry that can incline humans to hubris. Augustine and others famously warned against confidence in academic inquiry – how might one trust that truth will emerge out of exchanges conducted among fools? – all presumably to be renounced in preference for prayer, whose interactions bring human frailty into contact with Divine perfection. And yet the New Testament also recounts multiple scenes of attempted conversion predicated not on the performance of miracles or the enactment of loving care but on the incisive work of public argument (e.g., Acts 6:8-10; 9:28-30; 17:16-17; 18:27-28; 19:8-10). The message regarding scholarship is thus often read as profoundly mixed: helpful as a tactic of potential conversion but also dangerous, not only because of its possible inducement to hubris but because clever sophistry (of the type Satan practiced on Jesus as he fasted in the wilderness for forty days and nights, or attempted in his jousting with God over Job) can lead the innocent astray.
When it comes to those Christians who have made professional commitments to the work of the public university, the issue is further complicated. A life built on unwavering adherence to the Christian gospel can be understood as profoundly at odds with the spirit of skepticism and unending inquiry that underwrites the academy. It takes only a few short steps for many believers to see secular institutions (like, for example, public universities) as inevitably hostile to Christian discipleship. Meanwhile expressions of doubt, the very lifeblood of academic inquiry, are too easily read as heretical when articulated in religious settings. Athens and Jerusalem are thus apprehended as two worlds completely divided and incommensurable with one another. (This, I think, is deeply unfortunate, and it has always seemed to me that faith traditions would be made stronger by welcoming and working through expressions of doubt. There is support for this position in the New Testament gospels – in one case, recounted at Mark 9:24, the father of a demon-possessed boy comes to Jesus and asks that his son be healed. Jesus says something like Everything is possible for those who believe. The father replies by expressing a paradox often felt by even the most dedicated believers: I do believe; help me overcome my unbelief. Importantly, Jesus is not offended by the contradiction but heals the boy. And when the famous doubting Thomas expresses his skepticism about the resurrection, Jesus does not throw him out; rather, as recounted in John’s gospel (20:27), Jesus replies “Put your finger here; see my hands. Reach out your hand and put it into my side.” Doubt is heard as an invitation to fellowship and grace and not read as blasphemy.)
The risk that the phrase “Christian intellectual” will be thought a contradiction in terms, and the related consequence that Christianity, if seen as embracing anti-intellectualism, will repel brilliant seekers, are what I take as the animating impulses of Noll’s recent work. His project is to argue the consistency of scholarship with Christianity and, more than that, to assert that Christians who do scholarship importantly enrich academic work.
A common approach in taking up this issue is to cite scripture on the topic of noble work. In a number of places believers are called, irrespective of the location or nature of their employment, to excellence in the workplace, and I have heard these admonitions cited to induce even professors to offer their dedicated and best work. Some examples: The Old Testament Proverbs in several places advocate diligence (12:24 – “diligent hands will rule”; 14:23 – “all hard work brings a profit”). Or from the Acts of the Apostles (5:38-39), a test that much academic work seems easily to pass: “For if their purpose or activity is of human origin, it will fail. But if it is from God, you will not be able to stop these men; you will only find yourself fighting against God.” Or, alternatively, the commendation made in the letter to the Colossian church at 3:17: “Do it all in the name of the Lord Jesus” (which one might read as a command to dedicate all work, especially the work of the mind, to God’s honor); and later (3:23), “Whatever you do, work at it with all your heart…” Or, from the first letter to the Corinthian Christians, an injunction essentially to “bloom where you are planted”: “Nevertheless, each one should retain the place in life that the Lord assigned to him and to which God has called him.” Versions of the same idea are repeated three times in that one chapter (7:17, 7:20, 7:24) alone. And in the letter to the church in Ephesus, Paul writes (6:5-8): “Obey earthly masters with respect and fear and with sincerity of heart, just as you would obey Christ… like slaves of Christ doing the will of God from your heart. Serve wholeheartedly, as if you were serving the Lord, not men.”
But this is not the path laid out by Prof. Noll. Instead, Life of the Mind searches scripture for those places where insights into intellectualism can be abstracted into a philosophy of Christian scholarship. What Noll finds everywhere are invitations to closer scrutiny and deeper inquiry. In the Christian creeds and in the major doctrinal statements of the New Testament (such as the first chapter of Colossians, 1:16-17), he finds claims about the created world that he reads as inviting Christians to respond to creation with the impulse to explore further and learn. In the statements of Jesus to which I’ve alluded already (especially, for Noll, “Come, and see!”), he apprehends a scholarly impulse which one can credit by faith with always rewarding closer scrutiny. What Noll advocates is a faithful confidence that deeper engagement with the protocols of learning will lead thinkers closer to God and not further away:
The specific requirements for Christian scholarship all grow naturally from Christian worship inspired by love: confidence in the ability to gain knowledge about the world because the world was brought into being through Jesus Christ; commitment to careful examination of the objects of study through “coming and seeing”; trust that good scholarship and faithful discipleship cannot ultimately conflict; humility from realizing that learning depends at every step on a merciful God; and gratitude in acknowledging that all good gifts come from above. If, as Christians believe, “all the treasures of wisdom and knowledge” are hid in Christ (Col 2:3), the time is always past for talking about treasure hunting. The time is always now to unearth treasure, offer it to others for critique or affirmation, and above all find in it new occasions to glorify the one who gives the treasure and is the treasure himself. (p. 149).
Shortly after its publication, the great Yale theologian Nicholas Wolterstorff wrote a positive review that nonetheless wondered whether Noll’s three-chapter, discipline-by-discipline case studies were rich enough to make a compelling case for Christian contributions to scholarship. He wrote:
Let me add that whereas the Christological case that Noll makes for Christians engaging in serious learning seems to me both compelling and rich, the guidelines that he teases out of classic Christology for how we actually engage in learning strike me as rather thin by comparison. Christians, he says, will affirm contingency. They will affirm particularity. With the Incarnation in mind they will insist, by analogy, that ascribing a natural cause to some event is compatible with ascribing it to God as well. They will resist the pride characteristic of intellectuals. All true; but very general and abstract.
The point is well taken, although given the common radical separation of secular and sacred intellectual inquiry, the simple articulation of a Christian alternative might itself provoke deeper thinking.
For me, the trickier question is whether, despite the intellectual payoffs to be found in the great faith traditions, they should ever be strongly asserted in the public university. One need not condemn Christians to silence in the public square to recognize that in an institution aiming to welcome and encourage thinkers from all backgrounds and perspectives, the forceful articulation of Christian theological imperatives risks doing as much damage to the open spirit of inquiry as good, by silencing those who will wonder whether they can freely disagree with faith commitments so deeply held. I wonder. It may be that the scholarly work Noll advocates is best undertaken in explicitly religious institutions, from which its findings and main claims can be disseminated more widely as an implicit corrective to the narrower work of a public educational system that must, of rightful necessity, orient its efforts to reach more widely.
For me, then, Noll’s work finally raises this question: Even conceding the strongest case for Christian scholarship (which is to say, the case that an articulated Christian worldview can enliven any disciplinary conversation), does it follow that Christian commitments should be always and everywhere articulated? Or, put a bit differently, does every workplace obligate the believer to proselytize? Is it possible that, as Paul made tents to raise money for his missionary journeys, there were days when he simply and quietly engaged in craftsmanship without preaching to his colleagues? As the New Testament figure Lydia dealt in purple cloth and hosted the early church, did she try to determine how this or that biblical verse might better inform her craft? Or were these believers content simply to segment their good work, willing to concentrate their evangelism in other locales where Christian testimony would be more gratefully received than the tent or cloth workshops?
Sam Becker, for whom the University of Iowa Department of Communication Studies building is named, and whose six-decade career was highly accomplished, passed away on November 8. While I was a doctoral student at Iowa in the 1990s, Becker was already retired but still ever-present, and by the sheer randomness of graduate student office assignment two colleagues and I ended up right across the hall from his small retirement office. He was often available for conversation and also extraordinarily productive, a professor who gave the lie to the all-too-popular idea, now dominant in the humanities and some of the social sciences, that the only way to get real research done is to work in seclusion, either behind a closed office door or, even better, from the house. Looking down hallways of closed faculty office doors (and I mean this as no insult, since Iowa is typical, I think, of today’s academic culture), I was always struck by the fact that the most open and accessible professors – Gronbeck, Ochs, Becker – were also among the most research-productive.
By my time in the program, Dr. Becker was only occasionally teaching, but he taught one of my very first classes, a one-credit-hour professionalization seminar that met, as I recall it, for only about 45 or 50 minutes a week. We were coached on the protocols of academic citation, taught the mechanics of the main communication associations (Sam’s lifelong commitment to the National Communication Association meant we heard most about that one), and talked about how one best organizes one’s research work. I believe it was there that he told the story of how he first got hooked on academic scholarship in communication. He was a young undergraduate at the university in the 1940s, and was encouraged to turn a classroom paper into a publication, which he landed in one of the leading outlets for communication research. Over the next sixty years he produced more than 100 peer-reviewed research essays, advised nearly 70 doctoral dissertations, and won high acclaim for his work (NCA president, NCA Distinguished Scholar, recipient of the association’s first mentorship award, and many more honors, including a number on the Iowa campus capped by the rare privilege of a namesake building).
Professor Becker’s death evokes in me, against my better judgment perhaps, a nostalgic desire for a sort of academic culture that likely no longer exists. The temptation to nostalgia about the academic past is fraught. Even as the American public universities threw open their doors and programs in the 1960s and ’70s, they were far from perfect, and the political constraints under which professors work today are in some respects simply not comparable. And universities look a lot different through the eyes of a professor than they do through the eyes of a graduate student. It is easier to imagine public university work as a sort of exotic salon culture, the pure life of the mind where professors think great thoughts in communion with their colleagues, when one’s schedule, overloaded as graduate student life always is, consists of one intellectual interaction after another, seminar to seminar and great book to great book. The academic life performed for graduate students, indeed for all students, is simply not the same as the one lived in a profession as dominated by committee meetings as by discussions of big ideas. Comparisons between past and present too often fail.
But my nostalgia lingers.
Sam Becker represented a style of academic life, and an extraordinary commitment to building local programmatic excellence, that is harder to come by today (and in my world so infrequent as to be essentially nonexistent), living as we do at a time when many professors understandably draw their main intellectual sustenance from longer-distance networking – social media, blog- and listserv-centered – and find themselves too informationally enriched (or, alternatively, overstimulated) and even overwhelmed by those gushing sources to desire anything but minimal face-to-face engagement with on-campus colleagues. Part of this, I believe, is the characteristic affinity of rather-shy-and-life-of-the-mind-driven academics for the more controllable interactions of online and distance encounter; it is easier to present a polished, more clever persona through Facebook and blogging than in the heat of a tedious faculty meeting, and so many gravitate to the New Comfort Zones of virtual engagement.
Entire academic generations have been mentored to the view that their most assured path to professional success is isolation – keep your head down, don’t overcommit, set up a home office and be disciplined about working there, spend as few hours on campus as you can, because if word gets out that you’re available then colleagues and students will eat you alive and rob you of all your productive energy. The advice is self-reinforcing: when one resolves to spend only ten hours a week on campus, those ten hours quickly fill to capacity, not surprisingly, as students figure out they are the only opportunities for real access not coordinated by email. The approach affords little time to linger, for lingering is time wasted. Sorry! I’m drowning; gotta run! becomes an easy refrain.
All this is understandable and not unreasonable. I’m as prone to the temptations as anyone. The seductive blend of intellectual (over)stimulation, where ideas can be consumed at whatever pace one prefers, and staged (or scripted) encounters managed from the comfort of the computer desk chair can simply feel more enriching than sitting through a long research presentation or a comprehensive examination defense.
Donovan Ochs, a Becker colleague at Iowa, and Sam Becker, both veterans of military service (I have the sense that had something to do with it), put in pretty regular daily schedules. Ochs, with whom I had the chance to study classical rhetoric and do an independent study on Aristotle, often put in 8-to-5 days. As I recall it, Donovan wore a tie every day, even in the 1990s when few others did, and his door was always open apart from times when he was in private meetings or teaching. When I asked him once how he got any work done under those conditions, he was plainly surprised at the question, and his reply – what I do here is my work – led to wider conversations about academic life. Ochs noted that an open-door policy did not prevent his research productivity, since the mornings typically gave him many undisturbed hours to write and think. His door wasn’t open to enable empty chit-chat – he was always profoundly encouraging but kept conversations mainly work-focused. And because he worked, seriously worked, for the duration of the regular day, he avoided the guilt so many of us feel at thinking we should be working at all hours of the night. I always had the sense Ochs went home with a clean conscience – he had a life apart from Aristotle, a healthy set of diverse family and life interests, and retirement presented no apparent trauma for him.
It is simply impossible to generalize about the state of faculty engagement given the diversity of campus environments, and unproductive even to try. There remain, of course, more pastoral campus settings where relatively smaller student cohorts and perhaps better-supported faculty lives enable close intellectual community that at some level still adheres to the wider mythology of enclaved campus life. But life for students and professors at the big state universities, and I suspect even in places where campus life is residential and intensively communal, is changing. If the National Surveys of Student Engagement are to be trusted, students report less frequent conversations with their classroom professors outside of regular class times. Michael Crow, the president of Arizona State University and a key (and controversial) national advocate for delivering high-quality, research-intensive educational outcomes to very large numbers of enrolled students (ASU is now one of the nation’s largest universities), often repeats the idea that demographic surges require a model of education that is not numerically exclusive (rejecting the backward logic by which the more people a school turns away, the better its reputation). If public state institutions cannot find ways to serve well the students who are academically gifted but not financially or intellectually elite enough to enroll at the most exclusive private schools, Crow often says, we’ll end up with a two-tiered system where the rich kids are educated by professors and the rest are educated by computers.
The truth is that the big public universities are fast veering in the latter direction, not in the sense that MOOCs educate our students but in the sense that the experience, especially in the first couple of years, can be awfully impersonal, if not on account of large classes then because so many early classes are taught by graduate students and temporary faculty whose good teaching may nonetheless insufficiently convey a sense of place and local intellectual tradition. The wider incentive structures are too often negative: no pay raises; the demoralization that follows from ever more frequently expressed taxpayer hostility to higher education; the pressures to win grants and relentlessly publish papers; accountability pressures that seem to require more and more administrative meetings; the idea that one must always stay on the job market or likely never get a pay raise here; the growing number of students and, in some states, the expectation of higher instructional workloads; a tendency to think of day-to-day intellectual connectivity as simply more uncompensated service. All this lures professors away from the committed work of building local loyalty and into more defensive practices that feel like simple self-preservation but are also, I suspect, self-defeating, because they only accelerate a vicious cycle of brief and highly focused teaching and mentorship alternating with long stretches away. Participate in a sustained reading group? Sorry, I just don’t have any time for that. Organize a campus colloquium, film or lecture series? Ditto. And since everyone else is overwhelmed too, what would be the point? No one would come. Did you see the lead essay in the new QJS? I’m curious what you thought. Gosh, I’m months behind on that kind of reading – all my energy has to go to my book. What energized you at the last big national conference? Oh, couldn’t make it – and how could I when the university gives so little professional development support?
The picture I’ve just drawn is exaggerated, thankfully, but I suspect that even as caricature it retains a certain familiarity. Fortunately the energetic participation new faculty bring to academic programs is inspirational, and idealism trumps low morale for the many staff and faculty who sustain both distance-networked and local connectivity. Whatever the incentives, every department includes professors at all ranks who pour their energies into building real collective intellectual communities. It might also be added that the struggle I’m describing may be most accentuated in the humanities, where the norms of academic research are only slowly shifting away from the lone professor writing her latest book toward practices of team-based interdisciplinarity. The beneficial consequences of globally networked disciplinary conversation arose for good reasons – the generation of new knowledge is more dynamic than ever before in human history – even despite data suggesting that (at least in communication) the research work is increasingly concentrated in smaller numbers of publishing faculty (a recent analysis in speech communication showed that something like 85% of all professors had not published anything or received external support for their projects in the previous five years). But I wonder whether the number of high-productivity and communally engaged scholars can be sustained when their morale is under assault too, because the dynamics induced by understandable mentorship advice and reduced support bring into ever-starker relief the old 20/80 rule, where 20% do 80% of the work. As 20/80 becomes 10/90, this is how intellectual dynamism, and universities, die.
Sam Becker’s career suggests a thought experiment: can the considerable benefits of 21st-century intellectual life be improved by some integration of the professional practices of the 20th? I want to hypothesize that what so often seems like the depressing path of today’s stressed system of public higher education need not be accepted as a New Normal. If public higher education is to retain its historical vitality, changes will have to happen on many fronts. Taxpayers and legislators will need to be persuaded of public education’s value. Reasonable systems of accountability will need to document the outcomes of pedagogical encounter, I know. But there is a role for us faculty to play as well, and Sam Becker’s professional life suggests some of the possibilities. Becker knew that good and committed scholars who simply show up day after day and make themselves open to engaged discussion with others, both online and in person, attract other smart students and teachers in ways that energize the common enterprise, and that calling it quits at the end of the workday creates intellectual sustainability too, as people find time away every single day to recharge. He saw, because he so often created it himself, the vital and passionate sense of connection that emerges as intelligent participants in the educational experience talk to each other and rev up excitement about ideas one discussion at a time. He realized that when everyone is present and engaged in program building, service work is made more manageable by division among larger numbers of connected co-workers. I cannot prove it, but my suspicion is that the great intellectually vital centers of communication scholarship were (and are) built more fully by acts of local loyalism than by enterprising free-agent academic nomadism.
The key is not simply hallway cultures of greater presence; such cultures must also entail high degrees of intellectual openness, a refusal to see the scholarly enterprise as ideational warfare or zero-sum, even in contexts where resourcing is finite. And this was another of the Becker legacies. During his five decades in the department, communication studies nurtured, developed, and then in some cases spun off new academic units, including theater and film. Those discussions were not always smooth or even friendly, and Becker had strong opinions. But what he always preserved, as I saw it, was a culture of openness to new and productive work – an openness that led him to shift over his own career from interests in quantitative social science to qualitative research in the vein of British cultural studies and then back again. No department is ever entirely free of intellectual entanglements – smart people will tend always to prefer their own lines of inquiry and can too easily fail to see the value of the efforts undertaken by others. But so long as there are some Beckers around, the inclinations to either/or warfare that have consumed whole programs in acrimony can be channeled productively into both/and collective accomplishment.
Fantasies, perhaps. But these are ideas whose lifelong embodiment in one Samuel L. Becker – Eagle Scout, decorated war hero, “Mr. University of Iowa,” champion of social justice and the idea that public education enriches us all, extraordinary teacher and scholar and administrator – remain for me compelling, even given the New Normals of this new century.
On one of the websites for students of rhetorical theory, conversation has recently focused on the status of psychoanalytic criticism and the question of whether its insights are being willfully ignored by the larger field. Josh Gunn kicked off the discussion, in part, by noting that despite recent interest, “rhetorical theory – at least on the communication [studies] side – is hampered by a certain blind spot caused by the avoidance of psychoanalysis, and more specifically, the inadmissibility of the category of the unconscious.” Gunn rightly wonders at the absurdity of this given how many revered figures in rhetorical theory have been explicitly influenced by or have reacted against Freud, Lacan, Klein, Jung and others.
In the ensuing back-and-forth a range of perspectives have been expressed: some writing to agree that psychoanalysis does seem to provoke unique antipathy from students assigned to encounter it, others speculating on the causes (is it because communication was more a journal than a book field? did the discipline’s work in response to behaviorism inoculate scholars against its insights? has psychoanalysis been more widely tainted, thus deterring investigation from the outset?), and so on. Perhaps not surprisingly, some of the explanations veer to the therapeutic – several responses convey anecdotes of a visceral (and by implication anti-intellectual) refusal to take psychoanalytic work seriously: sneering senior scholars, wink-wink-nudge-nudge sorts of boundary policing behavior, and the (not-so-)subtle steering of graduate students away from the theoretical insights of psychoanalysis.
As I’ve been thinking about all this, I don’t find myself in particular disagreement, except that I don’t think the phenomenon is unique to psychoanalysis. Rather, what we are seeing are the ongoing consequences of theoretical hyper-specialization, of which these are simply several among many local occurrences. In contrast to those who continue to announce the Death of Theory, it seems to me that we are still working to live with the consequences of having at our disposal So Many Seriously Elaborated Theories, which in turn gives rise to a mostly frustrating situation where the maps seem richer, or at least larger, than the territory.
I do not note this to endorse hostility to the elaboration of theoretical sophistication, but simply to note how glutted we are with it. The symptoms, at least in communication studies, are everywhere: A readier willingness to abandon the mega-conferences for more intimate niche meetings where one can burrow in and keep up. The tendency to assign secondary sources even in doctoral seminars, or, when primary works are engaged, to isolate them from the conversations in which they participated (which results in an alternative tendency to see originary controversies, such as the big arguments between Foucault and Sartre, or Fraser and Habermas, as pretty much settled history to be filed in the same category as “the earth is round”). A growing impatience with arms-length peer review and other gatekeeping work undertaken to make the incoming ocean of new material comprehensible, and a more widespread corrosive cynicism about the larger enterprise. The increasing frequency of major conference presentations, even by serious senior scholars, that don’t seem to say much of anything new but mostly offer a repetition of the theoretically same. An inclination to see friendly work as fully appreciating the rich nuance of my own tradition, and hostile work as reducing my tradition to caricature. A wider tendency to see the dissertation not as evidence of a student’s ability to undertake a serious research project, but as an indication of the project whose trajectory will forever define a career.
Another key marker is the level of defensiveness, sometimes veering into animus, that I often hear expressed by advocates of every perspective who feel their work is under siege: Marxist theory, argumentation studies, close textual analysis, historical/archival work, postcolonial and queer theory, cultural studies, feminist scholarship, and the list could be considerably lengthened. All feel under attack, and to some extent sustain intellectual solidarity by insisting enemies are at the gate. And within these traditions fragmentation continues apace – a longstanding theme in a number of the convention conversations I hear is how scholars who for many years have labored to make visible the cultural contributions of gays and lesbians see themselves as today marginalized by queer theory, and in turn how queer theory seems to be marginalizing bisexual and transgendered approaches. This theme is not limited to rhetorical studies but is more widely sensed across the broader inquiry of communication scholars: the television studies people feel like they aren’t taken seriously, and so do the performance theorists, the cinema studies scholars, the interpersonal researchers, the quantoids, the public opinion theorists, those who first encountered communication through forensics or theater, the TV and film production faculty, ethnographers, organizational communication scholars, mass communication empiricists, public relations practitioners, and those who teach students for industry work.
As my career has taken me in the direction of administrative work, I see the same trends more widely as they shape conversations within the humanities and beyond. When I first had the audacity in a meeting of chairs from the full range of disciplines to say that external resources are harder to find in the humanities – I thought everyone agreed with that – I was surprised that the most assertive push-back came from a colleague in biology, who was there to argue in detail his relative deprivation within the wider university. His case was not absurd: it is hard to argue anyone is properly supported in the modern public research university.
I don’t see this defensiveness as a reflection of bad faith or of animus. For in a sense all of us are right – one does have to exercise eternal vigilance in defending one’s research perspective, because in a universe of so many well-elaborated accounts of human behavior the most likely danger is being forgotten or overshadowed given the broader cacophony. Thus the paradox that while more journals are now published in the humanities than ever before, the individual researchers I talk with see fewer and fewer outlets available for their sort of work. Or, to further mix the metaphors, there are not only more intellectual fortresses today, but they are better fortified against attack and protected against the wandering tourist and amateur dabbler than ever before.
It is true, I suppose, that within each theoretical community are some who treat, say, Anti-Oedipus or Lacan’s seminars or the Prison Notebooks or the Rhetoric as holy scripture. But the issue is less that each theorist has induced a cult than that, in general, scholars who are otherwise persuaded they cannot possibly know every perspective well tend to stick with the one rich approach into which they were first acculturated. And so what was and is seen by some as a sort of happy theoretical pluralism, a view still promoted by the wider impulses to boundary-cross and be interdisciplinary and all the rest, has devolved into a more frequently expressed surliness about colleagues who “won’t do the work to stay current,” a wider reliance on secondary sources like the Dummies guides and Cambridge Companions, the more frequent play (in responding to outsider critics) of the “you don’t appreciate the subtlety of my theory when it comes to ___” card, and an even more common resort by the basically friendly to the tactic of heavy-note-taking silence or the helpful “you should really read [insert my theorist],” or, more generally, the “have you thought about this?” conference response or query. One of the most common questions I hear my colleagues ask of one another is one I often ask myself: “If you could recommend three or four short and accessible overviews to ____ that would help me get up to speed, what would you suggest?” It’s asking for an academic life preserver.
All this is sparked less by ill will or ideological refusal than by the simple unwillingness to confess “I am unable to offer a thoughtful response to your reading of Ranciere because I didn’t know he would be discussed today and so I didn’t have the chance to beef up on my Ranciere for Dummies, and because it takes every minute I have available for intellectual work just to keep up on my Burke.” The eye-rolling response is thus sometimes less reflective of substantively well-grounded opposition than the expression of a weirdly humble recognition of the game we think everyone is playing: the gotcha stratagem of “there s/he goes again, showing off everything s/he knows about Cicero.” At a time when credible humanistic research is said to be impossible apart from mastery of all social theory, all of the philosophical and aesthetic traditions, (increasingly) the life sciences (cognitive theory, evolutionary psychology, accounts of chaos and networks and more), and the globalized set of artifacts that underwrite comparative work, the task seems overwhelming.
My point is not to be alarmist or defeatist about the enterprise. Specialization is not new, and has elicited expressions of concern for generations. To some extent the theoretical proliferation is self-correcting – people participating in a bounded academic conversation do move on, and not every carefully enunciated perspective finds a following. There remain exceptionally skilled intellectuals who seem to know everything and who are apparently able to keep up with all the wider literatures. And too often the expressed difficulties in “keeping up” exaggerate the challenge in an age when more resources than ever are available to enable one’s informational literacy, and when “I don’t have the time to understand [feminist] [critical race] [queer] theory” is a too-convenient excuse to ignore perspectives that elites brushed off even when Renaissance Giants Walked the Earth and only had to stay current on the sum of human knowledge contained in fifty books.
And because the challenges of surfing the sea of new literature and getting others interested in one’s work are by now so universal, I have little to add to the range of problematic correctives. The idea of reinstating a common canon holds little appeal, and for good reason. Nor is announcing the Death of Theory, or insisting on the Priority of the Local or the Case, especially compelling. My own preference, given a background in debating, is to “teach the controversies,” but that approach isn’t ideologically innocent either. If book publishers survive, I think the impetus to anthologies that now characterizes cinema studies is likely to expand more widely within communication scholarship. But there are dangers in too readily recommending hyper-specialization to doctoral students, in paper writing revved up out of fast tours of JSTOR and Project Muse, and in too quickly acceding to happy talk about theoretical pluralism. Better, in our own intellectual labors, to insistently listen to and reach out to other perspectives and work like hell to keep up with the wider world of humanistic scholarship.
And sometimes, if only as a mechanism to preserve one’s sanity, a little eye rolling may also be in order. Just keep it to yourself, please.
A 2008 special issue of New Literary History (vol. 39) is focused on the future of literary history (and, relatedly, comparative literary studies) given globalization. To some extent one can track the complicated history of World Literature through the early and influential essays of Rene Wellek, who advocated for comparative scholarship even as he warned against the dangers of investing disciplinary energy in the search for covering laws and causal relationships between literature and the wider society. The titles of Wellek’s much-cited 1958 talk, “The Crisis of Comparative Literature,” and his 1973 essay “The Fall of Literary History,” convey some sense of his pessimism about the prospects for defensible work.
Of course the very term World Literature has to be carefully used since one must always demarcate the multiple possibilities implied by the phrase. Some use World Literature to reference all the literature produced in the world, some see it as referring to Kant and Goethe’s dream (Goethe in 1827: “a universal world literature is in the process of being constituted”) of an international body of transcendently superb literature, and still others to reference those few novels that have found truly international fame. And so some who are invested in comparative work today, often undertaken to throw American cultural productions into a wider perspective of circulation and resistance, prefer terms like transcultural literary history (Pettersson). In the context of the theoretical care one must take even to begin this kind of work (the complications of which are unwittingly revealed in Walter Veit’s summation of Linda Hutcheon’s call for a “new history of literature” which “has to be constructed as a relational, contrapuntal, polycommunal, polyethnic, multiperspectival comparative history”), the project remains inherently appealing: who would oppose the idea of research that induces cross-cultural sensitivity and understanding, even realizing its final impossibility?
After buzzing along for decades – or at least since the 1950’s, when the International Comparative Literature Association first met to debate the potential for doing literary historical work – transcultural literary studies has received new attention thanks to two much-discussed interventions: Franco Moretti’s essay “Conjectures on World Literature” (which anchors the 2004 Verso anthology Debating World Literature and formed a sort of introduction to his widely read book a year later) and Pascale Casanova’s The World Republic of Letters (trans. M.B. DeBevoise; Cambridge, Mass.: Harvard UP, 2004). Moretti’s work has gotten a lot of attention given his heretical view that the sheer quantity of the world’s literature, which now escapes the possibilities of close textual analysis, requires “distant reading,” which is to say sophisticated forms of macro-data analysis that can reveal patterns of novelistic diffusion worldwide.
But things get tricky fast. Fredric Jameson, who leads off and who has long expressed skepticism about the work of literary historians (noting in an address to a 1984 Hong Kong conference on Rewriting Literary History that “few of us think of our work in terms of literary history,” and having subsequently called repeated attention to the essentially ahistorical nature of postmodernity), argues that the dialectical impulses of economic globalization simultaneously promise cultural liberation even as the economic chains are slowly tightened, and in ways that finally limit the range of cultural productions as well. To be concrete, Jameson highlights how global capital appears to open all cultures to all populations, even as, over time, a shrinking number of transnational conglomerates end up ultimately stifling all but the handful of mainly English-language novels able to turn a profit. He is especially keen on Michael Mann’s argument that the global economy is “encaging” – that is, as Jameson describes it, “the new global division of labor is” organized so that “at first it is useful for certain countries to specialize…. Today, however, when self-sufficiency is a thing of the past, and when no single country, no matter what its fertility, any longer feeds itself, it becomes clearer what this irreversibility means. You cannot opt out of the international division of labor any longer” (376).
The cage ensnares more tightly – and not only because “smaller national publishers are absorbed into gigantic German or Spanish publishing empires,” but because a handful of mega-publishers end up publishing all the textbooks kids read even as budding authors everywhere are subtly persuaded to buy in because of their “instinctive desire to be read by the West and in particular in the United States and in the English language: to be read and to be seen and observed by this particular Big Other” (377). So what are literary historians to do that will not invariably make them simply complicit in all this? Jameson, a little bizarrely I think, argues for a sort of criticism that imagines the world-making possibilities of novels-yet-unwritten-that-one-imagines-as-ultimately-failing-to-liberate. This sort of creative criticism “raises the ante,” according to Jameson, because it helps its audiences recognize the actual “persistence, if insufficiently imagined and radicalized, of current stereotypes of literary history” (381).
Brian Stock, at the University of Toronto, reads the current scene from within the newer traditions of developmental and cognitive psychology and cognitive neuroscience. Work done in these areas suggests that reading has a profound cognitive (and universal) influence on human beings, whose plastic minds are essentially reconfigured by repeated participation in practices of literacy. As Stock sees it, “the only way in which reading can be related to the ubiquitous problem of globalization in communications, without running the risk of new types of intellectual colonization, is by demonstrating that it is in the genetic inheritance for interpreting language in its written or ideographic form that is the truly ‘global’ phenomenon, since it is potentially shared by everyone who can read… [I]f this approach can be agreed upon, the natural partner of globalization will become a scientifically defended pluralism” (406).
Walter Veit, at Monash University, sees the interpretive key as residing in temporality, which can never be linguistically articulated (Paul Ricoeur: “temporality cannot be spoken of in the direct discourse of phenomenology”) except in novelistic narrative, where the arc of the narrative makes some sense of time’s passage and where, following Hayden White, the linguistic operations of rhetorical tropes and figures provide metaphorical access to the otherwise inexpressible. One is left with a more sanguine sense of the future within these terms: both for an analysis of the multiple ways in which the world’s literatures construct time and its passing, and with respect to literary criticism, which is always embedded in the particular and always changing practices of its time and audiences. Such a view is well supplemented by Nirvana Tanoukhi’s claim that the challenge of understanding transnational literature is also foundationally a question of scale and locale and spaces of production.
The work of literary history, and the conceptualization even of its very possibility, is, finally, a representative anecdote for the broader work of the humanities. This is a theme apprehended both by Hayden White, who notes that the questions raised in the symposium reflect the larger conditions of historical knowledge as such, and by David Bleich, who notes the close affinity between the work of literary historians and the broader work of the university (where “scholars have always been involved in the mixing of societies, the sharing of languages and literatures, and the teaching of their findings and understandings,” pg. 497). The university plays a culturally central role in translating other cultures (for students, for the audiences of its research) that is fraught with all the perils of the work of writing intercultural history – hubris, caricature, misapprehension. But the effort to make sense of the wider world, however risky, is also indispensable, if only because the alternatives – unmitigated arrogance and blinkered ignorance – are so much worse.
Sigismund Thalberg’s piano performance tour of the United States prior to the Civil War came at a key point in the nation’s cultural emergence on the world scene. By the 1830’s the United States’ cultural and social elite knew the musical works of Haydn, Beethoven, Mozart, and Bach, but the acquired tastes of the refined classical tradition had not reached the American masses, and European musical refinement was often caricatured. In New York City the Philharmonic had been organized as a voluntary association in 1842, but even by the late 1850’s (on the brink of the Civil War) efforts to expand concerts into mid-day matinees struggled to find audiences. It wasn’t until 1881 that “the Boston Symphony became the nation’s first permanent full-time orchestra” (Horowitz). Meanwhile, as the director of the Parisian Opera had put it in the 1840’s, in language characteristic of the European prejudice: “We look upon America as an industrial country – excellent for electric telegraphs and railroads but not for Art.”
The mismatch between American and European musical sensibilities created a mutual cycle of mistrust and disparagement that did not really begin to crumble until Anton Rubinstein’s tour of the states in the 1870’s. Before Rubinstein, European impresarios often appealed to their publics in ways more reminiscent of circus promotion (as when Jenny Lind was introduced as an “angel,” descended from heaven, by P.T. Barnum), with publicity machines wildly ramped up. But the irony of the Paris Opera director’s comment is that it was precisely America’s midcentury industrialization that enabled its cultural transformation. The frenetic pace of Thalberg’s American concertizing was only possible because he was able to perform in the evening and then often travel all night on the trains, on a rapidly expanded and precisely organized transportation system that gave predictability to one’s efforts to reach small towns like Kenosha and Sandusky and Natchez and Atlanta.
Thalberg’s tour – his first American concert was in November 1856 in New York City; his last, before a sudden and still unexplained return to Europe, took place in Peoria in June 1858 – was a noticeable contrast to the earlier hype of Jenny Lind, partly because his reputation as a European master needed little exaggeration. Training in piano had already emerged as a marker of middle-class respectability, and Thalberg’s sheet music was well known to American music students. News of Thalberg’s status as the most credible European pianist after Franz Liszt had long circulated by the time of his arrival (and of course Liszt’s refusal to tour the United States left the stage to Thalberg’s crowd-pleasing sensibilities).
The generally exuberant reaction Thalberg received from American audiences echoed the enthusiasm Europe had shown for his virtuosity twenty years earlier. Reacting to a Parisian performance given in 1836, one reviewer described Thalberg this way: “From under his fingers there escape handfuls of pearls.” As time passed, the early rapture was revved into a feud, with partisans of Liszt (most notably Berlioz) taking sides against partisans of Thalberg (most notably Mendelssohn). The whole thing came to a head in a famously reported Paris recital where both Liszt and Thalberg played. The March 1837 event, where tickets sold for charity cost an extravagant $8 apiece, featured each playing one of his famous fantasias: Liszt his volcanic transcription of Pacini’s Niobe and Thalberg his subtler version of Rossini’s Moise. The outcome, although history has favored Liszt, was judged a close call at the time. Some viewed the contest as a tie. A 1958 account by Vera Mikol argued that the winner was, “in the eyes of the ladies, Thalberg; according to the critics, Liszt.”
Thalberg’s reputation, though sustained by sold out European performances, faded on the continent, even as his worldwide reach (with concert tours in Russia, Holland, Spain, and Brazil) expanded. Robert Schumann was notoriously hostile, and in his writing used the term “a la Thalberg” as a slur to describe lightweight compositions. Mendelssohn remained an admirer. The sparkling romanticism of Thalberg’s compositions made them simultaneously popular (and stylistically imitated) and critically panned. It is evidence of both impulses that when Jenny Lind launched her American tour, the concert opened with a two-piano performance of Thalberg’s transcription of Norma.
But what made Thalberg extraordinary was not necessarily best displayed in his compositions, and this is undoubtedly why his reputation has so seriously abated. Even in his day his compositions were often criticized for their repetitive impulse to showcase technique. The key, for his audiences, was a compositional trick popularized and perhaps even invented by Thalberg and imitated everywhere: the melodic line was switched thumb to thumb while the other fingers ranged widely across arpeggios above and below in an effect that made the player sound as if he had three hands. Audiences were so impressed with this illusion that in some cities they reportedly stood up to get a better glimpse of his hands on the keys.
The key for his admiring critics, meanwhile, lay in his technique. For Mendelssohn, Thalberg “restores one’s desire for playing and studying as everything really perfect does.” Ernest Legouve wrote that “Thalberg never pounded. What constituted his superiority, what made the pleasure of hearing him play a luxury to the ear, was pure tone. I have never heard such another, so full, so round, so soft, so velvety, so sweet, and still so strong.” Arthur Pougin, memorializing Thalberg at his death, said it was he “who, for the first time we had seen, made the piano sensitive,” which was to say that in the eyes of other players, he had mastered the art of pressing against the limits of the instrument so as to make it sound most like the singing human voice. It was this Thalberg himself was seeking to highlight when he entitled his own piano text, L’Art du chant applique au piano, or The Art of Song Applied to the Piano.
Academic debate continues about the role of Thalberg and the other early European virtuosos who toured the states. Some defend him as representing the necessary first step in tutoring America in musical sophistication, all while softening the more difficult numbers with crowd-pleasing fantasias that classed up songs like “The Last Rose of Summer” and “Home Sweet Home.” Others render a harsher judgment – Mikol closed her 1958 essay on Thalberg with this highly negative assessment: “we should not underestimate the part he played a hundred years ago in delaying our musical coming-of-age.” Part of the sharp discrepancy relates to differing views of the emergence of high culture. R. Allen Lott’s From Paris to Peoria credits the impresario experience with laying the groundwork for a richer American culture, where the audience experience of the classical repertoire was “sacralized” over time. Contra Lawrence Levine and others, who have argued that this period shows how cultural norms were imposed by rich elites as a method of bringing the masses under disciplined control, Lott and Ralph Locke credit not elite control but a widely shared eagerness for intensive aesthetic experiences that transcended class divisions. Still others point to the deeply gendered responses this sort of musical performance elicited – women were not allowed entry into the evening theatre without a male escort, and so Thalberg and others added afternoon matinees where women could attend unaccompanied.
Today Thalberg is mainly forgotten – my own interest in him came from seeing one of his works performed on campus two months ago by a Mississippi musicologist – but in towns across America he and the other touring virtuosos provoked both class antagonisms and the enthralled reactions of those “slain in the spirit,” which makes it difficult to judge either perspective uniquely correct. In Boston a huge controversy erupted when Thalberg’s manager sought to limit ticket sales to the upper class (by briefly requiring patrons to provide a “correct address”); the papers had a field day about foreign snobbery and defended music for its democratizing potential.
Meanwhile, audiences were enthralled and carried to heights of emotional ecstasy by the actual concerts; the press accounts often noted that listeners wept. Thalberg managed to achieve this response, amazingly, through pure technique, without resort to the usual theatrics; as a Boston reviewer put it, there was “no upturning of the eyes or pressing of the hand over the heart as if seized by a sudden cramp in that region, the said motions caused by a sudden fit of thankfulness.” Others, sometimes in small towns but even in New York City, already the nation’s cultural capital, reacted with disdain, a fact that led one of the city’s preeminent critics to ask, “Why will mere glitter so far outweigh solid gold with the multitude?” Still others attended not to hear the music but to display their social status.
Such reactions persist to this day in the nation’s symphony halls, but even as audiences reproduce the prejudices of their time, it is hard not to be moved by the more singular reaction of that same New York correspondent who, upon hearing Thalberg play the opening of Beethoven’s Emperor, said that even as “it fell dead upon the audience, …I drank it in as the mown grass does the rain. A great soul was speaking to mine, and I communed with him.”
SOURCES: R. Allen Lott, From Paris to Peoria: How European Piano Virtuosos Brought Classical Music to the American Heartland (Oxford: Oxford University Press, 2003); Vera Mikol, “The Influence of Sigismund Thalberg on American Musical Taste, 1830-1872,” Proceedings of the American Philosophical Society 102.5 (20 October 1958), pgs. 464-468; Joseph Horowitz, online review of Vera Lawrence’s Strong on Music: The New York Music Scene in the Days of George Templeton Strong, vol. 3 (Chicago: University of Chicago Press, 1999), at http://www.josephhorowitz.com; Lawrence Levine, Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America (Cambridge, Mass.: Harvard University Press, 1988); Ralph Locke, “Music Lovers, Patrons, and the ‘Sacralization’ of Culture in America,” 19th-Century Music 17 (Fall 1993), pgs. 149-173; E. Douglas Bomberger, “The Thalberg Effect: Playing the Violin on the Piano,” Musical Quarterly 75.2 (Summer 1991), pgs. 198-208.
I’m not quite finished with it yet, but Paul Woodruff’s recent The Necessity of Theatre: The Art of Watching and Being Watched (Oxford: Oxford University Press, 2008) makes a compelling case for treating theatre as central to the human experience. Woodruff’s point is not to reiterate the now-familiar claim that theatrical drama importantly mirrors human experience, although I assume he would agree with thinkers like Kenneth Burke (who insisted in his own work that theatricality was not a metaphor for human life, but that our interactions are fundamentally dramatically charged). Rather, theatre, which he defines (repeatedly) as “the art by which human beings make or find human action worth watching, in a measured time and place” (18), enacts much of what is basic to human sociability.
Theatre and life are about watching and the maintenance of appropriate distance, and the way in which collective observation provides validation for human interaction (such as in the ways public witness validates a marriage ceremony or makes justice, itself animated by witnesses, collectively persuasive).
The book is a little frustrating – Woodruff is a philosopher, and the book starts by discovering the river (and making its boldest claim up front) and then guiding the reader through all the connected tributaries, which can be a little tedious when the journey starts to feel less like a riverboat cruise and more like navigating sandbars. That is, the project proceeds too fully as a definitional typology of theatre, an approach that performatively contradicts the most important thing about theatre itself: finding audiences and keeping them interested. Woodruff also has a tendency to keep announcing how important his claims are: “Formally, however, I can point out already that [my] definition has an elegance that should delight philosophers trained in the classics” (39). “This is bold” (67). “My proposed definition of theatre is loaded” (68). And so on.
But along the way Woodruff says a lot of interesting things. Some examples:
• “Justice needs a witness. Wherever justice is done in the public eye, there is theatre, and the theatre helps make the justice real” (9).
• “People need theatre. They need it the way they need each other – the way they need to gather, to talk things over, to have stories in common, to share friends and enemies. They need to watch, together, something human. Without this…, well, without this we would be a different sort of species. Theatre is as distinctive of human beings, in my view, as language itself” (11).
• “Politics needs all of us to be witnesses, if we are to be a democracy and if we are to believe that our politics embody justice. In democracy, the people hold their leaders accountable, but the people cannot do this if they are kept in the dark. Leaders who work in closed meetings are darkening the stage of public life and they are threatening justice” (23).
• “The whole art of theatre is the one we must be able to practice in order to secure our bare, naked cultural survival” (26).
• “A performance of Antigone has more in common with a football game than it does with a film of Antigone” (44).
I began by cheating, I suppose, by reading the epilogue, where Woodruff notes: “I do not mean this book to be an answer to Plato and Rousseau…, because I think theatre in our time is not powerful enough to have real enemies. Theatre does have false friends, however, and they would confine it to a precious realm in the fine arts. We need to pull theatre away from its false friends, but we have a greater task. We need to defend theatre against the idea that it is irrelevant, that it is an elitist and a dying art, kept alive by a few cranks in a culture attuned only to film and television. I want to support the entire boldness of my title: The Necessity of Theatre” (231).
When Ray Kurzweil published his bestseller The Singularity is Near in 2005, the skeptical response reverberated widely, but his track record of accurate prediction has been uncanny. In the late 1980’s it was Kurzweil who anticipated that a computer could soon be programmed to defeat a human opponent in chess; by 1997 Deep Blue was beating Garry Kasparov. His prediction that within several decades humans will regularly assimilate machines to the body seemed, as Michael Skapinker recently put it, “crazy,” “except that we are already introducing machines into our bodies. Think of pacemakers – or the procedure for Parkinson’s disease that involves inserting wires into the brain and placing a battery pack in the chest to send electric impulses through them.”
Kurzweil obviously has something more dramatic in mind than pacemakers. The term singularity describes both the center of a black hole, where the universe’s laws don’t hold, and that turning point in human history when the forward momentum of machine development (evolution?) will have so quickly accelerated as to outpace human brainpower and arguably human controls. For Kurzweil the potential implications are socially and scientifically transformational: as Skapinker catalogs them, “We will be able to live far longer – long enough to be around for the technological revolution that will enable us to live forever. We will be able to resist many of the diseases, such as cancer, that plague us now, and ally ourselves with digital versions of ourselves that will become increasingly more intelligent than we are.”
Kurzweil’s positions have attracted admirers and detractors. Bill Gates seems to be an admirer (Kurzweil is “the best person I know at predicting the future of artificial intelligence”). Others have criticized the claims as hopelessly exaggerated; Douglas Hofstadter admires elements of the work but has also said it presents something like a mix of fine food and “the craziest sort of dog excrement.” A particular criticism is that much of Kurzweil’s claim rests on what critics call the “exponential growth fallacy.” As Paul Davies put it in a review of The Singularity is Near: “The key point about exponential growth is that it never lasts. The conditions for runaway expansion are always peculiar and temporary.” Kurzweil responds that the conditions for a computational explosion are essentially unique; as he put it in an interview: “what we see actually in these information technologies is that the exponential growth associated with a particular paradigm… may come to an end, but that doesn’t stop the ongoing exponential progression of information technology – it just yields to another paradigm.” Kurzweil’s projection of the trend lines has him predicting that by 2027, computers will surpass human intelligence, and by 2045 “strictly biological humans won’t be able to keep up” (qtd. in O’Keefe, pg. 62).
Now Kurzweil has been named chancellor of a new Singularity University, coordinated by a partnership between NASA and Google. The idea is simultaneously bizarre and compelling. The institute is roughly modeled on the International Space University in Strasbourg, where the idea is to bring together Big Thinkers who can, by their interdisciplinary conversations and collaboration, tackle the impossible questions. One wonders whether the main outcome will be real research or wannabe armchair metaphysical speculation – time will tell, of course. NASA’s role seems to be simply that it has agreed to let the “university” rent space at its Moffett Field Ames Research Center facility in California. The money comes from Peter Diamandis (X Prize Foundation chair), Google co-founder Larry Page, Moses Znaimer (the media impresario), and tuition revenue (the nine-week program is charging $25,000, scholarships available). With respect to the latter the odds seem promising – in only two days 600 potential students applied.
The conceptual issues surrounding talk of a Singularity go right to the heart of the humanistic disciplines, starting with the way it complicates anew, and at the outset, what one means by the very term human. The Kurzweil proposition forces the issue by postulating that the exponential rate of information growth and processing capacity will finally result in a transformational break. Consider that in the 13th century a scholar could still aspire to stay abreast of all human knowledge – Europe’s largest library, housed at the Sorbonne, held only 1,338 volumes – and contrast that with the difficulty one would encounter today in simply keeping up with, say, research on William Shakespeare or Abraham Lincoln; the age-old humanistic effort to induce practices of close reading and thoughtful contemplation can seem anachronistically naive.
One interesting approach for navigating these issues is suggested in a 2007 essay by Mikhail Epstein. Epstein suggests that the main issue for the humanities lies less in the sheer quantity of information and its potentially infinite trajectory (where, as Kurzweil has implied, an ever-expanding computational mind finally brings order to the Universe) than in the already evident mismatch between the finite human mind and the accumulated informational inheritance of humanity. Human beings live for a short period of time, and within the limited timeline of even a well-lived life, the amount of information one can absorb and put to good use will always be easily swamped by the accumulated knowledge of the centuries. And this is a problem, moreover, that worsens with each generation. Epstein argues that this results in an ongoing collective trauma, first explained by Marxist theory as inducing both vertigo and alienation, then by the existentialists as an inevitability of the human condition, and now by poststructuralists who (and Epstein concedes this is an oversimplification) take reality itself “as delusional, fabricated, or infinitely deferred” (19). Epstein sees all this as evidencing the traumatizing incapacity of humans to comprehend in any detailed way their own collective history or thought. He finds the resulting postmodern sensibility in such aesthetic traditions as Russian conceptualism, “which from the 1970s to the 1990s was occupied with cliches of totalitarian ideology,” and which “surfaced in the poetry and visual art of Russian postmodernism” in ways “insistently mechanical, distant, and insensitive” (21). There and elsewhere, “the senses are overwhelmed with signs and images, but the intellect no longer admits and processes them” (22).
The problem to which Epstein calls attention – the growing gap between a given human and the total of humanity – is not necessarily solved by the now well-established traditions that have problematized the Enlightenment sense of a sovereign human. In Epstein’s estimation, the now-pluralized sense of the human condition brought into being by multiculturalism has only accentuated the wider social trends toward particularization and hyper-specialization: the problem is that “individuals will continue to diversify and specialize: they will narrow their scope until the words humans and humanity have almost nothing in common” (27).
The wider work on transhumanism and cyborg bodies reflects a longer tradition of engagement with the challenge posed by technological transformation and the possibilities it presents for physical reinvention. At its best, and in contrast to the more culturally salient cyborg fantasies enacted by Star Trek and the Terminator movies, this work refuses the utopian insistence in some of the popular accounts that technology will fully eradicate disease, environmental risk, war, and death itself. This refusal can proceed by a range of strategies, one of which is to call attention to the essentially religious impulses in the work, all in line with long-standing traditions of intellectual utopianism that imagine wholesale transformation as an object to be greatly desired. James Carey used to refer to America’s “secular religiosity,” and in doing so followed Lewis Mumford’s critique of the nation’s “mechano-idolatry” (qtd. in Dinerstein, pg. 569). Among the cautionary lessons of such historical contextualization is the reminder of how often thinkers like Kurzweil present their liberatory and also monstrous fantasies as inevitabilities simply to be managed in the name of human betterment.
SOURCES: Michael Skapinker, “Humanity 2.0: Downsides of the Upgrade,” Financial Times, 10 February 2009, pg. 11; Mikhail Epstein, “Between Humanity and Human Beings: Information Trauma and the Evolution of the Species,” Common Knowledge 13.1 (2007), pgs. 18-32; Paul Davies, “When Computers Take Over: What If the Current Exponential Increase in Information-Processing Power Could Continue Unabated,” Nature 440 (23 March 2006); Brian O’Keefe, “Check One: __ The Smartest, or __ The Nuttiest Futurist on Earth,” Fortune, 14 May 2007, pgs. 60-69; Myra Seaman, “Becoming More (Than) Human: Affective Posthumanisms, Past and Future,” Journal of Narrative Theory 37.2 (Summer 2007), pgs. 246-275; Joel Dinerstein, “Technology and Its Discontents: On the Verge of the Posthuman,” American Quarterly (2006), pgs. 569-595.
Several of the obituaries for Harold Pinter, the Nobel Prize-winning playwright who died on Christmas Eve, see the puzzle of his life as centered on the question of how so happy a person could remain so consistently angry. The sense of anger, or perhaps sullenness is the better word, arises mainly from the diffidence of his theatrical persona, from the independence of his best characters even, as it were, from himself, and of course from his increasingly assertive left-wing politics. The image works, despite its limitations, because even as he suffered in recent years from a gaunting cancer he remained active and in public view, becoming something of a spectral figure. And of course many who were not fans of his theatrical work (from the hugely controversial The Birthday Party, to the critically acclaimed The Caretaker, and then further forays into drama and film) mainly knew him through his forceful opposition to Bush and Blair, their Iraq policies, and the larger entanglements of American empire.
But Pinter, and this is true I think of all deeply intellectual figures, cannot be reduced to the terms provocateur or leftist. In this case, to be sure, simple reductions are wholly inadequate to the task given his very methods of work: one of his most abiding theatrical legacies is his insistence that dramatic characters are inevitably impenetrable – they owe us no “back story,” nor are their utterances ever finally comprehensible, any more than are our interactions in the real world of performed conversation. And so Pinter set characters loose who even he could not predict or control, an exercise that often meant his productions were themselves angering as audiences struggled to talk sense into the unfolding stories. As the Economist put it, “his characters rose up randomly… and then began to play taunting games with him. They resisted him, went their own way. There was no true or false in them. No certainty, no verifiable past… Accordingly, in his plays, questions went unanswered. Remarks were not risen to.”
So what does all this say about the ends of communication? For Pinter they are not connected to metaphysical reflection or understanding (this was Beckett’s domain; it is somehow fitting that Pinter’s last performance was in Beckett’s Krapp’s Last Tape, played from a wheelchair), but to simple self-defense, a cover for the emptiness underneath (Pinter: “cover for nakedness”), a response to loneliness where silence often does just as well as words. And so this is both a dramatic device (the trait that makes a play Pinteresque) and a potentially angering paradox: “Despite the contentment of his life he felt exposed to all the winds, naked and shelterless. Only lies would protect him, and as a writer he refused to lie. That was politicians’ work, criminal Bush or supine Blair, or the work of his critics” (Economist). Meanwhile, the audience steps into Pinter’s worlds as if into a subway conversation; as Cox puts it, “The strangers don’t give you any idea of their backgrounds, and it’s up to the eavesdropper to decide what their relationships are, who’s telling the truth, and what they’re talking about.”
The boundaries that lie between speaking and silence are policed by timing, and Pinter once said he learned the value of a good pause from watching Jack Benny perform at the Palladium in the early 1950s. One eulogist recalls the “legendary note” Pinter once sent to the actor Michael Hordern: “Michael, I wrote dot, dot, dot, and you’re giving me dot, dot.” As Siegel notes: “It made perfect sense to Hordern.” The shifting boundaries of communication, which in turn provide traces of the shifting relations of power in a relationship, can devolve into cruelty or competition where both players vie for one-up status even as all the rest disintegrates around them. As his biographer, Michael Billington, put it, “Pinter has always been obsessed with the way we use language to mask primal urges. The difference in the later plays is not simply that they move into the political arena, but they counterpoint the smokescreen of language with shocking and disturbing images of torture, punishment, and death.” At the same time, and this because Pinter was himself an actor and knew how to write for them, the written texts always seemed vastly simpler on paper than in performance – and this is not because simple language suggests symbolic meaning (Pinter always resisted readings of his work that found symbolic power in this or that gesture) but because the dance of pauses and stutters and speech ends up enacting scenes of apparently endless complexity.
For scholars of communication who attend to his work, then, Pinter poses interesting puzzles, and even at their most cryptic his plays bump up against the possibilities and limits of language. One such riddle, illuminated in an essay by Dirk Visser, is that while most Pinter critics see his plays as revealing the failures of communication, Pinter himself refused to endorse such a reading, which he said misapprehended his efforts. As one moves through his pieces, the realization slowly emerges (or, in some cases, arrives with the first line) that language is not finally representational of reality, nor even instrumental (where speakers say certain things to achieve certain outcomes). Pinter helps one see how language can both stabilize and unmoor meaning, even in the same instant (this is the subject of an interesting analysis of Pinter’s drama written by Marc Silverstein), and his work both reflects and straddles the transition from modernism to postmodernism he was helping to write into existence (a point elaborated by Varun Begley).
His politics were similarly complicated, I think, a view that runs contrary to the propagandists who simply read him as a leftist traitor, and a fascist at that. His attacks on Bush/Blair were often paired in the press with his defense of Milosevic, as if together they implied a sort of left-wing fascism in which established liberal power is always wrong. But his intervention in the Milosevic trial was not to defend the war criminal but to argue for a fair and defensible due process, and this insistence on the truth of a thing was at the heart of his compelling Nobel address. Critics saw his hyperbole as itself a laughable performative contradiction (here he is, talking about the truth, when he hopelessly exaggerates himself). I saw a long interview done with Charlie Rose, replayed at Pinter’s death, where Rose’s impulse was to save Pinter from this contradiction, and from himself (paraphrasing Rose): “Surely your criticism is not of all the people in America and Britain, but only made against particular leaders.” “Surely you do not want to oversimplify things.” Pinter agreed he was not accusing everyone of war crimes but also refused to offer broader absolution, since his criticism was of a culture that allowed and enabled lies as much as of leaders who perpetrated them without consequence. Bantering with Rose, that is to say, he refused to take the bait, and the intentional contradictions persisted. His Nobel speech (which was videotaped for delivery because he could not travel to Stockholm and is thus available for viewing online) starts with this compelling paragraph:
In 1958 I wrote the following: “There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false. A thing is not necessarily either true or false; it can be both true and false.” I believe that these assertions still make sense and do still apply to the exploration of reality through art. So as a writer I stand by them but as a citizen I cannot. As a citizen I must ask: What is true? What is false?
What was so angering for many was Pinter’s suggestion that the American leadership (and Blair too) had committed war crimes that had first to be recognized and tallied, and the perpetrators then held to account:
The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile. The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven. Hundreds of thousands of deaths took place throughout these countries. Did they take place? And are they in all cases attributable to US foreign policy? The answer is yes they did take place and they are attributable to American foreign policy. But you wouldn’t know it. It never happened. Nothing ever happened. Even while it was happening it wasn’t happening. It didn’t matter. It was of no interest. The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them. You have to hand it to America. It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good. It’s a brilliant, even witty, highly successful act of hypnosis.
The argument is offensive to many (when the Nobel was announced, the conservative critic Roger Kimball said it was “not only ridiculous but repellent”), though for a playwright most attentive to the obscuring mask and the sometimes savage operations of power beneath it, it is all of a piece. McNulty: “But for all his vehemence and posturing, Pinter was too gifted with words and too astute a critic to be dismissed as an ideological crank. He was also too deft a psychologist, understanding what the British psychoanalyst D. W. Winnicott meant when he wrote that ‘being weak is as aggressive as the attack of the strong on the weak’ and that the repressive denial of personal aggressiveness is perhaps even more dangerous than ranting and raving.”
As the tributes poured in, the tension between the arrogance (a writer refuses to lie) and the humility (he felt exposed to all the winds, naked and shelterless) arises again and again. The London theatre critic John Peter gets at this when he notes in passing how Pinter “doesn’t like being asked how he is.” And then, in back-to-back sentences: “A big man, with a big heart, and one who had the rare virtue of being able to laugh at himself. Harold could be difficult, oh yes.” David Wheeler (at the ART in Cambridge, Massachusetts): “What I enjoyed [of my personal meeting with him] was the humility of it, and his refusal to accept the adulation of us mere mortals.” Michael Billington: “Pinter’s politics were driven by a deep-seated moral disgust… But Harold’s anger was balanced by a rare appetite for life and an exceptional generosity to those he trusted.” Ireland’s Sunday Independent: “Pinter was awkward and cussed… It was the cussedness of massive intellect and a profound sense of outrage.”
Others were more unequivocal. David Hare: “Yesterday when you talked about Britain’s greatest living playwright, everyone knew who you meant. Today they don’t. That’s all I can say.” Joe Penhall: Pinter was “my alpha and beta… I will miss him and mourn him like there’s no tomorrow.” Frank Gillen (editor of the Pinter Review): “He created a body of work that will be performed as long as there is theater.” Sir Michael Gambon: “He was our God, Harold Pinter, for actors.”
Pinter’s self-selected eulogy conveys, I think, the complication – a passage from No Man’s Land – “And so I say to you, tender the dead as you would yourself be tendered, now, in what you would describe as your life.” Gentle. Charitable. But also a little mocking. A little difficult. And finally, inconclusive.
SOURCES: Beyond Pinter’s own voluminous work, of course – Marc Silverstein, Harold Pinter and the Language of Cultural Power (Bucknell UP, 1993); Varun Begley, Harold Pinter and the Twilight of Modernism (U Toronto P, 2005); “Harold Pinter,” Economist, 3 January 2009, pg. 69; Ed Siegel, “Harold Pinter, Dramatist of Life’s Menace, Dies,” Boston Globe, 26 December 2008, pg. A1; John Peter, “Pinter: A Difficult But (Pause) Lovely Man Who Knew How to Apologise,” Sunday Times (London), 28 December 2008, pgs. 2-3; Gordon Cox and Timothy Gray, “Harold Pinter, 1930-2008,” Daily Variety, 29 December 2008, pg. 2; Charles McNulty, “Stilled Voices, Sardonic, Sexy: Harold Pinter Conveyed a World of Perplexing Menace with a Vocabulary All His Own,” Los Angeles Times, 27 December 2008, pg. E1; Dirk Visser, “Communicating Torture: The Dramatic Language of Harold Pinter,” Neophilologus 80 (1996): 327-340; Matt Schudel, “Harold Pinter, 78,” Washington Post, 26 December 2008, pg. B5; Michael Billington, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 15; Esther Addley, “Harold Pinter 1930-2008,” Guardian (London), 27 December 2008, pg. 14; Frank Gillen, “Farewell to an Artist, Friend,” St. Petersburg Times (Florida), 4 January 2009, pg. 4E; “Unflagging in His Principles and Unrivalled in His Genius,” Sunday Independent (Ireland), 28 December 2008; Dominic Dromgoole, “In the Shadow of a Giant,” Sunday Times (London), 28 December 2008, pgs. 1-2; Mel Gussow and Ben Brantley, “Harold Pinter, Whose Silences Redefined Drama, Dies at 78,” New York Times, 26 December 2008, pg. A1.
I enjoyed seeing The Curious Case of Benjamin Button, not because the film finally coheres into a memorable totality but because the sum of the parts ends up greater than the whole: vivid moments linger after the grand narrative arc fades.
The premise on which the story is based, the idea of an anomalous child born physically old who dies decades later after complete disappearance into infancy, is constrained by several challenges, some of which are skillfully handled (the old boy grows up in a retirement home, and so attracts no special notice) and others of which strain credulity (including the fact that his former lover, having been essentially abandoned by him, ends up providing years of now-maternal attention without her new husband or daughter ever thinking to ask about the young child for whom the mother now cares as she takes up residence in the same rest home). [I am still unpersuaded of the narrative plausibility of this turn of events – the film implies that the cognitively vacant infant(old) Benjamin reconnects because Daisy’s name is all over his diary, but everything in his prior behavior and decision to leave makes implausible the idea that he would want the diary to serve as a child’s name tag, enabling a return to her or the imposition of his own care onto her later life. And if, by the end, Benjamin can’t remember who he is or anything about his past, why should we believe that his journals and scrapbooks would have been so well preserved?]
His biological father follows Benjamin’s development from a distance but, oddly, when the two reconnect at the dying father’s invitation, not much time is spent dwelling on the biological mysteries of the reverse aging. The fact that Benjamin’s strange trajectory is never discovered (in contrast to the original story, where the Methuselah story makes the papers) grants the terribly abbreviated end-of-life sequences a kind of melancholic privacy – the teen-then-boy-then-infant never attracts anyone’s interest and no one apparently ever connects the dots, but the benefit of this is that the more mundane moments of early/late life take on an unexpected sadness, such as the quiet passing observation of the moment when the boy loses his capacity for speech. It hadn’t really occurred to me until that instant that this would be so haunting a moment.
The idea that old age is a sort of reversion to infancy is cruel, and apart from those whose physical or mental infirmities cause total end-of-life dependency on others, I always find myself repelled by and even afraid of the mentality that leads minimum-wage nursing home attendants to treat their clients as addled or stupid. The idea of ending my life in a nursing home is less jarring to me, per se, than the idea that, having lived a life of growth and experience and (hopefully ongoing) intellectual stimulation, I would be reduced finally to having some 20-something scream at me to finish my oatmeal. I am skeptical that any death is a good one, and I know many end-of-life professional caregivers are angels in disguise, but it is the possibility of old-age condescension as much as isolation that terrifies me. But Button, despite his final senility, is able to die a good death, lovingly cared for to the end in a mode of caregiving that recalls what caring for someone with Alzheimer’s must entail: is the final gleam in his infant eyes a last cognitive reaching out or just the last biological gasp? And are the senile child’s sad efforts to remember the piano he played for so long a failure or a final point of human contact, or both?
Button’s awful choice to abandon his Daisy and child at a time that seems far too early – after all, Pitt is in his prime, and would any child really think to notice, for several more years yet, that her father is getting younger all the time? – raises questions larger than the unique temporal disconnection haunting Benjamin’s relationship to the people he loves. The film evokes the larger ways in which so much of human destiny is shaped by the randomness of timing and the disconnections that keep people apart. The audience is rather beaten over the head with this theme, especially in the backwards-edited scene recounting the accident that ends Daisy’s performing career, but it pops up everywhere. And even with respect to Daisy the issues raised by disconnection are interesting – a scene where she and Benjamin have physically moved within the realm of sexual plausibility ends instead in a disappointing failure to connect, produced not by their respective ages or even by their sexual histories (which by this point in the midnight park have come into sync), but by the awkwardness of seeing a child-as-lover (for Benjamin in this moment an unbridgeable chasm).
The film is bracketed at both ends by disasters – World War I and Katrina – and their denouement, but their visual enactment is oblique and produces its own temporal reversals. World War I, today remembered (if at all) as a war of horrifically widespread and anonymous slaughter, is reenacted through the very particular personal drama of a blind clockmaker and his wife who lose their beloved child to battle, and to the extent the war evokes mass drama, it is the exuberance of its conclusion more than the horror of its killing machines that we witness. And Katrina, which we remember in part for the indignities of the preventable deaths it caused, is here recalled within the confines of a hospital that, while impersonal (under threat of the advancing storm), is also a place of close and immediate care. [Tangent: Button is an example of how the twists-and-turns of film industrial production can have significant consequence. The movie only got made, according to an account in the New York Times, because Louisiana offers big movie tax breaks to production companies. This, in turn, caused the story to shift to New Orleans, and this has yielded a film wholly unimaginable in its originally anticipated location of Baltimore, the setting of the original short story].
Benjamin, born on the day of the Armistice, is raised and dies in a house wholly comfortable with frequent death, an upbringing at odds with a contemporary milieu where even adults are so often separated from end-of-life experiences that when they finally start to happen with friends and family their accompanying rituals and significance seem all the more jarring and derailing. A baby in an old man’s body, physically Caucasian but raised by parents of color, made mightily richer by the manufacture of something as tiny as a button, a boy who attends a faith healing where the fake patter turns out to inspire the boy to walk without changing him physically (one might say the lie actually has healing power) but kills the minister, an American who works many years of the 20th century in Russia or on the water and who wins his own battle with the Nazi subs not on a carrier but on a tugboat, a man who for much of the story seems not self-reflective at all but (it is revealed) kept a detailed daily journal for most of his life – much more than time is narratively reversed. The familiar is thus made strange.
The opening of the film, with its clock-that-runs-backward allegory, is intriguing too. The idea of God as a kind of watchmaker – who has set into motion a universe of logically connected causes and effects and who is the lord of time itself – was already in circulation 50 years before Darwin published Origin of Species (now 150 years old), and it provides a persistent commonsensical response elaborated today by Intelligent Design Theory. Read this way, one might see Benjamin’s magical appearance on earth as a divine effort to awaken our sensibilities and unnerve our comfortable sense of time passing.
Or one might take an opposite tack. It was Richard Dawkins back in the 1980s who worked to turn the designer idea on its head, arguing for a Blind Watchmaker – the idea that a universe may be ordered in ways reflective not of a central intelligence but of a universally available process (here, natural selection). In Benjamin Button the blind clockmaker is visually but not cognitively impaired, and his grand backward-running clock is not an error but a commemoration of possibilities lost. Read this way, Benjamin’s case is more curious than compelling, evidence of the oddities produced by evolutionary caprice.
The F. Scott Fitzgerald short story (written in 1922) on which the film is very loosely based reads more like a fable on medicalization (part of the problem, it seems, may be that in 1860 the Buttons decide to have the birth in a hospital instead of at home) than the allegory of aging and dying that structures the film. And in the story Benjamin is born talking and with not just the body of an old man but his sensibilities too (“See here, if you think I’m going to walk home in this [baby] blanket, you’re entirely mistaken,” he says hours after birth). It is inevitably mentioned that the film bears virtually no relationship to the Fitzgerald story; having just read it, I think this fact is to the credit of the film, whose melancholic aftertaste is far sweeter than the sense of absurdity and only occasional sadness induced by Fitzgerald’s original tale.
Last week the American Academy of Arts and Sciences released a long-anticipated prototype of its Humanities Indicators project. The initiative – organized a decade ago by the American Council of Learned Societies, the National Endowment for the Humanities, and the National Humanities Alliance, and funded by the Hewlett and Mellon Foundations – responds to the accumulating sense that (and I guess this is ironic) the humanities haven’t paid enough attention to quantifying their impact and history. As Roger Geiger notes, “gathering statistics on the humanities might appear to be an unhumanistic way to gain understanding of its current state of affairs.” But noting the value of a fuller accounting, the HI project was proposed as a counterpart to the Science and Engineering Indicators (done biennially by the National Science Board), which have helped add traction to the now widely recognized production crisis in the so-called STEM disciplines.
The Chronicle of Higher Education summarized the interesting findings this way (noting that these were their extrapolations; the Indicators simply present data without a narrative overlay apart from some attached essays):
In recent years, women have pulled even with men in terms of the number of graduate humanities degrees they earn but still lag at the tenure-track job level. The absolute number of undergraduate humanities degrees granted annually, which hit bottom in the mid-1980s, has been climbing again. But so have degrees in all fields, so the humanities’ share of all degrees granted in 2004 was a little less than half of what it was in the late 1960s.
This published effort is just a first step, and the reported data mainly repackage, usefully, information gleaned from other sources (such as the Department of Education and the U.S. Bureau of Labor Statistics). Information relating to community colleges is sparse for now. Considerably more original data were generated by a 2007-2008 survey, and those will be added to the website in coming months.
The information contained in the tables and charts confirms trends long suspected and anecdotally reported at the associational level: the shares of credit hours, majors, and faculty hires connected to the humanistic disciplines have fallen dramatically. The percentage of faculty hired into tenure lines, which dropped most significantly in the late 1980s and 1990s, is still dropping today, though more modestly. Perhaps most telling, if a culture can be said to invest in what it values, is the statistic that in 2006, "spending on humanities research added up to less than half a percent of the total devoted to science and engineering research" (Howard). As Brinkley notes, in 2007, "NEH funding… was approximately $138.3 million – 0.5 percent of NIH funding and 3 percent of NSF… [And] when adjusted for inflation, the NEH budget today is roughly a third of what it was thirty years ago." Even worse: "[T]his dismal picture exaggerates the level of support for humanistic research, which is only a little over 13% of the NEH program budget, or about $15.9 million. The rest of the NEH budget goes to a wide range of worthy activities. The largest single outlay is operating grants for state humanities councils, which disburse their modest funds mostly for public programs and support of local institutions." And from private foundations, "only 2.1 percent of foundation giving in 2002 went to humanities activities (most of it to nonacademic activities), a 16% relative decline since 1992." Meanwhile, university presses are in trouble, and libraries are struggling to sustain the growth of their holdings.
Other information suggests interesting questions. For instance: why did the national production of humanities graduates climb so sharply in the 1960s (doubling between 1961 and 1966 alone)? Geiger attributes the bubble to circa-1960s disillusionment with the corporate world, intellectual energy in the humanistic disciplines, the fact that a humanities degree often provided an employment entree for women (especially into careers in education), and a booming economy that made jobs plentiful regardless of one's academic training. After 1972, Geiger argues, all these trends reversed: the disciplines became embroiled in theoretical disputes and thus less intellectually compelling for new students (Big Theory attracted some but arguably antagonized more), universities themselves became targets of disillusion, fast-expanding business schools became a more urgent source of competition, and so on. Today, although enrollments are booming across the board in American universities, the humanities hold steady at roughly 8% of B.A. degrees granted, which may mean the collapse has reached bottom.
One interesting suggestion is posed by David Laurence, who reads the Indicators as showing that the nation has produced a "humanities workforce," a notion that "makes more readily apparent how the functioning of key cultural institutions and significant sectors of the national economy depends on the continued development and reproduction of humanistic talent and expertise." This infrastructure includes (as listed by Laurence) schools and teachers, libraries, clergy, writers, editors, museums, arts institutions, theater and music, publishing, and entertainment and news (the latter involving the production of books, magazines, films, TV, radio, and Internet content). All this gives some grounds for confidence: humanities programs continue to attract brilliant students, good scholarship is still produced, and the "'rising generation' of humanities scholars is eager to engage directly with publics and communities" (Ellison), implying that the public humanities may grow further. An outreach focus is a double-edged sword for humanists, of course, but it might improve the poor standing university humanities programs have with, for example, state funding councils.
SOURCES: Jennifer Howard, "First National Picture of Trends in the Humanities Is Unveiled," Chronicle of Higher Education, 16 January 2009, p. A8; Jennifer Howard, "Early Findings From Humanities-Indicators Project Are Unveiled at Montreal Meeting," Chronicle of Higher Education, 18 May 2007, p. A12; and essays attached to the AAAS Humanities Indicators website, including Roger Geiger, "Taking the Pulse of the Humanities: Higher Education in the Humanities Indicators Project"; David Laurence, "In Progress: The Idea of a Humanities Workforce"; Alan Brinkley, "The Landscape of Humanities Research and Funding"; and Julie Ellison, "This American Life: How Are the Humanities Public?"