Sausages, enlightenment, and "critical thinking"

I was very interested to read Elizabeth Scalia’s piece about truth, relativism, and critical thinking. And I see from the lively responses to the post that I was not the only one. Personally, whenever I hear the phrase “critical thinking,” I tend to break out in a bit of a sweat. I do not like the phrase. Whenever people use it, I tend to think they mean . . . something else. Why? Well, it’s a long story. Have you got a minute—or, rather, twenty minutes?

Let me start with Otto von Bismarck, a chap who would also have cast a cold eye upon the phrase “critical thinking,” had he ever chanced to encounter it. “We must never,” Bismarck warned, “look into the origins of laws or sausages.” Sage advice, I’ve always thought, but how much at odds it is with the dominant current of modern thought, which is to say Enlightenment thought, which is to say the sort of thought that champions phrases like “critical thinking.” Immanuel Kant, a great hero of the Enlightenment, summed up the alternative to Bismarck’s counsel when, in an essay called “What is Enlightenment?,” he offered as its motto the imperative “Sapere Aude”: “Dare to know!” Enlightened man, Kant thought, was the first real adult: the first to realize his potential as an autonomous being—a being, as the etymology of the word implies, who “gives the law to himself.” As Kant stressed, this was a moral as well as an intellectual achievement, since it involved courage as much as insight: courage to put aside convention, tradition, and superstition (how the three tended to coalesce for Enlightened thinkers!) in order to rely for guidance on the dictates of reason alone.

Bismarck’s observation cautions reticence about certain matters; it implies that about some things it is better not to inquire too closely. What Walter Bagehot said about the British monarchy—“We must not let in daylight upon magic”—has, from this point of view, a more general application. The legend “Here be monsters” that one sees on certain antique maps applies also to certain precincts of the map of our moral universe.

Enlightened man, by contrast, is above all a creature who looks into things: he wants to “get to the bottom” of controversies, to dispel mysteries, to see what makes things “tick,” to understand the mechanics of everything from law to sausages, from love to society. Who has the better advice, Bismarck or Kant?

Of course, it is not a simple choice. For one thing, it might be argued that Kant’s own attitude toward the imperative “Dare to Know!” was complex. In a famous passage toward the beginning of The Critique of Pure Reason, for example, Kant tells us that he was setting limits to reason in order to make room for faith. Exactly what Kant meant by this . . . what to call it? this admission? this boast? this concession? Well, whatever Kant meant by his invocation of faith, it has been an abiding matter of debate. Nevertheless, it is fair to say that Kant’s “critical philosophy” is itself a monument of Enlightenment thought, as much in its implied commendation of the “critical attitude” as in the specific philosophical bureaucracy he recommends.

Today, we can hardly go to the toilet without being urged to cultivate “critical thinking.” Which does not mean, I hasten to add, that we are a society of Kantians.

Nevertheless, what we are dealing with here is an educational watchword, not to say a cliché, that has roots in some of the Enlightenment values that Kant espoused. It’s a voracious, quick-growing hybrid. A search for the phrase “critical thinking” using the Google search engine brings up 2,290,200 references in 0.08 seconds. The first match, God help us, is to something called “The Critical Thinking Community,” whose goal is “to promote essential change in education and society through the cultivation of fair-minded critical thinking.” (Why is it, I wonder, that the conjunction of the phrase “critical thinking” with the word “community” is so reliably productive of nausea?)

Everywhere you look, in fact, you will find the virtues of “critical thinking” extolled: Colleges and universities claim to be stuffed with the thing, and even high schools—even, mirabile dictu, primary schools—brag about instilling the principles of “critical thinking” in their charges. There’s “critical thinking” for bankers, for accountants, for cooks, gardeners, haberdashers, and even advanced toddlers. A couple of summers ago, my wife and I took our son, then five years old, to an orientation meeting for parents considering sending their children to a local kindergarten. School officials enthusiastically told us about how they would bring the principles of critical thinking to Sally’s playpen and little Johnnie’s sport. Absolutely everyone is enjoined to scrutinize his presuppositions, reject conventional thinking, and above all, to be original and/or “creative.” (Ponder, if your stomach is strong enough, a “Creative Critical Thinking Community.”)

To some extent, we owe the infestation of “critical thinking” to that great twentieth-century movement to empty minds while at the same time inflating the sense of self-importance, or, to give it its usual name, Progressive Education. It was John Dewey, after all, who told us that “education as such has no aims,” warned about “the vice of externally imposed ends,” and urged upon his readers the notion that “an individual can only live in the present.” (The present, Dewey said, “is what life is in leaving the past behind it,” i.e., a nunc stans of perfect ignorance.)

The first thing to notice about the vogue for “critical thinking” is that it tends to foster not criticism but what one wit called “criticismism”: the “ism” or ideology of being critical, which, like most isms, turns out to be a parody or betrayal of the very thing it claims to champion. Criticismism is an attitude guaranteed to instill querulous dissatisfaction, which is to say ingratitude, on the one hand, and frivolousness, on the other. Its principal effect, as the philosopher David Stove observed, has been “to fortify millions of ignorant graduates and undergraduates in the belief, to which they are already only too firmly wedded by other causes, that the adversary posture is all, and that intellectual life consists in ‘directionless quibble.’”

The phrase “directionless quibble” is from Jacques Barzun’s The House of Intellect, and a fine book it is, too, not least in its appreciation of the ways in which unanchored intellect can be “a life-darkening institution.” I suggest, however, that the phrase “directionless quibble” is not entirely accurate, since the habit of quibble cultivated by “critical thinking” does have a direction, namely against the status quo. The belief, as Stove puts it, “that the adversary posture is all” is at the center of “critical thinking,” of criticismism. Lionel Trilling spoke in this context of “the adversary culture of the intellectuals.” I well remember the day I received word of a long article in Teachers College Record, a journal published by Teachers College, Columbia University, which describes itself as “the voice of scholarship in education.” The featured article was a 30,000-word behemoth by a professor of “inquiry and philosophy” called “Ocularcentrism, Phonocentrism and the Counter Enlightenment Problematic: Clarifying Contested Terrain in our Schools of Education.” I am too charitable to subject you to a sample of its almost comically reader-proof prose (you can see for yourself here), but it is worth pausing to note that such work is absolutely typical in the academic establishment today. It really is “the voice of scholarship,” or what’s become of scholarship.

How we got here makes for a long story. I’d like to dip into a few chapters of that story and then speculate briefly about what an alternative might look like.

It seems obvious that criticismism is a descendant or re-enactment of the Enlightenment imperative “Dare to Know!” In this sense, it is a precursor or adjunct of that “hermeneutics of suspicion” that the French philosopher Paul Ricoeur invoked when discussing the intellectual and moral demolition carried out by thinkers like Darwin, Marx, Freud, and Nietzsche. It would be hard to exaggerate the corrosive nature of these assaults. Often, indeed, what we encounter is less a hermeneutics of suspicion than a hermeneutics of contempt. The contempt expresses itself partly in a repudiation of the customary, the conventional, the habitual, partly in the cult of innovation and originality. Think, for example, of John Stuart Mill’s famous plea on behalf of moral, social, and intellectual “experiments in living.” Part of what makes that phrase so obnoxious is Mill’s effort to dignify his project of moral revolution with the prestige of science—as if, for example, his creepy relationship with the married Harriet Taylor were somehow equivalent to Michael Faraday’s experiments with electromagnetism. You see the same thing at work today when young hedonists in search of oblivion explain that they are “experimenting” with drugs.

It is worth pausing over Mill’s brief on behalf of innovation. You’ve heard it a thousand times. But familiarity should not blind us to its fatuous malevolence. Throughout history, Mill argues, the authors of such innovations have been objects of ridicule, persecution, and oppression; they have been ignored, silenced, exiled, imprisoned, even killed. But (Mill continues) we owe every step of progress, intellectual as well as moral, to the daring of innovators. “Without them,” he writes, “human life would become a stagnant pool. Not only is it they who introduce good things which did not before exist; it is they who keep the life in those which already exist.” Ergo, innovators—“developed human beings” is one phrase Mill uses for such paragons—should not merely be tolerated but positively be encouraged as beacons of future improvement. David Stove called this the “They All Laughed at Christopher Columbus” argument. In a penetrating essay in his book Cricket Versus Republicanism (1995), Stove noted that “the Columbus argument” (as he called it for short) “has swept the world.”

With every day that has passed since Mill published it, it has been more influential than it was the day before. In the intellectual and moral dissolution of the West in the twentieth century, every step has depended on conservatives being disarmed, at some critical point, by the Columbus argument; by revolutionaries claiming that any resistance made to them is only another instance of that undeserved hostility which beneficial innovators have so regularly met with in the past.

The amazing thing about the success of the Columbus argument is that it depends on premises that are so obviously faulty. Indeed, a moment’s reflection reveals that the Columbus argument is undermined by a downright glaring weakness. Granted that every change for the better has depended on someone embarking on a new departure: well, so too has every change for the worse. And surely, Stove writes, there have been at least as many proposed innovations which “were or would have been for the worse as ones which were or would have been for the better.” Which means that we have at least as much reason to discourage innovators as to encourage them, especially when their innovations bear on things as immensely complex as the organization of society. As Lord Falkland admonished, “when it is not necessary to change, it is necessary not to change.”

The triumph of Millian liberalism—one of the main “active ingredients” in “critical thinking”—shows that such objections have fallen on deaf ears. But why? Why have “innovation,” “originality,” etc., become mesmerizing charms that neutralize criticism before it even gets started when so much that is produced in the name of innovation is obviously a change for the worse? An inventory of the fearsome social, political, and moral innovations made in this past century alone should have made every thinking person wary of unchaperoned innovation.

One reason that innovation has survived with its reputation intact, Stove notes, is that Mill and his heirs have been careful to supply a “one-sided diet of examples.” You mention Columbus, but not Stalin; Copernicus, but not the Marquis de Sade; Socrates, but not Robespierre. Mill never missed an opportunity to expatiate on the value of “originality,” “eccentricity,” and the like. “The amount of eccentricity in a society,” he wrote, “has generally been proportional to the amount of genius, mental vigor, and moral courage it contained.” But you never caught Mill dilating on the “improvement on established practice” inaugurated by Robespierre and St. Just, or the “experiments in living” conducted by the Marquis de Sade.

Still, in order to understand the Columbus argument’s world-conquering success, one has to go beyond simple credulity and an abundance of one-sided examples. Flattery comes into it. Mill was exceptionally adroit at appealing to his readers’ moral vanity. When he spoke (as he was always speaking) of “persons of decided mental superiority” he made it seem as though he might actually be speaking about them. Mill said that there was “no reason that all human existence should be constructed on some one or some small number of patterns.” Quite right! Even if persons of genius are always likely to be “a small minority,” still we must “preserve the soil in which they grow.” Consequently, people have a duty to shun custom and nurture their individual “self-development” if they are not to jeopardize “their fair share of happiness” and the “mental, moral, and aesthetic stature of which their nature is capable.”

Mill’s blandishments went even deeper. In On Liberty, Mill presented himself as a prophet of individual liberty. He has often been regarded as such, especially by liberal academics, who of course have been instrumental in propagating the gospel according to Mill. And “gospel” is the mot juste. Like many radical reformers, Mill promised almost boundless freedom, but he arrived bearing an exacting new system of belief. In this sense, as Maurice Cowling argues, On Liberty has been “one of the most influential of modern political tracts,” chiefly because “its purpose has been misunderstood.” Contrary to common opinion, Cowling wrote, Mill’s book was

not so much a plea for individual freedom, as a means of ensuring that Christianity would be superseded by that form of liberal, rationalising utilitarianism which went by the name of the Religion of Humanity. Mill’s liberalism was a dogmatic, religious one, not the soothing night-comforter for which it is sometimes mistaken. Mill’s object was not to free men, but to convert them, and convert them to a peculiarly exclusive, peculiarly insinuating moral doctrine. Mill wished to moralize all social activity. . . . Mill, no less than Marx, Nietzsche, or Comte, claimed to replace Christianity by “something better.” Atheists and agnostics, humanists and free-thinkers may properly give thanks to Mill.

This tension in Mill’s work—between Mill the libertarian and Mill the moralistic utilitarian—helps to account for the vertiginous quality that suffuses the liberalism for which On Liberty was a kind of founding scripture. Mill’s announced enemy can be summed up in words like “custom,” “prejudice,” “established morality.” All his work goes to undermine them—not because the positions they articulate are necessarily in error but simply because, being customary, accepted on trust, established by tradition, they have not been subjected to the acid test of his version of the utilitarian calculus.

The tradition that Mill opposed celebrated custom, prejudice, and established morality precisely because they had prevailed and given good service through the vicissitudes of time and change; their longevity was an important token of their worthiness. Let us by all means acknowledge, as Edmund Burke acknowledged, that “a state without the means of some change is without the means of its conservation.” Still, Burke was right to extol prejudice as that which “renders a man’s virtue his habit. . . . Through just prejudice, his duty becomes a part of his nature.”

Mill overturned this traditional view. Indeed, he was instrumental in getting the public to associate “prejudice” indelibly with “bigotry.” He epitomized what the German philosopher Hans-Georg Gadamer called the Enlightenment’s “prejudice against prejudice.”

For Mill, established morality is suspect first of all just because it is established. His liberalism is essentially corrosive of existing societal arrangements, institutions, and morality. At bottom, Mill’s philosophy is a kind of inversion of Alexander Pope’s optimism: “Whatever is, is suspect” might have been Mill’s motto. He constantly castigated such things as the “magical influence of custom” (“magical” being a negative epithet for Mill), the “despotism of custom [that] is everywhere the standing hindrance to human advancement,” the “tyranny of opinion” that makes it so difficult for “the progressive principle” to flourish. According to Mill, the “greater part of the world has, properly speaking, no history because the sway of custom has been complete.”

Such passages reveal the core of moral arrogance inhabiting Mill’s liberalism. Liberty was always on Mill’s lips; a new orthodoxy was ever in his heart. There is an important sense in which the libertarian streak in On Liberty is little more than a prophylactic against the coerciveness that its assumption of virtuous rationality presupposes.

Such “paradoxes” (to put it politely) show themselves wherever the constructive part of Mill’s doctrine is glimpsed through his cheerleading for freedom and eccentricity. Mill’s doctrine of liberty begins with a promise of emancipation. The individual, in order to construct a “life plan” worthy of his nature, must shed the carapace of inherited opinion. He must learn to become adept at “critical thinking,” to subject all his former beliefs to rational scrutiny. He must dare to be “eccentric,” “novel,” “original.” At the same time, Mill notes, not without misgiving, that

As mankind improve, the number of doctrines which are no longer disputed or doubted will be constantly on the increase; the well-being of mankind may almost be measured by the number and gravity of the truths which have reached the point of being uncontested. The cessation, on one question after another, of serious controversy is one of the necessary incidents of the consolidation of opinion—a consolidation as salutary in the case of true opinions as it is dangerous and noxious when the opinions are erroneous.

In other words, the partisan of Millian liberalism undertakes the destruction of inherited custom and belief in order to construct a bulwark of custom and belief that can be inherited. As Mill put it in his Autobiography (posthumously published in 1873),

I looked forward, through the present age of loud disputes but generally weak convictions, to a future . . . [in which] convictions as to what is right and wrong, useful and pernicious, [will be] deeply engraven on the feelings by early education and general unanimity of sentiment, and so firmly grounded in reason and in the true exigencies of life, that they shall not, like all former and present creeds, religious, ethical, and political, require to be periodically thrown off and replaced by others.

So: a “unanimity of sentiment” (a.k.a. custom) is all well and good as long as it is grounded in the “true exigencies of life”—as defined, of course, by J. S. Mill.

Mill’s utilitarianism provides one major model for criticismism. Another is found in the work of that modern Thrasymachus, Friedrich Nietzsche. In a celebrated epigram, Nietzsche wrote that “we have art lest we perish from the truth.” His disturbing thought was that art, with its fondness for illusion and make-believe, did not so much grace life as provide grateful distraction from life’s horrors. But Nietzsche’s real radicalism came in the way that he attempted to read life against truth.

Inverting the Platonic-Christian doctrine that linked truth with the good and the beautiful, Nietzsche declared truth to be “ugly”—a statement that, even now, has the capacity to bring one up short. Suspecting that “the will to truth might be a concealed will to death,” Nietzsche boldly demanded that “the value of truth must for once be experimentally called into question.” This ambition to put truth itself under the knife of human scrutiny is as it were the moral source of all those famous Nietzschean formulae about truth and knowledge—that “there are no facts, only interpretations,” that “to tell the truth is simply to lie according to a fixed convention,” etc.

As Nietzsche recognized, his effort to provide a genealogy of truth led directly “back to the moral problem: Why have morality at all when life, nature, and history are ‘not moral’?” Nietzsche’s influence on contemporary intellectual life can hardly be overstated. “I am dynamite,” he declared shortly before sinking into irretrievable madness. He was right. In one way or another, his example is an indispensable background to almost every destructive intellectual movement the last century witnessed: Deconstruction, post-structuralism, just about anything followed by the word “studies” (gender studies, science studies, post-colonial studies): all trace a large part of their pedigree to Nietzsche’s obsession with power, in particular his subjugation of truth to scenarios of power. Foucault’s insistence that truth is always a coefficient of “regimes of power,” for example, is simply Nietzsche done over in black leather. And where would our deconstructionists and poststructuralists be without Nietzsche’s endlessly quoted declaration that truth is “a moveable host of metaphors, metonymies, and anthropomorphisms”?

The philosopher Richard Rorty summed up Nietzsche’s importance when he enthusiastically observed that “it was Nietzsche who first explicitly suggested that we drop the whole idea of ‘knowing the truth.’” Add a dollop of Marx for the appropriate degree of politicization and presto: you have the formula for contemporary redactions of critical thinking.

Conceptually, such signature Nietzschean observations as “everything praised as moral is identical in essence with everything immoral” add little to the message that Thrasymachus was dispensing twenty-five hundred years ago. They are the predictable product of nominalism and the desire to say something shocking, a perennial combination among the intellectually impatient. Nietzsche’s real radicalism arises from the grandiosity of his hubris. His militant “God is dead” atheism had its corollary: the dream of absolute self-creation, of a new sort of human being strong enough to dispense with inherited morality and create, in Nietzsche’s phrase, its “own new tables of what is good.” This ambition is at the root of Nietzsche’s goal of effecting a “transvaluation of all values.” It is also what makes his philosophy such an efficient solvent of traditional moral thought.

Truth vs. life: it was Nietzsche’s startling conclusion that science was at bottom allied with nihilism because of its uncompromising commitment to truth. “All science,” he wrote, “has at present the object of dissuading man from his former respect for himself.” In order to salvage life from science “the value of truth must for once be experimentally called into question.” It is one of the curious features of Nietzsche’s mature thought that he wished to question the value of truth while upholding honesty as his one remaining virtue. Traditionally, the moral virtues have been all of a piece. For example, Aquinas observes that “nearly all are agreed in saying” that the moral virtues are interconnected, that “discernment belongs to prudence, rectitude to justice,” and so on. It is worth asking whether honesty, sundered from the family of virtues, remains a virtue—whether, in the end, it even remains honest. Untempered by other virtues, honesty functions not so much to reveal truth as to expose it. Is that honest?

Nietzsche clung to honesty after abandoning the other virtues because it allowed him to fashion the most ruthless instrument of interrogation imaginable. Difficulty, not truth, became his criterion of value. Thus he embraced the horrifying idea of the Eternal Recurrence primarily because he considered it “the hardest possible thought”—whether it was also true didn’t really matter.

Nietzsche opposed honesty to truth. He looked to art as a “countermovement to nihilism” not because he thought that art could furnish us with the truth but because it accustomed us to living openly with untruth. Ultimately, Nietzsche’s ideal asks us to transform our life into a work of art. Accepting Schopenhauer’s inversion of the traditional image of man, Nietzsche no longer finds human life dignified in itself: if man is essentially an expression of irrational will, then in himself he is morally worthless.

This is the dour irony that attends Nietzsche’s effort to burden man with the task of creating values rather than acknowledging them. And it is here, too, that Nietzsche’s aestheticism and his rejection of morality intersect. For Nietzsche, man is not an end in himself but only “a bridge, a great promise.” In order to redeem that promise, man must treat life with the same imperiousness and daring that the artist brings to his work. If, as Nietzsche argued, “life itself is essentially appropriation, injury, overpowering what is alien and weaker; suppression, hardness, . . . and at least, at its mildest, exploitation,” then it is hardly surprising that the perfect aesthete will also be the perfect tyrant.

Nietzsche never tired of pointing out that the demands of traditional morality fly in the face of life. One might say, Yes, and that is precisely why morality is so valuable: it acknowledges that man’s allegiance is not only to life but also to what ennobles life—that, indeed, life itself is not the highest court of appeals. But for Nietzsche the measure of nobility is the uninhibited pulse of life: hence his penchant for biological and physiological metaphors, his invocation of “ascending” and “descending” forms of art and life. He defines the good as that which enhances the feeling of life. If “to see others suffer does one good, to make others suffer even more,” then violence and cruelty may have to be granted the patent of morality and enlisted in the aesthete’s palette of diversions. In more or less concentrated form, Nietzsche’s ideal is also modernity’s ideal. It is an ideal that subordinates morality to power in order to transform life into an aesthetic spectacle. It promises freedom and exaltation. But as Novalis points out, it is really the ultimate attainment of the barbarian.

The impulse of criticismism comes in a variety of flavors, from bitter to cloyingly sweet, and it can be made to serve a wide range of philosophical outlooks. That is part of what makes it so dangerous. One of its most beguiling American practitioners was Richard Rorty, who until his death in June 2007 was probably the most influential American academic philosopher of his generation. Once upon a time, Rorty was a serious analytic philosopher. From the late 1970s on, however, he increasingly busied himself explaining why philosophy must jettison its concern with outmoded things like truth and human nature. According to him, philosophy should turn itself into a form of literature or—as he sometimes put it—“fantasizing.” He was set on “blurring the literature-philosophy distinction and promoting the idea of a seamless, undifferentiated ‘general text,’” in which, say, Aristotle’s Metaphysics, a television program, and a French novel might coalesce into a fit object of hermeneutical scrutiny. Thus it is that Rorty believed that “the novel, the movie, and the TV program have, gradually but steadily, replaced the sermon and the treatise as the principal vehicles of moral change and progress.”

As almost goes without saying, Rorty’s attack on philosophy and his celebration of culture as an “undifferentiated ‘general text’” earned him many honors. Indeed, Richard Rorty was widely regarded as he regarded himself: as a sort of secular sage, dispensing exhortations on all manner of subjects, as readily on the op-ed page of major newspapers as between the covers of an academic book of philosophical essays. The tone was always soothing, the rhetoric impish, the message nihilistic but cheerful. It has turned out to be an unbeatable recipe for success, patronizing the reader with the thought that there is nothing that cannot be patronized.

Rorty did not call himself a utilitarian or a Nietzschean. That might be too off-putting. Instead, he called himself a “pragmatist” or, in the last decade of his life, a “liberal ironist.” What Rorty wanted, as he explained in his book Philosophy and the Mirror of Nature, was “philosophy without epistemology,” that is, philosophy without truth. In brief, Rorty wanted a philosophy (if we can still call it that) which “aims at continuing the conversation rather than at discovering truth.” He could manage to abide “truths” with a small “t” and in the plural: truths that we don’t take too seriously and wouldn’t dream of foisting upon others: truths, in other words, that are true merely by linguistic convention: truths, that is to say, that are not true. What he could not bear—and could not bear to have us bear—was the idea of Truth that is somehow more than that.

Rorty generally tried to maintain a chummy, easygoing persona. This was consistent with his role as a “liberal ironist,” i.e., someone who thinks that “cruelty is the worst thing we can do” (the liberal part) but who, believing that moral values are utterly contingent, also believes that what counts as “cruelty” is a sociological or linguistic construct. (This is where the irony comes in: “I do not think,” Rorty wrote, “there are any plain moral facts out there . . . nor any neutral ground on which to stand and argue that either torture or kindness are [sic] preferable to the other.”)

Accordingly, one thing that was certain to earn Rorty’s contempt was the spectacle of philosophers without sufficient contempt for the truth. “You can still find philosophy professors,” he witheringly continued, “who will solemnly tell you that they are seeking the truth, not just a story or a consensus but an honest-to-God, down-home, accurate representation of the way the world is.” That’s the problem with liberal ironists: they are ironical about everything except their own irony, and are serious about tolerating everything except seriousness.

As Rorty was quick to point out, the “bedrock metaphysical issue” here is whether we have any non-linguistic access to reality. Does language “go all the way down”? Or does language point to a reality beyond itself, a reality that exercises a legitimate claim on our attention and provides a measure and limit for our descriptions of the world? In other words, is truth something that we invent? Or something that we discover?

The main current of Western culture has overwhelmingly endorsed the latter view. But Rorty firmly endorsed the idea that truth is merely a human invention. He wanted us to drop “the notion of truth as correspondence with reality altogether” and realize that there is “no difference that makes a difference” between the statement “it works because it’s true” and “it’s true because it works.” He told us that “Sentences like . . . ‘Truth is independent of the human mind’ are simply platitudes used to inculcate . . . the common sense of the West.” Of course, Rorty was right that such sentences “inculcate . . . the common sense of the West.” He was even right that they are “platitudes.” The statement “The sun rises in the east” is another such platitude.

Rorty looked forward to a culture—he called it a “liberal utopia”—in which the “Nietzschean metaphors” of self-creation are finally “literalized,” i.e., made real. For philosophers, or people who used to be philosophers, this would mean a culture that “took for granted that philosophical problems are as temporary as poetic problems, that there are no problems which bind the generations together in a single natural kind called ‘humanity.’”

Rorty recognized that most people are not yet liberal ironists. Many people still believe that there is such a thing as truth independent of their thoughts. Some even continue to entertain the idea that their identity is more than a distillate of biological and sociological accidents. Rorty knew this. Whether he also knew that his own position as a liberal ironist crucially depended on most people being non-ironists is another question. One suspects not. In any event, he was clearly impatient with what he referred to as “a particular historically conditioned and possibly transient” view of the world, that is, the pre-ironical view for which things like truth and morality still matter. Rorty, in short, was a connoisseur of contempt. He could hardly have been more explicit about this. He told his readers in the friendliest possible way that he wanted them to “get to the point where we no longer worship anything, where we treat nothing as a quasi divinity, where we treat everything—our language, our conscience, our community—as a product of time and chance.”

In short, what Rorty wanted was philosophy without philosophy. The “liberal utopia” he envisioned is a utopia in which philosophy as traditionally conceived has conveniently emasculated itself, abandoned the search for truth, and lives on as a repository of more or less bracing exercises in fantasy. In his book Overcoming Law, the jurist and legal philosopher Richard Posner criticizes Rorty for his “deficient sense of fact” and “his belief in the plasticity of human nature,” noting that both are “typical of modern philosophy.” They are typical, anyway, of certain influential strains of modern philosophy. And it is in the union of these two things—a deficient sense of fact and a utopian belief in the unbounded plasticity of human nature —that the legacy of Nietzsche bears its most poisonous fruit.

The cognitive pessimism espoused by figures such as Rorty has moral as well as intellectual implications. When Rorty, expatiating on the delights of his liberal utopia, said that “a postmetaphysical culture seems to me no more impossible than a postreligious one, and equally desirable,” he perhaps spoke truer than he purposed. For despite the tenacity of non-irony in many sections of society, there is much in our culture that shows the disastrous effects of Nietzsche’s dream of a postmetaphysical, ironized society of putative self-creators. And of course to say that such a society would be as desirable as a postreligious society amounts to saying also that it would be just as undesirable.

Like his fellow liberal ironists, Rorty took radical secularism as an unarguable good. For him, religion, like truth—like anything that transcends our contingent self-creations—belonged to the childhood of mankind. Ironists are beyond all that, and liberal ironists are beyond it with a smile and a little joke.

But of course whether our culture really is “postreligious” remains very much an open question. That liberal ironists such as Richard Rorty make do without religion does not tell us very much about the matter. In an essay called “The Self-Poisoning of the Open Society,” the Polish philosopher Leszek Kolakowski observes that the idea that there are no fundamental disputes about moral and spiritual values is “an intellectualist self-delusion, a half-conscious inclination by Western academics to treat the values they acquired from their liberal education as something natural, innate, corresponding to the normal disposition of human nature.” Since liberal ironists like Richard Rorty do not believe that anything is natural or innate, Kolakowski’s observation has to be slightly modified to fit him. But his general point remains, namely that “the net result of education freed of authority, tradition, and dogma is moral nihilism.” Kolakowski readily admits that the belief in a unique core of personality “is not a scientifically provable truth.” But he argues that, “without this belief, the notion of personal dignity and of human rights is an arbitrary concoction, suspended in the void, indefensible, easy to be dismissed,” and hence prey to totalitarian doctrines and other intellectual and spiritual deformations.

The Promethean dreams of writers such as Nietzsche and Rorty depend critically on their denying the reality of anything that transcends the prerogatives of their efforts at self-creation. Traditionally, the recognition of such realities has been linked to a recognition of the sacred. It is a mistake typical of intellectuals to believe that this link can be severed with impunity. As Kolakowski notes elsewhere, “Culture, when it loses its sacred sense, loses all sense.”

With the disappearance of the sacred . . . arises one of the most dangerous illusions of our civilization—the illusion that there are no limits to the changes that human life can undergo, that society is “in principle” an endlessly flexible thing, and that to deny this flexibility and this perfectibility is to deny man’s total autonomy and thus to deny man himself.

It is a curious irony that proponents of criticismism from Mill and Nietzsche to Richard Rorty are reluctant children of the Enlightenment. Remember Kant’s motto for the Enlightenment: sapere aude, “Dare to know!” For the proponent of “critical thinking,” the liberal ironist, and other paragons of disillusionment, that motto has been revised to read “Dare to believe that there is nothing to know.” The Enlightenment sought to emancipate man by liberating reason and battling against superstition. It has turned out, however, that when reason is liberated entirely from tradition—which means also when it is liberated entirely from any acknowledgment of what transcends it—reason grows rancorous and hubristic: it becomes, in short, something irrational.

Philosophy itself has been an important casualty of this development. It is no accident that so much modern philosophy has been committed to bringing us the gospel of the end of philosophy. Once it abandons its vocation as the love of wisdom, philosophy inevitably becomes the gravedigger of its highest ambitions, interring itself with tools originally forged to perpetuate its service to truth.

It is an axiom of criticismism that the extent of our disillusionment is a reliable index of our wisdom: the idea that somehow the less we believe the more enlightened we are. There is, however, a curious irony here. For there is an important sense in which philosophy must contribute to the reduction of human experience. At least, it must begin by contributing to it, and this for the same reason that philosophy cannot proceed without a large element of doubt. There is something essentially corrosive about the probing glance of philosophy: something essentially dis-illusioning. If our goal is a human understanding of the world, then the activity of philosophy must itself be part of what philosophy investigates and criticizes.

Yet if philosophy begins by interrogating our everyday understanding of the world, all of its fancy conceptual footwork is for naught if it does not in the end lead us to affirm a fully human world. It is a delicate matter. In one sense, philosophy is the helpmeet of science. It aids in the task of putting our conceptual household in order: tidying up arguments, discarding unjustified claims. But in another sense, philosophy peeks over the shoulder of science to a world that science in principle cannot countenance. The problem is that we do not, cannot, inhabit the abstract world that science describes. Scientific rationality replaces the living texture of experience with a skeleton of “causes,” “drives,” “impulses,” and the like.

The enormous power over nature that science has brought man is only part of its attraction. Psychologically just as important is the power it gives one to dispense with the human claims of experience. How liberating to know that kindness is just another form of egotism! That beauty is merely a matter of fatty tissues being arranged properly! That every inflection of our emotional life is nothing but the entirely predictable result of glandular activity! Just another, merely, nothing but . . . How liberating, how dismissive are these instruments of dispensation—but how untrue, finally, to our experience.

In this sense, scientific rationality is a temptation as well as an accomplishment because inherent in its view of the world is an invitation to forget one’s humanity. It is this Promethean aspect of science that links it with evil. As the Austrian novelist Robert Musil observed, the feeling that “nothing in life can be relied on unless it is firmly nailed down is a basic feeling embedded in the sobriety of science; and though we are too respectable to call it the Devil, a slight whiff of brimstone still clings to it.”

Reason allows us to distinguish between appearance and reality; but our reality turns out to be rooted firmly in the realm of appearance. As the English philosopher Roger Scruton observed,

The scientific attempt to explore the “depth” of human things is accompanied by a singular danger. For it threatens to destroy our response to the surface. Yet it is on the surface that we live and act: it is there that we are created, as complex appearances sustained by the social interaction which we, as appearances, also create. It is in this thin top-soil that the seeds of human happiness are sown, and the reckless desire to scrape it away—a desire which has inspired all those “sciences of man,” from Marx and Freud to sociobiology—deprives us of our consolation.

Consolation? Indeed, more: it threatens to deprive us of our humanity. In Plato’s phrase, philosophy turns out in the end to be an effort to “save the appearances.”

We all of us inhabit a world irretrievably shaped by science; we know that the sun does not really move from east to west, just as we know that the stars are not really hung like lamps from the sky. And yet, and yet: we recognize the legitimacy of that reality—our reality—every time we wake and find that the sun, once again, has risen. Enlightenment is a grand idea. But Bismarck was right about laws and sausages.
