"The Truth in Ethical Relativism"

by Hugh LaFollette

Journal of Social Philosophy, 1991, 146-54.


Ethical relativism is the thesis that ethical principles or judgments are relative to the individual or culture. When stated so vaguely, relativism is embraced by numerous lay persons and a sizeable contingent of philosophers. Other philosophers, however, find the thesis patently false and even wonder how anyone could seriously entertain it. Both factions are on to something, yet both miss something significant as well. Those who whole-heartedly embrace relativism note salient respects in which ethics is relative, yet erroneously infer that ethical values are noxiously subjective. Those who reject relativism do so because they think ethics is subject to rational scrutiny, that moral views can be correct or incorrect. But in rejecting objectionable features of relativism, they overlook significant yet non-pernicious ways in which ethics is relative.

In short, each side harps on the opponent's weaknesses while overlooking its own flaws. That is regrettable. We are not forced to choose between relativism and rationality. We can have both. There are ways in which ethical principles and behavior vary legitimately from culture to culture and individual to individual. That we must recognize. However, this in no way suggests we cannot reason about ethics. Rather, we should strive for a rational yet relativistic ethic which emphasizes the exercise of cultivated moral judgment rather than the rote application of extant moral rules. Or so I shall argue.

 


Situation Sensitivity

Most if not all ethicists recognize that ethical principles are relative in one sense, namely that they are situation-sensitive. Proffered moral rules like "Don't lie" are objectionable in undiluted form, we are told, since allegiance to them invites morally horrendous consequences. Textbook wisdom has it that these rules are not absolute prescriptions to be unwaveringly followed. Instead they are rules of thumb, abridgements of unexpurgated moral principles with specific qualifications or ceteris paribus clauses. Thus "Don't lie" is short for "Don't lie unless one must do so to avert great moral harm," or even more vaguely, "Don't lie, other things being equal." These "complete" principles are presumably general (i.e., relatively context-free) and exceptionless (applicable to all cases).[1] Thus, although the principles are absolute, what they prescribe varies, depending on the relevant features of the case.

Most philosophers recognize it is difficult, if not impossible, to delineate all and only such principles. We are too limited intellectually. Nonetheless we must assume there are such principles, and strive to formulate (rough approximations of) them. Without this regulative assumption, they argue, we must conclude there are conflicting ethical opinions that are equally valid. That, Brandt says, is the deleterious essence of relativism.[2]

This textbook explanation of the situation relativity of moral rules is correct as far as it goes. No specific rules can handle all the situations we face. It is dangerous to let simple maxims masquerade as full-blown moral principles. People mistake the substitute for the original and thereby ignore relevant moral complexities. We need general situation-sensitive rules.

However, such rules are not enough. General situation-sensitive rules will be effective only if we know in advance which features are morally relevant. That we cannot invariably know, since what is morally relevant may emerge only in the circumstances. Consequently, different individuals may legitimately act differently. Furthermore, moral relevance depends not only on the circumstances, but also on the personality of the moral agent. Let me explain.

 


Two relativities

a) Advantages of Moral Diversity

Most ethicists acknowledge we are often uncertain about moral means. We know we should respect another's autonomy or know that we should be kind, but we may not know how. To be moral we must determine how to do what we know we should do.

This model suggests, however, that although we have difficulty ascertaining the appropriate moral means, we usually or always know the moral ends. Not so. Moral ends are frequently indeterminate and sometimes conflicting. Although some purported moral ends (e.g., "purifying the Aryan race") are morally intolerable, we should not assume there is only one moral norm. Some variation in ends contributes to human flourishing, and should be embraced.

The point can be illustrated in another context. Philosophy professors have roughly similar teaching goals. Nonetheless they often disagree about these goals' relative importance. Some professors think their principal task is to introduce students to the great thinkers; others, to expose students to a particular array of philosophical problems; still others, to help students to think, to critically examine their own lives.

Most think these goals are mutually supportive. Nonetheless, they do rank goals differently and these differences modulate their teaching. The first wants students to get their hands on the classic texts. The second emphasizes the pervasiveness of traditional philosophical problems. The third wants students to adopt a critical attitude toward life.

Although most professors have a favored view, they recognize the appeal of alternatives. They realize education and the philosophic community profit from diversity. Some students learn more from teachers with one style; others thrive under the tutelage of competitors. Therefore, to serve students well, we must encourage teachers to develop their own styles. Our discipline likewise benefits from diversity. Ongoing discussions about philosophy's proper goals keep us honest.

Similarly for ethics. Following Mill's arguments in On Liberty, we should see divergence in moral ends not as an unavoidable evil, but as a factor contributing to human advance and moral excellence. We should not merely tolerate diversity; we should embrace it; we should seek exposure to views different from our own; we should encourage variety of thought and action. Otherwise we will stagnate; we will fail to achieve our human potential.

Let me offer a historical example which illustrates my point. The Civil Rights Movement was composed of people with conflicting ideologies. Although all agreed blacks were mistreated by the white majority, agreement went little further. Some were separatists; others, integrationists; still others wanted merely to diminish the onerous restraints on blacks. Even those who endorsed similar ends clashed over means. Some advocated open rebellion; others shunned rebellion though made it clear they would use violence to protect themselves; still others renounced all violence and adopted a Gandhian stance of pacifistic resistance. Advocates of each stance doubtless thought others were mistaken, perhaps even insincere or malicious. Depending on their perspective, "opponents" were deemed revolutionaries, hotheads, chickens, or Toms. Most assumed their and only their approach was correct. Typical ethics textbook explanations would concur: only one approach is morally proper. Persons who think otherwise are relativists.

I question this appraisal. There are limits to what is (was) morally worthy, though I am often uncomfortable with attempts to strictly delimit those boundaries. In particular, although I find some of the mentioned options defective by my lights, when I step back and observe the historical movement, I am impressed by the necessity of these different styles. Without them the movement would not have made the progress it has (which, as it turns out, is altogether inadequate). The "hotheads" demonstrated to recalcitrant whites just how serious they were, thereby forcing whites to acknowledge the systematic mistreatment of blacks. If all had been hotheads, however, the movement would have been crushed as a rebellion. Cooler, more temperate members made the movement less threatening, more palatable, to the white majority. If a preponderance of members had been at either extreme, the movement would have faltered, if not collapsed.

My point can be stated differently by putting a twist on the generalization argument. Kant, M. Singer, and others have argued that we can determine if an action is morally acceptable by asking "What if everyone did it?" The generalization argument helps show why we should all follow a single norm even when my violating the norm does not have detrimental consequences. It explains, for example, why I should not walk on the grass even though my doing so will not hurt the grass. If everyone walked on the grass, the grass would die. Moreover, it would be unfair to allow me to walk on the grass while forbidding you from doing so. Thus, the generalization argument shows that it is wrong for me (and you and everyone else).

Using this argument (in a way the authors did not intend), we can also show that there is no one way all people in the Civil Rights Movement should have acted. For no matter how they acted, had all others acted the same, the results would have been disastrous. Therefore, variant behaviors would not have been wrong; everyone's following a single moral norm would have been wrong.

All large-scale movements thrive on diversity. Movements need dreamers, for even if the dreams are unrealizable, they enable others in the movement to envision and fashion a better world. And there must be others who blend dreaming and action. If all behaved similarly, the movement would fail.[3]

Before I move on to the next point I need to block one inference some will draw from what I have said. I have argued that morality thrives on diversity; without it important moral advances would never have been achieved. That is not to say, however, that all behavior which may contribute to moral goals is morally legitimate. Bull Connor's decision to sic the dogs on demonstrators may have helped speed Civil Rights legislation, someone might contend. Yet that doesn't show his actions were moral. Agreed. My contention is not that all contributory actions are moral, only that some range of them is. The movement would have continued had Bull Connor reacted differently. I have only claimed that it would not have advanced had everyone acted identically. I have no algorithm for specifying which contributory actions are moral, but then, on my account there aren't moral algorithms, so that is neither surprising nor objectionable.

 


b) Ethical Relevance of Personality Traits

How we should act will also depend to some degree on our personalities. As I noted earlier, ethicists acknowledge that moral principles are situation-sensitive. Relevant shifts in external circumstances justify acting differently. However, many ethicists steadfastly deny that personality differences make a moral difference. That is a mistake. To illustrate, let me borrow an example developed in an earlier paper.[4] Suppose I have a depressed friend; she is beset with personal troubles. How should I relate to her? Should I be a non-judgmental listener, sensitive to her current lot? Should I offer advice, even if it is not requested? Or should I simply ignore, or at least downplay, the trauma to try to help her "get on with her life"? Doubtless at that time she will find some of these responses more than a bit annoying. Nonetheless, a range of reactions is likely important for helping her deal with her troubles. If all her friends were sensitive listeners, she might become mired in her trauma, ignoring her own possible contributions to her troubles. If all her friends dished out advice, she might lose self-respect. One pat set of reactions will not do. She would suffer and, more generally, the world would be morally diminished, if everyone played identical roles.

Realizing that, how should I relate to my friend? On my account, no rule will provide a peremptory directive. I must decide what I, with my particular temperament and abilities, can best do to respond sensitively to her. If I have cultivated sensitivity and kindness, I may respond appropriately. Yet there is no moral theory to specify exactly how I should act.

Of course we do have clear rules forbidding flagrant ethical offenses, e.g., killing and rape. Rules against such actions set the "bottom line" below which no one should fall. However, most ethical questions are not so easily resolved. That has led some thinkers to conclude that this "bottom line" exhausts morality proper, that all other concerns are matters of supererogation. Perhaps we can make a suitable distinction between duty and supererogation. However, if we do, we should not draw the distinction so that it characterizes all interpersonal interactions as merely supererogatory.[5]

However, we must recognize that some and perhaps most of our moral judgments are debatable and require considerable defense. In these cases, diversity of action not only benefits us directly, as in the case of mass movements, it also helps make each of us vividly aware of the options. In short, diversity is morally advantageous.

 


Universalizability

Doubtless someone will note that my account ignores or even undermines universalizability. Universalizability (some call it generalizability) is often forwarded as a central tenet of morality: "What is right or wrong for one person must be right (or wrong) for any person in similar circumstances."[6] It is, Mackie tells us, "in some sense, beyond dispute."[7] Philosophers forward it, not as one moral rule among many, but as a meta-rule any possible morality must satisfy. Yet my proffered conception apparently runs afoul of universalizability.

Certainly it conflicts with most typical descriptions of it. In the previously discussed example, I argued that moral duties can vary because of specific personality traits. For instance, it is sometimes morally proper for two people to relate differently to the same person in the same circumstances. Thus, one might justifiably criticize her for the same behavior another praises or at least tolerates. Or within large-scale movements, it is morally proper for two people to act differently. In neither the personal relationship nor the political/social movement should everyone feel morally compelled to act in identical ways. If they did, the friend -- or the movement -- would suffer.

Universalizability apparently rules such divergence out. Though it does not outlaw variant behavior, it does require that deviations be based on general features of the situation so that others in like circumstances should act similarly.[8] Thus, the principle forbids any deviation in one's moral duties because of "variable inclinations"[9] or "generic differences between persons."[10] "The class of persons alleged to be an exception to the rule cannot be a unit class."[11] Yet that is exactly what my view countenances. Because of Jack's particular character and relationship to Jill, Jack's moral duties to Jill may be unique -- he may constitute a unit class.

The principle further conflicts with my observations about mass movements. Presumably the principle would countenance only one approach to the Civil Rights Movement. Yet if my earlier analysis was correct, divergence was essential to the success of the movement.

Those enamored with universalizability would say so much the worse for my view; I would say: so much the worse for universalizability. The phenomena I am describing are familiar to morally sensitive individuals. If universalizability conflicts with these phenomena, that indicates the deficiencies of the principle, not vice-versa.

Doubtless some defenders of universalizability might argue that the principle is compatible with these phenomena. They might contend the differences cited are morally relevant in the sense required. Thus, Jill might properly relate differently to Jack than to John. But anyone like Jill should relate to him the same as she does.

This response will not do. For someone might have the same personality and character traits as Jill, yet have a different history with Jack, and on that basis alone, should relate differently to him. We recognize the moral relevance of personal histories, but the principle of universalizability cannot countenance them. If they are permissible then everyone would constitute a unit class; the principle of universalizability might be true but trivial. On the other hand, if only some are morally relevant, there must be some basis other than universalizability for determining relevance. Universalizability would likewise be trivial inasmuch as it fails to help us make moral decisions.[12]

Despite these limitations, we should retain universalizability. The principle properly emphasizes the need to reason about morality. Even if we cannot provide an algorithmic procedure explaining why some features are relevant or especially weighty, we should morally and rationally evaluate our actions. Moreover, it emphasizes the centrality of fairness: We should not tailor moral principles for our own selfish interests. These are important lessons we should not forget. However, we should recognize that the principle gives us no substantive moral guidance.

 


The nature of language

We can better understand the senses in which ethics is relative if we compare it with a familiar practice which is similarly relativistic: language. We have general rules of grammar, standards for proper prose. Sentences should have a noun and verb; the subject and verb should agree in number; a pronoun should refer univocally to its antecedent; prose should be clear and concise. These express sage wisdom to the novice, continued guidance to the veteran. Although these principles distinguish powerful prose from ineffectual scribbling, they are not algorithms. In no context can they tell us which, of all the possible sentences or phrases, is most appropriate. To use E. B. White's example, no rule explains why Thomas Paine's "These are the times that try men's souls" is preferable to grammatically acceptable alternatives.[13] What is clear, however, is that only someone who knows grammatical rules could pen such a gem.

Rules of grammar and style not only fail to distinguish minimally acceptable prose from masterful style, occasionally they offer direction we should ignore. The best writers do: Some occasionally use grammatically incomplete sentences. Others may employ seemingly awkward or circuitous prose. Although there are no precepts specifying when these deviant forms are appropriate, the overriding aim of effective communication legitimizes them. But this is not to make "effective communication" a hierarchical rule which adjudicates between competing grammatical rules. To say we wish to communicate effectively merely states our aspirations; it does not prescribe a procedure for realizing those aspirations. We have no precise account of effective communication -- that is likewise amorphous. The stylist helps determine, albeit tentatively, what effective communication is and thus, what good grammar and forceful style are.[14]

All this seems obvious once we recall how grammar evolved. No one emerged from the primeval slime imploring us to be clear or warning us about misplaced modifiers. It was millennia after our ancestors began speaking that they even knew what a modifier or clarity was. These notions emerged from their attempts to communicate.

Doubtless something like this happened: astute speakers realized communication was hampered by inappropriately placed modifiers. (Though if language had developed differently we might not have had modifiers, let alone misplaced ones.) They informed other speakers of their "discovery." When enough people discerned the wisdom in their observation, a convention outlawing misplaced modifiers emerged. Nothing mysterious.

Rules arose because people thought they served important communicative functions; they may be discarded or modified if later discoveries suggest otherwise. For instance, it is no longer strictly forbidden to split an infinitive. Modifications occur when enough people realize blind adherence to a rule is undesirable.

We developed language to enhance communication; we change it to make communication more effective. But this does not imply that language usage is subjective or a matter of personal whim. We may not speak just any way we wish -- at least not if we wish to communicate. Of course if language had evolved differently, what we consider good grammar and eloquent prose would differ. But that's not relativistic, at least not noxiously so. It merely recognizes the mundane truth that a different array of rules could also enable us to communicate effectively. No one language is indisputably superior to all contenders.

In summary, 1) language developed to enhance communication. 2) Although there are limits on how language could have evolved, no language is privileged. 3) No set of linguistic rules covers all cases; nonetheless 4) knowing those rules is vital for effective communication. Finally 5) we can debate the wisdom of rules of grammar: we can determine when it is reasonable to ignore those rules; we can decide if the rules no longer serve their original purposes and therefore ought to be discarded.

As it turns out these are just the senses in which ethics is relative without being subjective. Both ethics and language usage face "questions" with patently obvious answers. For ethics those may be: "Should I beat my children for exercise?" Or "should I yell at my neighbor to boost my ego?" For language usage it could be: Is the sentence "My teeths badly hurted yesterday" grammatically proper? Or, is the sentence "The really very good story was nice and rather interesting" illuminating? Other questions about prose and ethics are seemingly intractable. Some are so complex that they elude pre-established categories. Relevant features of the case do not fall neatly under the rubric of extant rules. Or, even when we are moderately certain which rules are relevant, we may remain unable to deduce the preferable action. Different writers follow different rules and, within a certain range, all communicate effectively. Different people act differently, yet all may act morally.

Such indeterminacy does not render language usage subjective. Neither need it do so for ethics. Both types of judgments are fallible, but fear of error or subjectivism should not drive us to embrace a rigid set of grammatical rules or a simplistic morality. We should appreciate and understand the ways in which language and ethics are relative, while recognizing that they are still subject to rational scrutiny.

 


A Rational Relativistic Ethic

It is high time that I explain in more detail what I mean by a rational but relativistic ethic. That is easier said than done. Given the way the debate has been framed in the past, most people assume that we must either embrace a rigid absolutism or else run headlong into the arms of those who say ethics is non-rational.

These are not our only options. Nonetheless, it is difficult to specify what a non-traditional ethic would look like; it is difficult to explain how one could reason about ethics once we have abandoned the traditional conception. Although alternatives proposed by Schneewind, Altman, Pincoffs (all cited earlier), and others[15] strike a responsive chord, they initially seem unacceptably vague.

I suspect they seem vague, however, because we are subconsciously wedded to the model manufactured and sold by modern philosophy. We assumed ethics needed the seal of certainty, else it was non-rational. And certainty was to be produced by a deductive model: the correct actions were derivable from classical first principles or a hierarchically ranked pantheon of principles. This model, though, is bankrupt. We must abandon it and begin to think about ethics differently.

I suggest we think of ethics as analogous to language usage. As my previous analysis suggested, there are no univocal rules of grammar and style which uniquely determine the best sentence for a particular situation. Nor is language usage universalizable. That a sentence or phrase is warranted in one case does not mean it is automatically appropriate in like circumstances -- unless "like" is so circumscribed that no situation is like another. Nonetheless, language usage is not subjective.

This should not surprise us in the least. All intellectual pursuits are relativistic in just these senses. Political science, psychology, chemistry, and physics are not certain, but they are not subjective either. As Shapere puts it, science "involves no unalterable assumptions whatever, whether in the form of substantive beliefs, methods, rules or concepts." Everything is up for grabs, including the notions of "discovery" and "understanding."[16]

As I see it, ethical inquiry proceeds like this: we are taught moral principles by parents, teachers, and society at large. As we grow older we are exposed to competing views. These may lead us to reevaluate presently held beliefs. Or we may find ourselves inexplicably making certain valuations, possibly because of inherited altruistic tendencies.[17] We may "learn the hard way" that some actions generate unacceptable consequences. Or we may reflect upon our own and others' "theories" or patterns of behavior and decide they are inconsistent. The resulting views are "tested"; we act as we think we should and evaluate the consequences of those actions on ourselves and on others. We thereby correct our mistakes in light of the test of time.

Of course we may not like such a ragtag process. We may yearn for the "good ole days" when we thought our ethical principles had the stamp of certainty, when we thought we had a foolproof univocal procedure for determining right and wrong. But those days, like the noumenal world, are well lost.[18] They are mere dreams, flights of philosophical fancy. It is time to grow up, to recognize that certainty is not on the menu -- nor was it ever.

That should not worry us. For if certainty is not on the menu neither is full-blown relativism. Of course people make different moral judgments; of course we cannot resolve these differences by using some algorithm which is itself beyond judgement. We have no vantage point outside human experience where we can judge right and wrong, good and bad. But then we don't have a vantage point from where we can be philosophical relativists either.[19]

We are left within the real world, trying to cope -- with ourselves, with each other, with the world, and with our own fallibility. We do not have all the moral answers; nor do we have an algorithm to discern those answers. Neither do we possess an algorithm for determining correct language usage, but that does not make us throw up our hands in despair, convinced we can no longer communicate.

If we understand ethics in this way, we can see, I think, the real value of ethical theory. Ethical theory is important, although in ways different than many people suppose. Some people talk as if ethical theories give us moral prescriptions. They think we should apply ethical principles as we would a poultice: after diagnosing the ailment, we apply the appropriate dressing.

But that is a mistake. No theory provides a set of abstract solutions to apply straightforwardly. But then, I doubt if most ethical theorists ever thought they did. Ethical theories are important not because they solve all moral dilemmas but because they help us notice salient features of moral problems and help us understand those problems in context. They help us see problems we had not seen and understand problems we had not understood, and thereby empower us to make informed moral judgments, judgments we could not have made without an appreciation of moral theories. In that respect ethical theories and grammar serve similar functions: good grammarians may not be effective communicators; however, a grasp of grammar empowers us to communicate effectively.

Thus, we should instruct each other in the basic principles inherited from the past (respect for persons, reverence for human life, etc.) and act upon those as circumstances warrant. Then, we must listen and talk. We must non-defensively hear others' evaluations of our actions and non-condemnatorily offer reactions to theirs -- all the while acknowledging our and their fallibility. When certain actions seem especially horrendous to many (e.g., murder) we should legally prohibit them. Less obvious harms we will leave to the arena of ideas. In short: I only urge that we replicate our procedures for language usage. This, of course, puts a burden on everyone to evaluate inherited "moral wisdom" as well as our own actions. And it demands that we govern our behavior in accordance with what we find. But isn't that exactly the central theme of philosophy?[20]

 



NOTES

[1] J. B. Schneewind, "Moral Knowledge and Moral Principles," Revisions, ed. S. Hauerwas and A. MacIntyre (Notre Dame: University of Notre Dame Press, 1983), argues these are erroneously thought to be essential requirements of any ethical system.

[2] R. Brandt, Ethical Theory (Englewood Cliffs, NJ: Prentice-Hall, Inc., 1959), p. 272.

[3] Bernard R. Berelson et al., "Democratic Practice and Democratic Theory," Voting (Chicago: University of Chicago Press, 1954), has convincingly argued this is likewise true of citizen participation in democracies. We criticize those who fail to participate in democratic decision making, yet a country could not survive, he claims, with full participation; the system would be unmanageable. The traditional conception fails even here.

[4] "Applied Philosophy Misapplied," in The Applied Turn in Contemporary Philosophy, ed. N. Rescher et al. (Bowling Green, OH: Bowling Green State University Press, 1983). The following themes were anticipated there.

[5] There are important ways in which it sounds as if Edmund Pincoffs advocates this view in "Quandary Ethics," Revisions. There he defends the relevance of personality and ideological differences to ethical decisions, but seems to limit this relevance only to moral issues which, in some important way, go beyond the call of duty. That is right as far as it goes. But it just doesn't go far enough.

[6] M. Singer, Generalization in Ethics (New York: Atheneum, 1971). Similar views are echoed by most writers, including: W. Frankena, Ethics (Englewood Cliffs, NJ: Prentice-Hall, Inc., 1973), p. 25; A. Gewirth, Reason and Morality (Chicago: University of Chicago Press, 1978), p. 105; and Brandt, p. 21.

[7] J. L. Mackie, Ethics: Inventing Right and Wrong (New York: Penguin Books, 1977).

[8] Singer, p. 17; R. M. Hare, Freedom and Reason (Oxford: Oxford University Press, 1963), p. 107; Gewirth, p. 111.

[9] Gewirth, pp. 106, 170.

[10] Mackie, p. 97.

[11] Singer, p. 87.

[12] This is in stark contrast to an earlier paper, "Moral Kinds and Natural Kinds," Journal of Value Inquiry (Spring 1981). There I assumed universalizability was a fundamental moral notion.

[13] The Elements of Style (New York: Macmillan Publishing Co., 1959).

[14] The situation parallels the pragmatists' concepts of "coping." What is true, according to the pragmatist, is that which enables us to cope. But coping is not, then, some new epistemological notion to replace "correspondence with reality." It, too, is pragmatically determined. See Andrew Altman's "Pragmatism and Applied Ethics," American Philosophical Quarterly 20 (April 1983).

[15] Consult the majority of articles in Revisions, particularly Bergmann's and Murdoch's. Also see Ted Benditt's discussion in Rights (Totowa, NJ: Rowman and Littlefield, 1982).

[16] "The Character of Scientific Change," pp. 52-54, as yet unpublished.

[17] The latest scientific defense of a biologically based altruism comes from Edward O. Wilson's Sociobiology (Cambridge, Mass: Harvard University Press, 1975) and On Human Nature (New York: Bantam Books, 1982).

[18] Richard Rorty, "The World Well Lost," Journal of Philosophy LXIX (1972), 649-65.

[19] Rorty (ibid.) and Donald Davidson, "On the Very Idea of a Conceptual Scheme," Proceedings of the American Philosophical Association 17 (1973-4), 11.

[20] I would like to thank Joel Feinberg, Robert Evans, and Eva LaFollette for helpful comments and criticisms of previous drafts of this paper.