The Moral Wager: Evolution and Contract by Malcolm Murray (Philosophical Studies Series: Springer) illuminates and sharpens moral theory by analyzing the evolutionary dynamics of interpersonal relations in a variety of games. We discover that successful players in evolutionary games operate as if following this piece of normative advice: Don't do unto others without their consent.
From this advice, some significant implications for moral theory follow. First, we cannot view morality as a categorical imperative. Second, we cannot hope to offer rational justification for adopting moral advice. This is where Glaucon and Adeimantus went astray: they wanted a proof of the benefits of morality in every single case. That is not possible. Moral constraint is a bad bet taken in and of itself. But there is some good news: moral constraint is a good bet when examined statistically. Murray’s game-theory ethics offers a practical calculus for a more nuanced use of contract theory in the development of moral norms. His theory is less compelling when he attempts to account for altruism within this evolutionary nexus. It is hoped that Murray’s analysis of relativism becomes widely known, as he avoids the extremes of both an unnecessary subjective nihilism and moral objectivism.
In this study Murray offers an evolutionary account of
morality which depends upon the instrumentality of contractarian
consent theory. Usually contractarianism
claims that moral norms derive their normative force from the idea
of contract or mutual agreement. For Murray, game theory draws attention to the evolution of morality as a resolution of interpersonal conflicts under strategic negotiation. The emphasis on
strategic negotiation is what supports the idea of consent. Consent
theory differs from other contractarian models by abandoning an
artificial need for rational self-interest in favor of evolutionary
adaptation models. From this, more emphasis is placed on consent as
natural convergence rather than consent as an idealization.
Murray’s version of contractarianism, then, ends up looking more like the relativist model offered by Harman than the rational (or pseudo-rational) model offered by Gauthier, let alone the Kantian brands of Rawls or Scanlon. Much of Murray’s
discussion dwells on why it is no loss to abandon hope for the
universal, categorical morality that rational models promise. (Murray
has an ally for this position in Bernard Gert’s theory of morality, though he does not draw an explicit connection to it; he cites Gert only in passing.)
Murray introduces his approach to moral theory by offering a betting analogy that sanctions the rest of the picture. There are
some bets where the expected utility is positive, though the odds of
winning on this particular occasion are exceedingly low. In such
cases, one cannot hope to give an argument that taking the bet is
rational. The only thing one can say is that those predisposed to
take this kind of bet on these kinds of occasions will do better
than those with other dispositions, so long as such games occur
often enough. The lure of morality is similar. Moral constraint is a
bad bet taken in and of itself, but a good bet when examined
statistically. The game of morality occurs whenever strategic
negotiation takes place, and since this occurs often enough for
social creatures such as us, an attraction for moral dispositions
exists.
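The point can be made with a simple illustrative wager; the numbers are mine, not Murray's. Suppose a ticket costs 1 unit and pays 150 units with probability 0.01. The expected utility of buying it is 0.01 × 149 + 0.99 × (−1) = 1.49 − 0.99 = 0.50, which is positive, yet on any single play the buyer will almost certainly lose. One cannot show that taking this bet is rational on this particular occasion; one can only observe that agents disposed to take such bets, whenever they arise, do better on average than agents who refuse them.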
By analyzing the evolutionary dynamics of interpersonal
relations in a variety of games, one discovers that strategies
reaching equilibria have something in common: such strategies are
prone to conditionally cooperate with other cooperators. Moral
advice, then, becomes "imitate these strategies in your dealings."
Successful players in evolutionary games operate as if following the
normative advice: Don't do unto others without their consent. More
fully: Don't do unto others without their consent, or expected
consent, or what they would consent to if they were capable of
giving consent, so long as these others abide by this same norm. The
evolutionary success of strategies that operate as if following this
advice is what gives it its normative force.
If the principle of consent is the best semantic representation of evolutionary dynamics, a number of implications for moral theory follow.
First, Murray points out that one cannot view morality as an unconditional strategy. Moral theory cannot tell us what to do independently of what other agents are doing. This will make it difficult to view morality as a categorical imperative. Hypothetical imperatives bespeak conditional strategies; categorical imperatives do not. Replicator dynamics do not favor
unconditional strategies.
Secondly, we cannot hope to offer rational justification
for adopting the moral advice. Evolutionary dynamics track
successful strategies over time and across generations, whereas
rationality demands utility maximization in the current situation.
Generally people who are moral tend to do better than
people who are not. Saying "generally" here is an admission that
being moral is not demonstrably rational in all cases. That is, it
is justified as a statistical claim, yet not justifiable as a claim
about a particular instance of moral action. Something similar is
going on with moral advice: it cannot be justified in the particular
instance, but can be justified statistically.
The evolutionary bet behind morality is that conditionally cooperative agents will do better than both unconditionally cooperative agents and unconditionally uncooperative agents, given suitable conditions. An
affinity to Hume lurks here. Hume said that reason cannot back our
conviction that the sun will come up tomorrow but, fortunately for
us, nature foists the belief upon us anyway. Something similar —
with an important qualification — may be said about our moral
beliefs. Moral actions cannot rationally be justified, but
fortunately for us, certain moral beliefs are thrust upon us whether
we like it or not. There are differences between the two cases, to
be sure. We cannot help but believe in causation, induction, and in
the existence of external objects independent of our senses, whereas
we can avoid belief in morality.
Perhaps the odds of social success for the outright immoral
are low, but belief in morality is not forced the way belief in
causation is forced. Another, though related, difference concerns
the wide uniformity in belief in one's senses across cultures; the
uniformity is less wide concerning moral beliefs across cultures.
These differences do not matter for our purposes, however. Although
neither moral beliefs nor belief in our senses can be rationally
justified, they both have evolutionary fit.
This insight into the fortunate nature of morality is not
to be taken too far. Once we pay closer attention to the statistical
utility of morality, we can begin to tease apart evolutionary
benefits from cultural exaggerations. What people mean by morality
and what aspect of morality is evolutionarily beneficial are not
necessarily the same. For instance, Murray argues that although people
normally assume morality must be categorical, they are mistaken. In
response to the error that people are in about morality, Murray does
not offer an error-theory. Error theorists maintain that people are
in error about morality, but that they understand morality the only
way one can. For example, one cannot believe in witches without also
believing in supernatural powers. To dismiss the existence of
supernatural powers is to dismiss the belief in witches. We do not
modify witch talk to take into account the non-existence of
supernatural powers — we abandon witch talk entirely. But one can be
in error about morality in less fundamental ways — even if the view
held in error is deemed fundamental by those holding the erroneous
view. After all, many theists hold that God is fundamental to
morality, but rejecting belief in God does not mean we must reject
belief in morality. That those very theists would think so is not
substantive to the argument.
Murray offers a partial-error theory. Moral categories have
use, and forming moral heuristics will also have use, but people's estimations of morality drawn from those heuristics are where they go astray. They have overextended the heuristic. By highlighting the
mechanics of how morality has use in terms of evolutionary fit, we
highlight all that we are entitled to say about morality.
An assumed implication of naturalized ethics is the
collapse of morality into an absolute relativism. If moral discourse
cannot be rationally justified, then neither can any moral discourse
be rationally criticized. If nothing is right, nothing is wrong.
Whatever one's view of morality is, it must define right from wrong,
and so any theory that collapses to simple relativism is merely an
obfuscated confession of moral nihilism. Although the naturalized,
non-rational picture of morality that Murray presents is
relativistic, it is not the crass relativism that collapses into
nihilism. Hume used the word "fortunately" and that word is key.
Much of our understanding of morality is mistaken, but not all. The
common bits about moral discourse turn on conflict resolution of
certain strategic interactions. Although it may not be individually
rationally justifiable for agents to adopt moral dispositions,
evolution favors agents who adopt a conditional cooperative strategy
when faced with conflicts of strategic interaction. Some might be
tempted to say that conditional cooperative strategies are justified
to the extent that such strategies are evolutionarily stable. But
this is not the right way of looking at things, as the bet analogy
above demonstrates. The justification cannot be aimed at the
individual actor. There is no justification for her to be moral when
being immoral pays her greater dividends in the particular case. The
"justification" if it can be called that, is aimed more at the level
of statistical trends. We can explain why morality is prevalent
despite its unjustifiability on individual terms. Being moral cannot
be rationally justified, but saying this does not commit us to grope
for some non-natural property accessed by opaque intuition
faculties.
Murray’s endorsement of evolutionary ethics commends a form
of reductive natural irrealism. To be a moral realist is to believe
that discourse about ethics is a discourse about objective moral
properties and facts. One may believe that discourse about morality is necessarily discourse about objective moral properties and facts, yet hold that there are no such things; such a person is an error theorist. Those moral realists who are not error theorists believe these objective moral properties exist in fact. Some of these moral realists, like G. E. Moore, believe these moral properties are non-natural; others, like Paul Bloomfield, believe they are natural.
An irrealist denies that moral discourse requires the
belief in objective moral properties, and denies the existence of
objective moral properties or facts. Some irrealists, the non-cognitivists, go further and deny that moral discourse has any propositional content at all. Other irrealists allow that moral discourse may be true or false; they simply deny that the truth-value of moral discourse has any
connection to objective natural or non-natural properties. Of this
latter group, a further distinction may be made, that between
reductionists and non-reductionists. Non-reductionists like Mark
Timmons and Gibbard assert that although moral discourse can be true
or false, moral discourse is still purely evaluative.
Murray’s position is that moral discourse fully reduces to
natural descriptive facts of the world. The bits commonly held to be
left over are considered eliminable. The error lies in holding on to
those bits; not in failing to accommodate them. Such a move permits
a form of relativism in moral appraisal. It also precludes speaking
of the natural facts and properties as being themselves moral facts
or properties. Thus, Murray fashions a reductive naturalist irrealism that avoids granting unnecessary ontic status to moral considerations as such, and that also avoids a bald relativism toward any calculated behavior. Murray also distinguishes well-being discourse
from moral discourse, explaining that moral discourse is the main
subject of the calculations in this study. Morality, Murray
announces, concerns interpersonal relations, not intrapersonal
relations.
Murray demonstrates how much normal moral discourse is
flawed. Normal moral discourse following Immanuel Kant holds that
morality is fundamentally a categorical imperative. This is wrong on
two counts: it is not a categorical imperative, nor is it
fundamentally so. That morality need not be fundamentally viewed as
categorical distinguishes his position from Richard Joyce's. Joyce
holds that if we jettison categoricity, we automatically become
error theorists. Murray stands out in claiming that a partial-error theory is possible, and that the all-or-nothing position on categoricity obscures some practical considerations.
Murray examines the implications of understanding morality
as a system of hypothetical imperatives. If moral advice is
hypothetical, it must be predicated on something. The standard
offering is self-interest. Critics generally argue that we cannot draw morality out of self-interest. Murray argues instead that the role of self-interest is largely misinterpreted. It is only within the confines of interpersonal interaction that self-interest comes into play. To rely too heavily on self-interest simpliciter would
mean that morality has more to do with personal wellbeing, and less
to do with a social convention geared to resolving conflicting
interests. Morality's role is not to serve preferences, but to solve
interpersonal preference conflicts. As a solver of interpersonal
preference conflicts, morality must be non-partisan to preferences
themselves.
The rejoinder that solving interpersonal preference
conflicts is a necessary, but not sufficient, condition of morality
is discussed later. Murray now draws attention to how evolutionary
game theory is well designed to track the success of moral
strategies — no matter what one's preferences are.
Murray argues against justifying morality in terms of
rationality. Continuing to invoke game theory Murray presents a few
simple tables, and some elementary game theoretic calculations.
Specifically, he highlights David Gauthier's argument that
conditional cooperation (CC) in Prisoner's Dilemmas (PDs) is more
rational than unconditional defection (UD). Importantly, if morality
is rational, we can answer Hobbes's Foole in terms the Foole should
be able to endorse. Murray asserts, to the contrary, that the CC strategy is not rational. To begin with, the success of CC depends on the robustness of the conditions under which CC prevails.
For CC to prevail, the following conditions must be met.
The reply to (1) depends on our accepting the replies to (2), (3), and (4), and the replies to (2), (3), and (4) depend on our accepting the reply to (1). Enough CCs can get into the mix only if the computation and detection costs are not penal, or so long as there are enough CCs already in the mix, or so long as certain other kinds of dispositions are not in the mix. Assuming even minimal computation and detection costs, those initial CC agents would have to be irrational: they would do worse than they would as UDs.
CC's rationality is predicated on the irrationality of the
first CC agents. Beyond this, even within the parameters that
Gauthier sets for us, Gauthier requires a broader understanding of
rationality than we may be used to. As Ken Binmore notes, if it is
rational to cooperate in a PD, that shows merely that it was not a
PD. The PD is defined in such a way that it is always irrational to
cooperate with another cooperator when unilateral defection will
earn the defector greater individual utility. That rational move
(defecting against a cooperator) is precisely what CC agents prevent themselves from doing. Put positively, a permissible act is one to which those affected voluntarily consent, or can reasonably be predicted to voluntarily consent. Put negatively, any act that negatively affects others who have not voluntarily agreed to being so affected is an immoral act. Loosely, this may be abbreviated to the following pocket principle: Don't do to others without their consent.
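To make the structure Binmore appeals to concrete, here is a standard Prisoner's Dilemma payoff matrix; the particular numbers are illustrative and not Murray's (each cell lists the row player's payoff, then the column player's):

                 Cooperate    Defect
    Cooperate      3, 3        0, 5
    Defect         5, 0        1, 1

Whichever move the other player makes, defection pays the individual more (5 rather than 3 against a cooperator, 1 rather than 0 against a defector). That is why unilateral defection against a cooperator is the "rational" move that CC agents forswear.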
Murray believes that David Gauthier's argument is the best
defense of rational morality in the history of ethics, and that
is why he has focused his attention on it.
Once we admit that morality is a system of hypothetical
imperatives, we at the same time admit that there may be conditions
under which it may be more rational, more in your interest, to not
be moral. To try to argue otherwise is to deny the hypotheticity of
morality.
Nevertheless, there is something right in the game
theoretic approach. The problem is that we cannot look at game
theory as offering normative advice about what to do in a given
situation. But it can show that moral strategies are robust in a
wide range of situations, and that moral strategies have
evolutionary fit. But if morality has evolutionary fit, might we be
tempted to say that conditional cooperative strategies (so long as
that counts as moral behavior) are justified, after all? Those who
are moral do better than those who are not?
Evolutionary modeling can show that it is statistically
better to be moral than immoral, but this cannot provide the formal
deductive proof that the Foole demands.
Murray provides an evolutionary picture of how moral
dispositions can thrive. Evolutionary models of ethics show how
despite their irrationality, moral agents are more likely to pass on
their genes than non-moral agents. Murray introduces a few other
games, the Ultimatum Game, the game of Chicken, Battle of the Sexes,
and a new invention of his own, the Narrow Bridge Game, which may be
viewed as a variant on Chicken. Also, Murray brings in the formula
for replicator dynamics.
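Murray's exact formulation is not reproduced in this review, but in its standard textbook form the replicator dynamic reads

\[ \dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right), \qquad \bar{f}(x) = \sum_j x_j f_j(x), \]

where \(x_i\) is the share of the population playing strategy \(i\), \(f_i(x)\) is that strategy's expected payoff against the current population mix, and \(\bar{f}(x)\) is the population-average payoff. Strategies earning above-average payoffs grow in frequency; those earning below-average payoffs shrink.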
By examining different evolutionary games, we discover a
common element in successful strategies. Agents employing rational
strategies tend to be those who cannot do well against their own
kind. Paradoxically, the
more success they have, the worse they will fare.
Conversely, successful agents are those who have the ability to
cooperate among their own kind (and defect against others). In the
short run, admittedly, they do worse compared to strictly rational
agents, but given the paradox of success, agents employing
conditionally cooperative strategies prevail.
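A minimal simulation, mine rather than Murray's, makes the paradox of success vivid. It runs the discrete replicator dynamic over a one-shot Prisoner's Dilemma played by unconditional cooperators (UC), unconditional defectors (UD), and conditional cooperators (CC), where CC pays a small assumed detection cost to recognize whom it faces.

# Toy replicator-dynamics sketch with illustrative payoffs (not Murray's own model).
# Payoffs: T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0
COST = 0.1  # assumed cost CC pays, per interaction, to detect its partner's type

STRATS = ["UC", "UD", "CC"]
# payoff[i][j] = payoff to a player of type i when matched with a player of type j
payoff = [
    [R,        S,        R       ],  # UC cooperates with everyone
    [T,        P,        P       ],  # UD defects; CC defects back, so UD gets only P from CC
    [R - COST, P - COST, R - COST],  # CC cooperates with cooperators, defects against UD
]

def step(shares):
    # One generation: each strategy's share grows in proportion to its fitness
    # relative to the population average.
    fitness = [sum(payoff[i][j] * shares[j] for j in range(3)) for i in range(3)]
    mean = sum(shares[i] * fitness[i] for i in range(3))
    return [shares[i] * fitness[i] / mean for i in range(3)]

shares = [0.05, 0.80, 0.15]  # start with defectors dominant
for gen in range(51):
    if gen % 10 == 0:
        print(gen, {s: round(x, 3) for s, x in zip(STRATS, shares)})
    shares = step(shares)

Started in a population dominated by defectors, the defectors crash within a few dozen generations and the conditional cooperators take over; run much longer, the unconditional cooperators slowly drift back, which is one way of seeing why the conditions Murray worries about (detection costs, enough CCs already in the mix, the absence of certain other dispositions) matter.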
After examining the underlying mechanism of successful
strategies in evolutionary games, Murray defends evolutionary ethics
against two sets of problems. The first concerns problems inherent
in the modeling parameters. Of the modeling objections, he considers three. The second set of problems is more general. Of these, Murray recognizes four.
Murray extrapolates a normative ethical principle from
examining the common features of successful strategies in
evolutionary games by considering consent. Once we pay closer
attention to the statistical utility of morality, we can begin to
tease apart evolutionary benefits from cultural exaggerations. What
people think they mean by morality and what aspect of morality is
evolutionarily beneficial are not necessarily the same. As
highlighted in game theory, and given the rejection of categorical
moralities, the bare normative advice that is consistent with the
games examined would be something like: "Be a conditional
cooperator!"
In brief, game theory helps to draw attention to the
evolution of morality as a resolution of interpersonal conflicts
under strategic negotiation. It is this emphasis on strategic
negotiation that supports the idea of consent.
Consent theory is a brand of contract theory. Murray distinguishes it from contractarian and contractualist theories. There are two basic differences. One follows from abandoning reliance on rational self-interest in favor of evolutionary adaptation. The other difference is that, unlike other contract theories, consent theory emphasizes the primary role of consent. Moral constraint against coerced agreements cannot be derived from any ex ante agreement, for the very concept of agreement must presuppose such constraint. To highlight this circularity, Murray constructs the game of Proposal, which is played prior to entering the prisoner's dilemma. This game is presented as a simple tree and employs Zermelo's backward induction. Again, this is a simplified presentation geared to the non-specialist. Its point is not to eliminate ex ante agreement, but to illustrate how the ex ante agreements that contractualists and contractarians speak about already presume the more basic principle of consent: namely, that any act is moral only so long as all concerned, suitably informed, competent agents agree.
(This little principle needs much unpacking to accommodate
issues of competency, proxy consent, surprise parties, positive
duties, and the like, much of which is discussed in the concluding
chapters of this study.)
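Murray's Proposal tree is not reproduced in this review, so the following generic sketch of Zermelo backward induction uses an invented two-move tree; the labels and payoffs are hypothetical and stand in for whatever the actual game specifies.

# Generic backward-induction solver for a finite two-player game tree (illustrative only).
# Leaves carry (payoff to player 0, payoff to player 1); internal nodes name the
# player to move and list labelled children.

def solve(node):
    # Return the (payoffs, sequence of moves) reached when each player, working
    # back from the leaves, picks the branch that is best for herself.
    if "payoffs" in node:                      # leaf: nothing left to choose
        return node["payoffs"], []
    mover = node["player"]
    best = None
    for move, child in node["children"].items():
        payoffs, line = solve(child)           # solve the subgame first
        if best is None or payoffs[mover] > best[0][mover]:
            best = (payoffs, [move] + line)
    return best

# Hypothetical tree: player 0 proposes an agreement or not; player 1 accepts or declines.
tree = {
    "player": 0,
    "children": {
        "propose": {
            "player": 1,
            "children": {
                "accept":  {"payoffs": (2, 2)},
                "decline": {"payoffs": (1, 1)},
            },
        },
        "no proposal": {"payoffs": (1, 1)},
    },
}

print(solve(tree))  # -> ((2, 2), ['propose', 'accept'])

Working back from the leaves, each player selects the branch that is best for herself given what will be chosen afterwards; that is all Zermelo's procedure amounts to.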
To emphasize the circularity inherent in consent theory is
to expressly admit that the normative advice of consent theory lacks
full rational appeal, but such an observation will be uninteresting
for those who have accepted the arguments Murray has already put
forth. The normative principle of conditional consent is not a
matter of rational choice. Rather, the concept is a well enmeshed
phenotype: it has evolutionary fit. The root of contract theory is
not that agents consent to moral constraints, but that in order for
mutual benefit to arise from strategic negotiations, they must
presuppose the binding force of consent.
Contract theories suffer from an incoherence problem.
Modern solutions move to a hypothetical domain: people would agree
to abide by morality ex post once they were suitably situated in an
ex ante position. Such maneuvers do not highlight consent, however.
Any decision at the ex ante level is purely parametric. But
evolutionary game theory shows that moral behavior evolves as a
solution to strategic interaction, not parametric choice.
Besides that, any ex ante decision must already presuppose
the normative force of consent. Contractarian analyses portray
agents bargaining on principles of justice and these principles are
justified by the fact that this is what rational agents would
endorse in suitable circumstances. But have these idealized agents
agreed on the procedure that moral matters will be determined by the
principle of consent? If not, is the resultant agreement useless?
And even if they had, would such an agreement be coherent? After
all, to agree on this, agents would have to presuppose that consent
is binding: the very thing on which they are supposedly agreeing.
That is, the doctrine of consent needs to be itself already presumed
as the linchpin of morality in order for the ex ante machinery to
get off the ground. This is not explained in terms of moral
intuitions. Rather, Murray argues, it follows as part of the
evolution of fit strategies. Evolutionarily fit strategies across
disparate games are (1) conditional strategies that (2) can do well
if correlated with their own kind. This gets translated into the
principle of consent. Like Hume's reliance on our senses, the
principle of consent is not itself open to justification, for any
justification of any contract theory already presupposes the
principle of consent. The doctrine of consent may be put thusly: Any act is morally permissible if and only if all competent, suitably informed, concerned parties consent to it.
Now Murray discusses individual problems with consent
theory. First he emphasizes the importance of determining who counts
as a concerned party. If this is left undefined, consent theory will
provide useless advice. Murray notes that this problem has not received much attention to date. This is because few ethical theories
emphasize consent to the degree Murray recommends. But once we
define moral actions according to occurrent consent, we need to be
very clear whose consent we are talking about. It is not the case that the whining of just anyone off the street about an agreement made between people other than herself should be sufficient to render the agreement null or impermissible. The whining must be justified. The
dissenter must be affected in the requisite way in order to squelch
the agreements of others. The requisite way, typically, is conceived
in terms of harm. The appeal to the harm principle is supposed to
rule out busybodies who interfere in others' lives for no good
reason. That Barney does not like Betty's having a nose ring is not
thought to count as warranting the banishing of nose rings. The
concept of harm, however, is not clearly defined. Barney may really
feel "harmed" by people wearing nose rings, albeit in a non-physical
way. It would be simple if consent theorists could speak of only
"physical harms" when they speak of "harms," but this is not right.
Imposed psychological harms cannot be permissible in moral theory.
But since people have different psychological make-ups, what counts
as causing psychological harm will vary according to the person and
the situation. If someone is harmed by all sorts of things that
normal people are not, this individual must be deemed an unreliable
measuring rod of harm. Some criterion of normalcy is required, but
what should count as "normal harm" is itself too vague and too
flexible to be of much help. To distinguish consent that matters
from consent that does not matter, Murray offers the following
criterion:
Anyone suffering or expected to suffer physical adverse
effects is a concerned party. Otherwise, one is a concerned party
only so long as the affecter cannot fulfill her desires without use
of the affectee's person or property.
If you satisfy that condition on a particular occasion,
what you say matters. If you do not satisfy that condition on a
particular occasion, what you say about that occasion does not
matter morally.
For those who feel consent theory leaves out too much,
specifically the demand that we morally ought to want to help
others, a larger argument is required. Murray tackles the problem of
altruism. Although consent theory is well positioned to defend
negative duties, it seems to lack the ability to defend any positive
duties. If two people agree to row a boat, and this action does not
adversely affect others not party to the agreement, nothing could be
immoral about the act, or so consent theorists would avow. Many
strongly protest. It is often supposed that one's moral duty is to
give positive aid to the suffering. Two people who consent to row a
boat may be immoral, according to these objectors, if a third were
drowning and they were rowing the boat away from the victim.
Morality, it is claimed, must require more than mere agreements; in
fact, morality must impose strictures on the content of particular
agreements. If so, consent theory fails to ground morality.
Within the contractarian tradition, "harm" has been defined
in relation to a baseline. If an individual is made worse off than
she was, this counts as harm. The question before us, then, is
whether or not the drowning victim was harmed by the rowers' failure
to rescue her. If you are drowning with your wallet in your pocket,
your baseline is the state of drowning with your wallet. Thus,
failing to save you will not count as harming you since you will
still be drowning with your wallet. Taking your wallet before you
drown will be "harming" you for it worsens your state relative to
your baseline. Death creates complications to this line of thinking.
A drowned man's baseline is so low that nothing could worsen his
state. Thus, seemingly, taking a dead man's wallet may not be
morally inadmissible. Presumably, however, the contents of the
wallet go to the dead man's estate, in which case it would be
immoral to take the wallet, since it reduces the baseline of any
beneficiary. Fine, but accepting that we ought not take the drowning
woman's wallet in this case does not solve anything. If we are to
read consent theory as a simple principle of non-harm, then allowing
the person to drown in this case would be deemed morally permissible
by consent theorists. This does not answer the problem; it
exacerbates the problem. The complaint is not that consent theory is
inconsistent; it is that those who hold it are immoral, or, more
formally, that their theory of morality fails to capture morality.
Knowing why the two rowers are not technically harming the drowning
woman does not convince many that the two rowers are thereby moral.
Many hold it is "morally monstrous" that a moral theory will have
nothing to say to someone who allows another person to drown. Murray partly agrees, but his agreement is of a very qualified sort. First of all, consent theory is not reducible to a mere non-harm principle. This alone, however, does not solve the problem. By the criterion of a concerned
party offered above, the drowning person in our scenario still would
not count as a concerned party. Thereby neither her complaint nor
the complaints of those standing on shore watching the rowers row
away could morally count. What is also needed is a reminder that
consent theory is a semantic representation that approximates in
terms of a normative principle the evolutionary forces that propel
moral dealings. Evolution provides us with broad heuristics aimed at
capturing conditional cooperation in diverse situations. The
mechanics of heuristics are such that individual agents operate with
broad-stroked algorithms only, and some of our deeply held moral
convictions lie in the penumbra of these broad strokes. That is,
Murray defends the conventional praise of altruism as part of an
extended heuristic. We can admit altruism does not fit the mechanics
of replicator dynamics, except that replicator dynamics will favor
algorithms based on heuristics, and it is in the broad strokes of
these algorithms that altruism gets its foothold.
The success of altruism, at any rate, is not demonstrated
by the games examined initially. Those games demonstrate the success
of conditional agents. Altruism appears to be an unconditional
strategy. Nevertheless, the norm of altruism may well piggy-back on
a successful strategy. In this sense, we can explain the prevailing
norm of altruism as an overextended heuristic. To do this, Murray
borrows from the social theories of Robert Boyd and Peter Richerson,
who show the successful strategy of coarse-grained imitation. We
prefer to adopt successful behaviors, not unsuccessful ones, and
some behaviors which offer short-term gain are offset by long-term
loss. To avoid this error, imitation of successful agents of
previous generations has clear advantages, so long as the
environment is not too unstable. Imitation will tend to be
broad-stroked, not fine-tuned, and so some traits will be taken on
that are either not present in the original model, or not necessary
to the model's success. Murray argues that the social norms of
altruism develop through the overextended imitation of successful
phenotypes.
This line of argument carries more weight for those whose
reputations are more prominent. Prestigious members will be more motivated to guard their cooperative reputation, even by exhibiting
behaviors technically unnecessary for conditional cooperation, if
merely to thwart misconstrual, and thus defection, by others. The
fear of a marred reputation thereby moves the prestigious toward
overextension of cooperative behavior into fuzzy cases of altruism,
and the broad-stroked mechanics of mimicry move the plebeians to
imitate the overextended altruism of successful members, which in
turn coagulates into a norm.