I’m starting to warm up to the objectivist form of
act-consequentialism which Doug defended in the previous post (partly because I
think it lacks content). One worry people have is that this kind of
view severs the connection between what is right/wrong and how ordinary, good
people deliberate and advise one another. This argument has recently been made
forcefully by Uri Leibowitz in his ‘Moral Advice and Moral Theory’ paper (Phil
Studies). So, I want to explain this objection first, and then why
act-consequentialism (and many other monistic views) do not actually suffer
from this problem.
Let’s start with basic act-consequentialism: an act is right
iff it maximises utility. Leibowitz first claims that ethical theories such as
AC serve two roles. First, they either tell us what rightness/wrongness is, or
what makes actions right. Second, and more controversially, they are supposed
to guide judgment and action. So, we should be able to derive some general
moral advice from ethical theories. The relevant advice provided by the theory
should be something which a normal agent can use in deliberation, and it should
also be ‘helpful’ (the descriptions used in the advice should be substantial
enough). A defender of AC might contest this second role of ethical theories,
but I’m going to accept it at least for the sake of the argument.
Leibowitz’s argument begins from the claim that the
following is good moral advice:
(RC) Perform action A only if, after reflecting on and
deliberating about the normative status of A, you do not believe that A is wrong.
Of course, this might not be good advice for all agents.
But, as Leibowitz says, we can take rational, sensitive, and well-informed
agents (RSI-agents). If they follow (RC) – think before you act – then it is
arguably likelier that they will do the right thing (though there is a threat of
begging the question against AC here).
Leibowitz’s argument against AC is that AC cannot explain
why this would be the case. He lists ordinary first-order considerations which
RSI-agents would be likely to use in their deliberation. These include things
such as how the autonomy of others will be affected, whether others will be
harmed, whether one will be untruthful, and, perhaps surprisingly, whether bad
consequences will be brought about. The claim then is that there is no
connection between these factors and utility-maximisation. If thinking about
these considerations turned out to lead to utility-maximisation, this would at
best be a cosmic coincidence. So, RC could not be good advice if AC were right.
The distance between the criterion of rightness and the considerations used in
deliberation would be too big.
On the basis of this Leibowitz concludes: “To the best of my
knowledge, no one has yet offered any reason to think that, in fact, the
factors that RSI-agents consider when they reflect on and deliberate about the
normative status of actions are reliable indicators of the exemplification of
the property of utility-maximisation. Moreover, I doubt that we have any
evidence, not to mention overwhelming evidence, for the co-instantiation of
certain properties of actions that RSI-agents typically consider and the
property of utility maximisation. As a result, proponents of (AU) are poorly situated
in order to explain how it could be that the factors that agents consider when
they reflect on and deliberate about the normative status of an action are
morally relevant features of that action”.
I’m now going to do something which allegedly no one has
done before. Very few contemporary consequentialists talk about ‘utility’
maximisation. Rather, they usually talk about ‘value’ maximisation. There is a
simple reason for this. Most of them are value pluralists (whereas ‘utility’ is
often associated only with pleasure, happiness, or well-being). They think that
many different things can be good and bad. In fact, I think that as many things
can be good as it is fitting for an impartial spectator to value.
Modern consequentialists will then claim that (on the
axiology they have provided) autonomy is good (and undermining it is bad), that
harm is bad, that truthfulness is good and untruthfulness bad, and that bad
consequences are, of course, well, bad. Now, if we think that these things are
good and bad, then it is no surprise that reflecting on these things is
likely to make one act rightly – given that our theory says that it is right to
bring about as much goodness as possible. The only way to do this reliably is
to consider the many different good-making and bad-making considerations, just
as ordinary agents usually do. So, as far as I can see, ordinary AC with a
pluralistic value-theory is in a perfect position to explain why (RC) is good
moral advice and why the considerations which ordinary agents think about are
morally relevant factors. The view offers a conceptual connection that bridges
the gap which Leibowitz thought he had identified.
This goes for other theories too. Contractualists can say
that these considerations match the reasons there are to reject different
principles, Kantians can claim that these are the sorts of considerations that
make certain maxims not consistently universalisable, and so on.