
Naturalism in Ethics

Questions in ethics can, I will argue, be reduced to questions about the natural world.  The correspondence that allows this reduction is far from obvious, however, because there is no clear agreement on how to define ethical terms like “good”.  Because of this, the prevailing “post-modern” view of ethics is relativism, the view that there are no moral absolutes.  A simple thought experiment demonstrates that this view is only partially valid.

The Problems with Ethics
Like most of philosophy, ethics is in disarray.  There are numerous distinct ethical theories, both classical and modern, that differ in very significant ways and that lead in some cases to drastically different conclusions.  Indeed, there are even numerous theories of metaethics, the branch of philosophy that asks what ethics is about and what it is supposed to achieve [1].  Like most of philosophy, ethics suffers from its literary heritage, sacrificing clarity and precision for literary style and metaphorical ambiguity [2].  Furthermore, philosophy (and ethics) places a premium on challenging common wisdom, so vague arguments are sometimes used merely to sow doubt [3].

Unlike most of the rest of philosophy, however, ethics is of vital practical importance: all of civil society rests on an ethical foundation.  Fortunately, there is a core of commonly held ethical beliefs and principles that has allowed civilization, albeit in fits and starts, and with great episodes of retrogression and suffering, to progress to an apparently ever higher state of civility.  Nonetheless, it is obvious that we could do much better.  If a theory of ethics could be found that could be proven correct, it would be a springboard to resolving the many ethical and political debates that haunt society even today [4].

Unfortunately, philosophers have decisively demonstrated (correctly, in this case) that such a proof is impossible [5].  My point in this essay is not to deny this fact, but to argue that we are in much better shape than is commonly believed.  The tendency has been for philosophers, having demonstrated clearly that “ought” cannot be derived from “is”, to throw up their hands and declare that ethics is futile, that it is nothing but subjective belief detached from reality, a figment of our collective whims.  The end result is the conclusion that many post-modernists have reached: that the only viable ethical system is moral relativism, the theory that anything is morally permissible if the prevailing culture condones it.

I will argue that this conclusion is unwarranted, by showing that many problems in ethics can be addressed, with differing degrees of success, by dividing the domain of ethics into different regions, in which we ask different questions about different situations.  Having done so, we will discover that the only viable metaethical stance is that of reductionist naturalism, the doctrine that all ethical statements, including statements of value, can be reduced to statements about objective, natural facts.  We will also discover that in some of these regions, while “ought” cannot be derived explicitly from “is”, the only possible conclusion in this regard is so obvious that it should be axiomatic: no reasonable person could think otherwise.  While the ethics of the remaining regions will remain open to debate, the applicability of reductionist naturalism, and in some cases of axiomatic ethical truth, will leave us in much better shape than the prevailing consensus would have us believe.

The Nature of Value
Let us first consider some common ethical terms.  Of primary importance, I believe, are two concepts: the first is ‘value’ or ‘good’, and the second is ‘ought’, or ‘moral normativity’.  I will take each in turn.

It would seem to me that the source of all value, moral or otherwise, is what I will call ‘agency’.  Stephen Darwall defines ‘agency’ as “the capacity to act”, with ‘action’ in turn defined as “conduct taken for reasons” [6].  Philosophers often restrict agency to human beings, and often take it to imply intentionality: the ability to mentally possess intentions.  However, I will use a broader definition, more akin to Aristotle’s teleology.  Anything with a purpose or goal, including humans, animals, or even plants and protozoa, can be a source of value.  By “source of value”, I do not mean that such beings are valuable in themselves or can produce value, but that objects can be valuable to them.  In the future this class might be extended to include intelligent machines.  I will refer to such individuals as ‘agents’.

Anything good must be good for (or equivalently, of value to) some agent.  In fact, I define x as good for agent a if and only if x raises a’s state of well-being [7].  That good is a (two-place) relation, as opposed to an attribute, can be seen from the fact that some things are good in some contexts but not in others.  Consider, for example, dung, a substance that for most humans is fundamentally repulsive, and clearly of no value except perhaps as fertilizer.  However, dung is good for dung beetles.  Thus, we have a type of object that is clearly good for some agents, but not good for others.  This can only be true if good is a relation; something can only be good in the context of agency.

Of course, there are many instances in natural language in which ‘good’ is used as an attribute.  However, if we consider some of these, we will see that they reduce to instances of usage of ‘good’ as a relation.  For example, a person does good by performing an act that is good for some agent.  A person who predominantly performs good acts can be said to be good.  An object can be good because in general it is good for a class of agents, such as a good food.

I do not see how something can be intrinsically, purely good in itself.  Some ethical traditions imply that human beings are intrinsically good [8], but I believe what they are really saying is that all persons are sources of agency; they are not good in themselves, but things can be good for them.  Can a non-agent be intrinsically good?  Consider a canyon on some far-away planet in a remote galaxy.  This canyon is completely devoid of life (agents), but because of its proximity to its sun, its geological history, and its stupendous size, it is among the most beautiful sights in the galaxy.  However, it is millions of light-years from any inhabited planet.  There is no chance that any agent could possibly derive any sort of benefit from the canyon before it is destroyed by geological forces.  Is this canyon good, by way of its beauty?  I think not.  How can it be good if there is no one or nothing for which it can be good [9]?

Moral Normativity
What do we mean when we say that we ‘ought’ to do something?  It is fairly well established that prescriptive or ‘normative’ statements like this make sense in the context of a goal [10].  For example, if your goal is to win a sprint, you ought to run as fast as you can.  If progress towards a goal can be measured, then this becomes an optimization problem of the kind often found in engineering.  Many such optimization problems also involve constraints that restrict the domain of allowable solutions.  Indeed, if the objective function is linear, solving the optimization problem falls under the subject of linear programming [11], and constraints are required in order to achieve an optimal solution (otherwise, the answer would run off to infinity).
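To make the optimization analogy concrete, here is a minimal sketch in Python using SciPy’s linprog routine, showing a linear program with and without constraints; the “well-being” coefficients and resource limits are invented purely for illustration:

```python
from scipy.optimize import linprog

# Invented example: split hours between two activities so as to
# maximize well-being (3 units/hour and 2 units/hour respectively).
# linprog minimizes, so the objective is negated to maximize.
c = [-3.0, -2.0]

# Constraints: at most 12 waking hours in total, and at most 8 hours
# of the first (more strenuous) activity.
A_ub = [[1.0, 1.0],
        [1.0, 0.0]]
b_ub = [12.0, 8.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal allocation: 8 and 4 hours, well-being 32

# Without any constraints the problem is unbounded -- the answer
# "runs off to infinity", exactly as described above.
unbounded = linprog(c, bounds=[(0, None), (0, None)])
print(unbounded.status)  # status code 3 indicates an unbounded problem
```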

I believe that it is instructive to think of moral normativity in this way.  Thus, when we ask what the morally best action under certain circumstances is, we can think of it as asking what action will maximize some moral objective function, perhaps subject to moral constraints.  Loosely speaking, philosophers call ethical theories “consequentialist” when they hold that moral decisions should be based solely on an objective function, without any constraints.  Those that include constraints, such as prohibitions against doing harm or lying, are called “deontological” [12].  For example, utilitarianism uses total utility, or well-being, as its objective function.  In its pure state, without any moral constraints, it would hold that it is morally right to sacrifice a healthy individual and harvest his organs to save the lives of five people who would otherwise die from a specific type of organ failure.

Note that ‘ought’ is a very ambiguous term.  Ought we to do the absolutely optimal action under a particular set of circumstances, or are actions that are nearly as good permissible?  For this reason, I believe it is more precise to speak of moral optimality, moral favorability, and moral permissibility.  The morally optimal action is the one that is best under the circumstances.  Moral favorability is a measure of how good a particular action is, as determined by the moral objective function.  An action is morally permissible as long as it does not violate any moral constraints.
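These three notions can be expressed compactly.  The sketch below, a rough formalization with hypothetical placeholder names, treats permissibility as constraint satisfaction, favorability as the objective score, and optimality as the maximum over the permissible actions:

```python
from typing import Callable, Iterable, List, Optional, TypeVar

Action = TypeVar("Action")

def permissible(action: Action,
                constraints: Iterable[Callable[[Action], bool]]) -> bool:
    """Morally permissible: the action violates no moral constraint."""
    return all(ok(action) for ok in constraints)

def favorability(action: Action,
                 objective: Callable[[Action], float]) -> float:
    """Moral favorability: the action's score under the moral objective."""
    return objective(action)

def optimal(actions: List[Action], objective, constraints) -> Optional[Action]:
    """Morally optimal: the most favorable among the permissible actions."""
    candidates = [a for a in actions if permissible(a, constraints)]
    return max(candidates, key=lambda a: favorability(a, objective), default=None)
```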

Of course, as David Hume pointed out several hundred years ago, there is no logical way to derive ‘ought’ from ‘is’ [13].  There is no way to logically derive or specify the moral objective function from the state of the world, from the way things are.  I will next consider this problem from the point of view of different ethical domains, from which it will become clear that it is more difficult in some domains than in others.

An Isolated Individual
Consider a traveler on a cruise ship that was struck by a plague.  The illness killed everyone except her before help could arrive.  Without a captain, the ship went far off course and grounded on a remote island with no hope of rescue.  Somehow during the crisis, the ship’s radio was destroyed.  Furthermore, the island is completely lifeless (a complete desert, with zero rainfall each year), as is (for some strange reason) the surrounding ocean.  The survivor has access to all of the resources of the ship (save the radio), which are substantial and can sustain her for decades if managed wisely.

The point of this story is that the survivor is, save the microbial flora in her body (which we will ignore), completely alone.  She is the sole source of agency on this island, and therefore the sole source of good.  In this context, what ought she to do?  It seems to me that there is only one possible answer to this question: she ought to maximize her own well-being.

How can the traveler’s well-being be measured?  This is a classic problem for, and criticism of, utilitarianism.  Many possibilities have been suggested, such as pleasure, happiness, and the satisfaction of desires.  Note that humans can report their subjective experience of each of these properties, but animals cannot, much less plants.  Therefore, it would seem that some objective measure of well-being, based on physical observations of the agent, is necessary.  Identifying such a measure would be difficult, not least because the criterion selected would ultimately be somewhat arbitrary.  However, for pragmatic purposes, any objective measure that came close to capturing well-being, and on whose use there was agreement, would be suitable.  Just because we cannot achieve an ideal does not mean that we should not work with the best measure we have.

Another big question for the island-dweller is how to weigh future well-being against present well-being.  If there is a function that measures the degree to which a particular individual values a unit of well-being at a future time compared to a unit of well-being today, we can ‘discount’ future well-being by that amount.  We can therefore construct a comprehensive measure of the agent’s overall well-being, both present and future, using a formula similar to the one used in economics and finance to determine Net Present Value, the present value of a future stream of income.  I will call this measure the agent’s Net Present Well-Being, or NPWB [14].  Thus, a more precise answer to the question above is: a completely isolated individual ought to maximize her NPWB.
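As a rough numerical illustration, the sketch below computes a discrete approximation of NPWB along the lines of note 14, exactly as one would compute a Net Present Value; the well-being trajectory and the 3% discount rate are invented:

```python
import numpy as np

def npwb(wb: np.ndarray, rate: float, dt: float = 1.0) -> float:
    """Discrete approximation of Net Present Well-Being: future
    well-being discounted back to the present (cf. note 14),
    analogous to the Net Present Value of an income stream."""
    t = np.arange(len(wb)) * dt
    return float(np.sum(wb * np.exp(-rate * t)) * dt)

# Invented example: a steady 1.0 units of well-being per year for
# 50 years, discounted at 3% per year, is worth about 26 units today.
print(npwb(np.ones(50), rate=0.03))
```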

Three Questions of Metaethics
Let us now examine three ethical and/or metaethical questions that are important in light of the conclusions reached above with regard to an isolated agent:

  1. What is the nature of the ethical relation ‘good’?
  2. How can we determine what is good?
  3. How can we determine what is the correct moral objective function (and/or constraints, if any)?

What is the nature of the ethical relation ‘good’?  I defined good above as something which raises an agent’s well-being.  I will now revise that definition to say that something is good for an agent if and only if it raises the agent’s NPWB.  Clearly, then, good is an ethical relation that can be reduced to physical properties, in particular, those that are used to measure the well-being of the agent.  This ability to reduce ethical properties to physical properties is the basis for the metaethical stance known as reductionist naturalism.  Of course, thus far we have only considered ‘good’ in the context of a single individual.  I will return to this problem with regard to groups of agents below.

I will note here, however, that G. E. Moore famously argued against ethical naturalism, most notably in his book Principia Ethica, claiming that it commits the “naturalistic fallacy” [15].  His argument has come to be called the open-question argument [16]: he claimed, for example, that ‘good’ could never be reduced to a natural property, because we could always still ask whether that property was indeed good.  Despite the enormous influence of Moore’s argument, many rebuttals of it have been published.  For example, consider the correspondence definition of truth, i.e. x is true if and only if x corresponds to reality.  Using the open-question argument, we could still ask whether corresponding to reality is really true.  Indeed, adherents of the coherence or constructivist theories of truth would say that it is not.  This does not mean that truth cannot be defined [17]!

How can we determine what is good?  Or in the context of the isolated individual, how can we be sure that good is that which raises the individual’s NPWB?  Logically, we cannot, but I do not see how any other definition of good for an isolated agent is possible.  One possibility that has been suggested is that good for an agent could be defined in terms of some goal.  But how could any goal be justified in any way other than to serve this individual, given her isolation?  And what other rational, self-directed goal could an agent possibly have other than to maximize her own NPWB?  One notable suggestion might be a supernatural goal, such as to serve God.  Even if this could be supported on metaphysical grounds, the presence of God would mean that the agent was no longer truly isolated.

How can we determine what is the best course of action under a particular set of circumstances?   For the isolated individual, again, the only feasible objective function is her NPWB.  If the only source of value that is relevant to her is her own NPWB, then there cannot be any other contribution to her moral objective function.  In addition, from her total isolation we can also conclude that there are no moral constraints, as anything she does will affect her NPWB.  One might say, for example, that there should be a constraint against cliff-diving into shallow water.  However, this is clearly superfluous, since any such action will lower her NPWB.

Thus it would seem that there are answers to all three of the questions posed above in the case of an isolated individual.  Hume and Moore were correct in that the answers to these questions can never be proven logically.  However, in the case of the isolated individual, the answers would seem to be axiomatic, in that there does not seem to be any other choice.  In the case of the isolated individual, Hume’s is-ought problem and Moore’s naturalistic fallacy are toothless.

Groups of Agents
Of course, agents do not live in complete isolation, and it should be obvious that the answers to the three questions above become more problematic when applied to groups of agents.  Let us examine each of them now in this context.

What is ‘good’ for a non-singular set of agents?  Before we consider possible answers to this question, let us consider their bases.  Clearly, that which is good for the individual agents might be a contributing factor.  Another factor could be the relationships between the agents.  Note that in both cases these factors are objective, natural properties.  Thus, it would seem that reductionist naturalism continues to hold in the case of a group of agents.  Indeed, to argue otherwise, in light of the conclusions reached for the isolated individual, would be to find a non-natural factor that could contribute to group well-being.  I am not aware of any such factor (the only logical possibility would, again, be some supernatural factor, which I dismiss on metaphysical grounds).  Thus, it would seem that Moore’s naturalistic fallacy is toothless in all circumstances: all ethical value can indeed be reduced to natural properties.  Reductionist naturalism would seem to be a universally valid metaethical stance [18].

How can we determine what is good for a group of agents?  Despite the fact that we have concluded that such good can be reduced to natural properties, we still do not know what those properties are.  Here, and only here, is where the attempt to derive a reasonable theory of ethics first truly falters.  The most famous proposal for collective good is utilitarianism, which says that the well-being (or preferably the NPWB) of a collective is equal to the sum of the individual well-being of each of the agents that make up the collective [19].  Unfortunately, there are many serious problems with the utilitarian formulation, such as its tendency to direct more resources to those agents who are more efficient at converting them to well-being, at the great expense of the less efficient [20].  Another possibility might be what I will call communitarianism.  Just as we might devise an empirical, objective measure of individual well-being, we might also be able to devise such a measure or formula for collective well-being.  The crucial point here is that this measure or formula does not exist, and there is no logical or even straightforward way to definitively derive it.

How can we determine the moral objective function (and constraints, if any) for a collective?  Again, the answer to this question is much more difficult than in the case of the isolated individual, for two reasons.  The first is that, even if there is an objective function for collective well-being, as discussed in the previous paragraph, there is no way to determine the priority of this function over that of the individual.  Pure utilitarianism says that the well-being of the individual takes no priority beyond its contribution to total well-being.  At the other end of the spectrum, egoists claim that the collective objective function carries no normative weight; all that matters is the well-being of the individual, despite his lack of isolation.

The second problem with regard to collective normativity concerns the need for constraints.  What, if any, moral constraints should be imposed on our actions?  Again, consequentialists claim that there should be none, whereas some deontologists claim that constraints are indeed morally relevant.  As with the tradeoff between collective and individual well-being, there does not seem to be any logical method to determine the set of valid moral constraints.  Thus it would seem that Hume’s problem does have teeth in the realistic scenario of a group of agents: there does not seem to be any way to decide definitively what the moral status of a particular action is under a particular set of circumstances.

The Domains of Ethics
In the preceding discussion we have divided ethics into a series of domains, and considered the validity of Hume’s problem and Moore’s naturalistic fallacy in each of these domains, to the extent that they apply.  What we have found is that the naturalistic fallacy is without merit, while Hume’s problem does carry weight in the case of groups of agents.  The following table summarizes these findings:

| Question | Isolated Individual | Group of Agents |
| --- | --- | --- |
| Can ethical properties be reduced to natural properties? | Yes | Yes |
| Is there an obviously unique measure of good? | Yes | No |
| Is there an obviously unique method to determine the moral status of any action? | Yes | No |

While the two “No” answers under “Group of Agents” are disconcerting, it should be clear that the results shown in this table are much better than the collective wisdom supposes, according to which every cell would contain the answer “No”.

A Theory of Ethics
So where does this leave us in terms of the hunt for a viable theory of ethics?  We can deduce four principles from the foregoing that would constrain the space of possible theories:

  1. In order to be valid, any theory of ethics must reduce to the rule of maximizing one’s own NPWB in the case of complete isolation.
  2. Any ethical theory must be based on reductionist naturalism.  Thus, even in the case of a group of agents, any moral objective function and/or moral constraints must ultimately reduce to statements about objective facts.
  3. The theory must provide a method for measuring the well-being (or actually, the NPWB) of a group of agents.
  4. The theory must provide a mechanism for prioritizing actions that increase collective NPWB vs. those that increase only the agent’s individual NPWB.

Note that we cannot require any moral constraints on these grounds, although I believe that a strong case can be made for at least one such constraint based on the core of commonly held ethical beliefs mentioned above.

One theory that I have developed, which complies with these four principles, can be summarized very roughly with the following rules [21]:

  1. Do no harm, except where a little harm will ultimately lead to a net benefit, both individually and collectively.
  2. Work together to maximize total NPWB, the sum of the NPWB of all agents.   By “work together”, I mean do those actions that everyone else is expected to do, too.
  3. Take whatever action will maximize one’s own NPWB.

These rules are listed in priority order.  For example, one should not do unwarranted harm to another agent, even if doing so would increase collective NPWB (the ends do not justify the means).  I am sure that some readers will want to challenge these rules.  This is to be expected, but their defense is outside the present scope.  My point here is to show that a theory complying with the four constraints listed above is indeed possible.
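To show that the priority ordering is mechanical enough to be pinned down, here is a rough sketch of such a decision procedure; the three helper functions are hypothetical stand-ins for moral judgments that the rules above leave informal:

```python
from typing import Callable, List, Optional, TypeVar

Action = TypeVar("Action")

def choose(actions: List[Action],
           does_unwarranted_harm: Callable[[Action], bool],
           total_npwb_gain: Callable[[Action], float],
           own_npwb_gain: Callable[[Action], float]) -> Optional[Action]:
    """Apply the three rules in priority order.
    Rule 1 acts as a hard constraint: actions doing unwarranted harm
    are excluded outright, no matter how much collective NPWB they
    would add (the ends do not justify the means).
    Rules 2 and 3 are applied lexicographically: maximize total NPWB
    first, then break ties by the agent's own NPWB.
    """
    candidates = [a for a in actions if not does_unwarranted_harm(a)]
    if not candidates:
        return None
    return max(candidates, key=lambda a: (total_npwb_gain(a), own_npwb_gain(a)))
```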

It is easy to see that this theory satisfies the four criteria listed above:

  1. In the case of an isolated individual, the first two rules do not apply, in which case this system does indeed reduce to the rule of maximizing one’s own NPWB (rule #3).
  2. If we define doing harm as lowering another agent’s NPWB, then clearly this theory is based on reductionist naturalism, since each of the rules reduces ultimately to a statement about NPWB.
  3. This theory defines collective NPWB in the utilitarian sense, as the sum of the NPWB of all individual agents.  I am actually still open on this point, specifically as to whether a utilitarian or some kind of more complex communitarian measure is appropriate, but I specify the utilitarian formula to demonstrate my point.
  4. The priority order of the second and third rules addresses the tension between collective and individual well-being.  In particular, we should perform those actions that everyone else should also do in order to maximize collective NPWB, but beyond that we should concern ourselves with our own well-being.

As mentioned above, I believe that at least one moral constraint is required in any theory of ethics, and this is addressed by rule #1.

A little thought about the nature of ethics and its limits has led to the conclusion that the domain of valid ethical theories is narrower than conventional wisdom supposes.  There is no need to fall back on moral relativism: a workable theory of ethics is indeed within our grasp.  I have presented one possible theory, which is surprisingly simple yet arguably promising.

End Notes

  1. For a partial list of ethical and metaethical theories, see “The Foundations of Ethics and Morals in America” by Reynold Spector, Free Inquiry, Vol. 33, No. 5 (August/September 2013).  For an outstanding introduction to ethics and metaethics in general, see the two highly complementary volumes Normative Ethics by Shelly Kagan (1998) and Philosophical Ethics by Stephen Darwall (1998), both from Westview Press, Boulder, CO.
  2. For more on this see my essay on Philosophical Discourse.
  3. See, for example, my criticism of G.E. Moore’s argument against ethical naturalism below.
  4. Robert Wright has argued recently that this is not the case.  He cites evidence that most conflicts are due not to a lack of a common moral foundation, but instead to fundamental biases in human nature.  Most conflicts are in essence arguments about who has the bigger piece of the pie.  Wright, Robert, “Why We Fight — And Can We Stop It?”, The Atlantic, Vol. 312, No. 4 (November, 2013), pg. 102-116, available online at <http://www.theatlantic.com/magazine/archive/2013/11/why-we-fightand-can-we-stop/309525/>.
  5. This was first pointed out by David Hume.  See note 13 below.
  6. Darwall, op. cit., pg. 233.
  7. I will use the term “well-being” rather than utility so as to avoid any preconceived notions regarding exactly what well-being is, such as pleasure or happiness.
  8. Immanuel Kant held this view.  In Groundwork of the Metaphysics of Morals (Chapter 1, pg. 433), he famously states that every person must be treated as “an end in himself”, which is usually interpreted as meaning that every person has intrinsic value (see for instance “Intrinsic good”, Wikipedia, URL=<http://en.wikipedia.org/wiki/Intrinsic_good>).
  9. This scenario ignores the possibility that the canyon would be observable and therefore possibly good for God.  Even if we allowed the dubious metaphysical assumption that God existed, the value of the canyon would still be derived from agency, in this case from God’s.
  10. See for example “Is–ought problem”, Wikipedia, URL=<http://en.wikipedia.org/wiki/Is%E2%80%93ought_problem#Oughts_and_goals>.
  11. For a description of the field of linear programming, see “Linear programming”, Wikipedia, URL=<http://en.wikipedia.org/wiki/Linear_programming>.
  12. There are also some purely deontological theories, based solely on personal characteristics and/or concepts such as virtue or duty, that ignore consequences altogether.  The term ‘deontology’ tends to cover any ethical theory that is not consequentialist.
  13. See A Treatise of Human Nature, Book III, Part I, Section I, final paragraph.
  14. Thus, if WB(a,t) is the measure of agent a’s well-being at time t, and r(a,f) is the discount rate for agent a at a time f in the future (r does not have to be constant!), then $\mathrm{NPWB}(a,t) = \int_t^{\infty} WB(a,f)\, e^{-r(a,f)\,(f-t)}\, df$.
  15. See Moore, G.E., Principia Ethica (2004), Dover Publications, Mineola, NY, Chapter 1, Section 10.
  16. Ibid, Chapter 1, Section 13.
  17. Thomas Nagel makes this point in his review of Sam Harris’s book The Moral Landscape: “The Facts Fetish”, New Republic, October 20, 2010.  “The true culprit behind contemporary professions of moral skepticism is the confused belief that the ground of moral truth must be found in something other than moral values. One can pose this type of question about any kind of truth.”  (Available online at http://www.newrepublic.com/article/books-and-arts/magazine/78546/the-facts-fetish-morality-science.)
  18. Sam Harris makes a strong case for ethical naturalism in his book The Moral Landscape (2010), Free Press, New York, NY.   In particular, as he observes on pg. 30, “[M]any people seem to think that because moral facts relate to our experience (and are, therefore, ontologically “subjective”), all talk of morality must be “subjective” in the epistemological sense (i.e. biased, merely personal, etc.). This is simply untrue.”
  19. The theory of utilitarianism was first proposed in 1776 by Jeremy Bentham in the preface of his essay A Fragment on Government (for full text see http://www.efm.bris.ac.uk/het/bentham/government.htm).  He later expounded on this principle at length in his book An Introduction to the Principles of Morals and Legislation (1781), in particular Chapter I.
  20. J. J. C. Smart, despite having a favorable view of utilitarianism, makes this point in his essay “An Outline of a System of Utilitarian Ethics”; Smart, J. J. C. and Williams, Bernard, Utilitarianism: For and Against (1973), Cambridge University Press, Cambridge, Chapter 9.
  21. I have actually developed this theory in a completely rigorous format, but this level of detail is out of scope for this essay.  For a more detailed summary of the theory see my essay A Theory of Ethics, which includes a link to its fully rigorous exposition.

November, 2013
