Report: Division of Ethics





Ethics is the general term for attempts to state or determine what is good, both for the individual and for the society as a whole. It is often termed the science of morality.


In philosophy, ethics is one of the three major traditional areas of investigation, alongside metaphysics and logic. See particularly meta-ethics.


“The goal of a theory of ethics is to determine what is good, both for the individual and for the society as a whole. Philosophers have taken different positions in defining what is good, on how to deal with conflicting priorities of individuals versus the whole, over the universality of ethical principles versus “situation ethics” in which what is right depends upon the circumstances rather than on some general law, and over whether goodness is determined by the results of the action or the means by which results are achieved.” (Jennifer P. Tanabe, Contemplating Unification Thought)


The history of ethics


The formal study of ethics in a serious and analytical sense began with the early Greeks, and later the Romans. Important Greek and Roman ethicists include the Sophists, Socrates, Plato, and Aristotle, who developed ethical naturalism. The study of ethics was developed further by Epicurus and the Epicurean movement, and by Zeno and the Stoics.


Although not developed in a formal and analytical sense, the subject of ethics was of great concern to the writers of the Hebrew Bible and, centuries later, of the New Testament and the Apocrypha. A survey of ethics in these texts can be found in the article Ethics in the Bible; a related article, Ethics in religion, covers the broader topic of how the subject of ethics has developed in the major world religions.


The formal study of ethics stagnated until the medieval era, when it gained new strength through the writings of Maimonides, Saint Thomas Aquinas, and others. It was at this time that the debate between ethics based on natural law and ethics based on divine law gained a new importance.


Modern Western philosophy began with the work of greats such as Thomas Hobbes, David Hume, and Immanuel Kant. Their work was followed up by the utilitarians, Jeremy Bentham and John Stuart Mill. Friedrich Nietzsche had little patience for previous views of ethics, and launched an assault on such theories. The study of analytic ethics took off with G. E. Moore and W. D. Ross, followed by the emotivists, C. L. Stevenson and A. J. Ayer. Existentialism was developed by writers such as Jean-Paul Sartre. Modern philosophers who have done serious philosophical writing on ethics include John Rawls, Elliot N. Dorff, and Charles Hartshorne.


Divisions of Ethics


In analytic philosophy, ethics is traditionally divided into three fields: metaethics, normative ethics, and applied ethics.




Metaethics


Metaethics is the investigation of the origins of ethical principles. It asks: Where do ethical principles come from? What do they mean? How do we know that any exist? Are ethics merely social conventions, or are they universal truths? Metaethics is one of the most important fields in philosophy.


Metaethics studies the nature of ethical sentences and attitudes. This includes such questions as what “good” and “right” mean, whether and how we know what is right and good, whether moral values are objective, and how ethical attitudes motivate us. Often this is derived from some list of moral absolutes, e.g. a religious moral code, whether explicit or not. Some would view aesthetics as itself a form of meta-ethics.


Normative Ethics


# Normative ethics bridges the gap between metaethics and applied ethics. It is the attempt to arrive at practical moral standards that tell us right from wrong, and how to live moral lives. One branch of normative ethics is theory of conduct; this is the study of right and wrong, of obligation and permissions, of duty, of what is above and beyond the call of duty, and of what is so wrong as to be evil. Theories of conduct propose standards of morality, or moral codes or rules. For example, the following would be the sort of rules that a theory of conduct would discuss (though different theories will differ on the merit of each of these particular rules): “Do unto others as you would have them do unto you”; “The right action is the action that produces the greatest happiness for the greatest number”; “Stealing is wrong.”

# Another branch of normative ethics is theory of value; this looks at what things are deemed to be valuable. Suppose we have decided that certain things are intrinsically good, or are more valuable than other things that are also intrinsically good. Given this, the next big question is what would this imply about how we should live our lives? The theory of value also asks: What sorts of things are good? Or: What does “good” mean? It may literally define “good” and “bad” for a community or society.


Theory of value asks questions like: What sorts of situations are good? Is pleasure always good? Is it good for people to be equally well-off? Is it intrinsically good for beautiful objects to exist?


Applied Ethics


Applied ethics applies normative ethics to specific controversial issues. Many of these ethical problems bear directly on public policy. For example, the following would be questions of applied ethics: “Is getting an abortion ever moral?”; “Is euthanasia ever moral?”; “What are the ethical underpinnings of affirmative action policies?”; “Do animals have rights?” The ability to formulate such questions is prior to any balancing of rights.


Not all questions studied in applied ethics concern public policy. For example: Is lying always wrong? If not, when is it permissible? The ability to make these ethical judgements is prior to any etiquette.


Examples of applied ethics include:

# Abortion, legal and moral issues

# Animal rights

# Bioethics

# Business ethics

# Criminal justice

# Environmental ethics

# Feminism

# Gay rights

# Just war theory

# Medical ethics

# Utilitarian ethics

# Utilitarian Bioethics


Ethics has been applied to economics, politics, and political science, leading to several distinct and unrelated fields of applied ethics, including business ethics and Marxism. Ethics has also been applied to family structure, sexuality, and how society views the roles of individuals, leading to further distinct fields of applied ethics, including feminism.


Ethics has been applied to war, leading to the fields of pacifism and nonviolence.


Ethics has been applied to analyze human use of Earth’s limited resources. This has led to the study of environmental ethics and social ecology. A growing trend has been to combine the study of both ecology and economics to help provide a basis for sustainable decisions on environmental use. This has led to the theories of ecological footprint and bioregional autonomy. Political and social movements based on such ideas include eco-feminism, eco-anarchism, deep ecology, the green movement, and ideas about their possible integration into Gaia philosophy.


Ethics has been applied to criminology leading to the field of criminal justice.


There are several sub-branches of applied ethics examining the ethical problems of different professions, such as business ethics, medical ethics, engineering ethics, and legal ethics, while technology assessment and environmental assessment study the effects and implications of new technologies or projects on nature and society. Each branch characterizes the common issues and problems that may arise, and defines the profession’s responsibility to the public, or its obligation to meet social expectations of honest dealing and disclosure.


Major doctrines of ethics


Philosophers have developed a number of competing systems to explain how to choose what is best for both the individual and for society. No one system has gained universal assent. The major philosophical doctrines of ethics include:

# Divine command ethics

# Consequentialism

# Virtue ethics

# Social contract theory

# Ethical skepticism

# Ethical relativism

# Ethical subjectivism

# Ethical nihilism

# Ethical egoism

# Ethical hedonism

# Non-hedonistic ethical egoism

# Utilitarianism

# Immanuel Kant’s Deontological ethics

# The Utilitarian Kantian Principle (Cornman, Lehrer)


Descriptive ethics


Some philosophers rely on descriptive ethics and choices made and unchallenged by a society or culture to derive categories, which typically vary by context. This leads to situational ethics and situated ethics. These philosophers often view aesthetics and etiquette and arbitration as more fundamental, percolating ‘bottom up’ to imply, rather than explicitly state, theories of value or of conduct. In these views ethics is not derived from a top-down a priori “philosophy” (many would reject that word) but rather is strictly derived from observations of actual choices made in practice:


# Ethical codes applied by various groups. Some consider aesthetics itself the basis of ethics - and a personal moral core developed through art and storytelling as very influential in one’s later ethical choices.


# Informal theories of etiquette which tend to be less rigorous and more situational. Some consider etiquette a simple negative ethics, i.e. where can one evade an uncomfortable truth without doing wrong? One notable advocate of this view is Judith Martin (“Miss Manners”). In this view, ethics is more a summary of common sense social decisions.


# Practices in arbitration and law, e.g. the claim by Rushworth Kidder that ethics itself is a matter of balancing “right versus right”, i.e. putting priorities on two things that are both right, but which must be traded off carefully in each situation. This view many consider to have potential to reform ethics as a practice, but it is not as widely held as the ‘aesthetic’ or ‘common sense’ views listed above.


# Observed choices made by ordinary people, without expert aid or advice, who vote, buy and decide what is worth fighting about. This is a major concern of sociology, political science and economics.


Those who embrace such descriptive approaches tend to reject overtly normative ones. There are exceptions, such as the movement to more moral purchasing.


The analytic view


The descriptive view of ethics is modern and in many ways more empirical. But because the above are dealt with more deeply in their own articles, the rest of this article will focus on the formal academic categories, which are derived from classical Greek philosophy, especially Aristotle.


First, we need to define an ethical sentence, also called a normative statement. An ethical sentence is one that is used to make either a positive or a negative (moral) evaluation of something. Ethical sentences use words such as “good,” “bad,” “right,” “wrong,” “moral,” “immoral,” and so on. Here are some examples:

# “Sally is a good person.”

# “People should not steal.”

# “The Simpson verdict was unjust.”

# “Honesty is a virtue.”


In contrast, a non-ethical sentence would be a sentence that does not serve to (morally) evaluate something. Examples would include:

# “Sally is a tall person.”

# “Someone took the stereo out of my car.”

# “Simpson was acquitted at his trial.”


Ethics by cases


By far the most common way to approach applied ethics is by resolving individual cases. This is, not coincidentally, also the way business and law tend to be taught. Casuistry is one such application of case-based reasoning to applied ethics.


Bernard Crick in 1982 offered a more socially centered view: that politics was the only applied ethics, that it was how cases were really resolved, and that “political virtues” were in fact necessary in all matters where human morality and interests were destined to clash. This and other views of modern universals are dealt with below under Global ethics.


Is ethics futile?


The whole assumption of the field of ethics is that consistent description, consistent deliberation, and consistent and fair application of authority are possible. However, the more case-based views suggest that a great deal of judgement is required, and that, for instance, one could never train a robot to do ethics, since ethics requires empathy and wisdom. One might, however, be able to teach an artificial intelligence possessing empathy and wisdom to do ethics.


Is each case unique? Possibly. The view that ethics is innate and tied to a personal moral core or aesthetics is harder to relate to the formal categories above, other than as a meta-ethics in itself. It is considered by some ethicists to be just a variant of mysticism or narcissism, permitting those who avow aesthetic choices as being ‘above ethics’ to justify anything.


However, the term ethics is actually derived from the ancient Greek ethos, meaning moral character. Mores, from which morality is derived, meant social rules or etiquette or inhibitions from the society. In modern times, these meanings are often somewhat reversed, with ethics being the external “science” and morals referring to one’s inmost character or choices. But it is significant that the origins of the words reflect the tension between an inner-driven and an outer-driven view of what makes moral choices consistent.


Ethics in religion


There are articles on Ethics in religion and Ethics in the Bible.


Ethics in psychology


By the 1960s there was increased interest in moral reasoning. Psychologists Abraham Maslow, Carl Rogers, Lawrence Kohlberg, Carol Gilligan and others began to try to codify rational ethics, and try to express universal levels of moral awareness and capacity. Many viewed rational principles as ‘higher’ than relationships, but others did not.




Global ethics


Often, such efforts take legal or political form before they are understood as works of normative ethics. The UN Universal Declaration of Human Rights of 1948 and the Global Green Charter of 2001 are two such examples. However, as war and the development of weapons technology continue, it seems clear that no non-violent means of dispute resolution is yet accepted by all.


The need to redefine and align politics away from ideology and towards dispute resolution was a motive for Bernard Crick’s list of political virtues.


Related Topics (in philosophy)

# Deontology

# Epistemology

# Etiquette

# Goodness

# Morality

# Ontology

# Trust

# Truth

# Value theory

# Virtue ethics




Ethics in the Bible


Western philosophical works on ethics were written in a culture whose literary and religious ideas were based in the Hebrew Bible (Old Testament) and the New Testament. As such, there is a connection between the ethics of the Bible and the ethics of the great western philosophers. However, this is not a direct connection; significant differences of opinion in how to interpret and apply passages in the books of the Bible lead to different understandings of ethics. Thus, one should not expect to find a direct correlation between Biblical ethics and the post-Enlightenment philosophical study of ethics; but neither should one expect to find no correlation at all.


Ethics in the Hebrew Bible


The books of the Hebrew Bible (Old Testament) cover a period of many centuries, reflecting a rich variety of conditions and beliefs, ranging from the culture of ancient nomadic shepherd tribes to the refinement of life and law of an urban population, from primitive clan henotheism to the ethical monotheism of the prophets. It is thus unwarranted to treat the ethics of the Bible as a unit; the ethical discussions contained therein do not all neatly flow from one dominant principle; there is no one set of clearly defined rules, conduct and obligation. Instead of one system of ethics, many systems have to be recognized and expounded. Nonetheless, the ultimate outcome of this evolution was ethical monotheism.


With these important qualifications kept in view, it is safe to hold that the principle underlying the ethical concepts of the Bible and from which the positive duties and virtues are derived is the unity and holiness of God, in whose image man was created. A life exponential of the divine in the human is the “summum bonum,” the purpose of purposes, according to the ethical doctrine of the Biblical books. This life is a possibility and an obligation involved in the humanity of every man. For every man is created in the image of God (Gen. i. 26). By virtue of this, man is appointed ruler over all that is on earth (Gen. i. 28). But man is free to choose whether he will or will not live so as to fulfil these obligations.


From the stories in Genesis it is apparent that the Bible does not regard morality as contingent upon an antecedent and authoritative proclamation of the divine will and law. The “moral law” rests on the nature of man as God’s likeness, and is expressive thereof. It is therefore autonomous, not heteronomous. From this concept of human life flows and follows necessarily its ethical quality as being under obligation to fulfil the divine intention which is in reality its own intention.


Enoch, Noah, Abraham, and other heroes of tradition, representing generations that lived before the Sinaitic revelation of the Law, are conceived of as leading a virtuous life; while, on the other hand, Cain’s murder and Sodom’s vices illustrate the thought that righteousness and its reverse are not wilful creations and distinctions of a divinely proclaimed will, but are inherent in human nature. The Israelites are under the obligation to be the people of God (Ex. xix. 5 et seq.), that is, to carry out in all the relations of human life the implications of man’s godlikeness.


Hence, for Israel the aim and end, the “summum bonum,” both in its individuals and as a whole, is “to be holy.” Israel is a holy people (Ex. xix. 6; Deut. xiv. 2, 21; xxvi. 19; xxviii. 9), for “God is holy” (Lev. xix. 2, et al.). Thus the moral law corresponds to Israel’s own historic intention, expressing what Israel knows to be its own innermost destiny and duty.


God is the Lawgiver because God is the only ruler of Israel and its Judge and Helper (Isa. xxxiii. 22). Israel true to itself can not be untrue to God’s law. Therefore God’s law is Israel’s own highest life. The statutory character of Old Testament ethics is only the formal element, not its essential distinction. For this God, who requires that Israel “shall fear him and walk in all his ways and shall love and serve him with all its heart and all its soul” (Deut. x. 12, Hebr.), is Himself the highest manifestation of ethical qualities (Ex. xxxiv. 6, 7). To walk in God’s ways, therefore, entails the obligation to be like God.


Ethics in the Apocrypha


Ethics in systematic form, and apart from religious belief, is as little found in apocryphal or Judæo-Hellenistic literature as in the Bible. However, Greek philosophy greatly influenced Alexandrian writers such as the authors of IV Maccabees, the Book of Wisdom, and Philo.


Much progress in theoretical ethics came as Jews came into closer contact with the Hellenic world. Before that period the Wisdom literature shows a tendency to dwell solely on the moral obligations and problems of life as appealing to man as an individual, leaving out of consideration the ceremonial and other laws which concern only the Jewish nation. From this point of view Ben Sira’s collection of sayings and monitions was written, translated into Greek, and circulated as a practical guide. The book contains popular ethics in proverbial form as the result of everyday life experience, without higher philosophical or religious principles and ideals.


More developed ethical works emanated from Hasidean circles in the Maccabean time, such as are contained in Tobit, especially in ch. iv.; here the first ethical will, or testament, is found, giving a summary of moral teachings, with the Golden Rule, “Do that to no man which thou hatest!” as the leading maxim. There are even more elaborate ethical teachings in the Testaments of the Twelve Patriarchs, in which each of the twelve sons of Jacob, in his last words to his children and children’s children, reviews his life and gives them moral lessons, either warning them against a certain vice he had been guilty of, so that they may avoid divine punishment, or recommending them to cultivate a certain virtue he had practised during life, so that they may win God’s favor. The chief virtues recommended are: love for one’s fellow man; industry, especially in agricultural pursuits; simplicity; sobriety; benevolence toward the poor; compassion even for the brute (Issachar, 5; Reuben, 1; Zebulun, 5-8; Dan, 5; Gad, 6; Benjamin, 3), and avoidance of all passion, pride, and hatred. Similar ethical farewell monitions are attributed to Enoch in the Ethiopic Enoch (xciv. et seq.) and the Slavonic Enoch (lviii. et seq.), and to the three patriarchs.


The Hellenistic propaganda literature made the propagation of Jewish ethics taken from the Bible its main object for the sake of winning the pagan world to pure monotheism. It was owing to this endeavor that certain ethical principles were laid down as guiding maxims for the Gentiles; first of all the three capital sins, idolatry, murder, and incest, were prohibited (see Sibyllines, iii. 38, 761; iv. 30 et seq.). In later Jewish rabbinic literature these “Noachide Laws” were gradually developed into six, seven, and ten, or thirty laws of ethics binding upon every human being.




Ethics in religion


Ethics is a branch of philosophy dealing with right and wrong in human behaviour. Although it involves the application of human reason, it is not a science. All religions have a moral component, and religious approaches to ethics historically dominated secular approaches. From the point of view of theistic religions, to the extent that ethics stems from revealed truth from divine sources, ethics is studied as a branch of theology.


Greek and Roman religious ethics


This section deals with classical Greek and Roman religion and its relationship with classical Greek and Roman ethics. The classical Greek and Roman notions of ethics heavily influenced the Mediterranean and European world, from ancient times, to the Enlightenment, to today.


Ethics in the Bible


Western philosophical works on ethics were written in a culture whose literary and religious ideas were based in the Hebrew Bible (Old Testament) and the New Testament. As such, there is a connection between the ethics of the Bible and the ethics of the great western philosophers. However, this is not a direct connection; significant differences of opinion in how to interpret and apply passages in the books of the Bible lead to different understandings of ethics.


The subject of Ethics in the Bible has its own entry, containing a detailed study of ethics in the Hebrew Bible, the Apocrypha (deuterocanonicals) and the New Testament.


Jewish ethics


Jewish ethics is based on the fundamental concepts of Judaism, which holds that ethical duties of all mankind can be derived from the Hebrew Bible. The starting point is the belief in the unity and holiness of God, in whose image man was created. This section has its own article, Jewish ethics.




The Mussar Movement is a Jewish ethics movement which developed in the 19th century, and which still exists today.


Christian ethics


Christian ethics developed while early Christians were subjects of the Roman Empire. Christians eventually took over the Empire itself. Saint Augustine adapted Plato, and later, after the Islamic transmission of his works, Aquinas worked Aristotelian philosophy into a Christian framework.


Christian ethics in general has tended to stress grace, mercy, and forgiveness; it stresses doubt in human (as opposed to divine) judgement. It also codified the Seven Deadly Sins. For more see Christian philosophy.


St. Paul teaches (Rom., ii, 24 sq.) that God has written his moral law in the hearts of all men, even of those outside the influence of Christian revelation; this law manifests itself in the conscience of every man and is the norm according to which the whole human race will be judged on the day of reckoning. In consequence of their perverse inclinations, this law had to a great extent become obscured and distorted among the pagans; Christians understood their mission as restoring it to its pristine integrity.


Ecclesiastical writers such as Justin Martyr, Irenaeus, Tertullian, Clement of Alexandria, Origen, Ambrose, Jerome, and Augustine of Hippo all wrote on ethics from a distinctly Christian point of view. Interestingly, they made use of philosophical and ethical principles laid down by their Greek (pagan) philosopher forebears.


The Church fathers had little occasion to treat moral questions from a purely philosophical standpoint, independently of Christian Revelation; but in the explanation of Catholic doctrine their discussions naturally led to philosophical investigations. This is particularly true of St. Augustine, who proceeded to develop thoroughly along philosophical lines, and to establish firmly, most of the truths of Christian morality.


The eternal law (lex aeterna), the original type and source of all temporal laws; the natural law; conscience; the ultimate end of man; the cardinal virtues; sin; marriage, etc. were treated by him in the clearest and most penetrating manner. Hardly a single portion of ethics does he present that is not enriched with his keen philosophical commentaries. Later ecclesiastical writers followed in his footsteps.


A sharper line of separation between philosophy and theology, and in particular between ethics and moral theology, is first met with in the works of the great Schoolmen of the Middle Ages, especially Albert the Great (1193-1280), Thomas Aquinas (1225-1274), Bonaventure (1221-1274), and Duns Scotus (1266-1308). Philosophy and, by means of it, theology reaped abundant fruit from the works of Aristotle, which had until then been a sealed treasure to Western civilization; they were first elucidated by the detailed and profound commentaries of St. Albert the Great and St. Thomas Aquinas and pressed into the service of Christian philosophy.


The same is particularly true as regards ethics. St. Thomas, in his commentaries on the political and ethical writings of the Stagirite, in his “Summa contra Gentiles” and his “Quaestiones disputatae”, treated with his wonted clearness and penetration nearly the whole range of ethics in a purely philosophical manner, so that even to the present day his works are an inexhaustible source from which ethics draws its supply. On the foundations laid by him the Catholic philosophers and theologians of succeeding ages have continued to build.


In the fourteenth and fifteenth centuries, thanks especially to the influence of the so-called Nominalists, a period of stagnation and decline set in, but the sixteenth century is marked by a revival. Ethical questions, though still largely treated in connection with theology, were again made the subject of careful investigation. We mention as examples the great theologians Vitoria, Dominicus Soto, L. Molina, Suarez, Lessius, and De Lugo. Since the sixteenth century, special chairs of ethics (moral philosophy) have been erected in many Catholic universities. The larger, purely philosophical works on ethics, however, do not appear until the seventeenth and eighteenth centuries, an example of which is Ign. Schwarz’s “Institutiones juris universalis naturae et gentium” (1743).


Far different from Catholic ethical methods were those adopted for the most part by Protestants. With the rejection of the Church’s teaching authority, each individual became on principle his own supreme teacher and arbiter in matters appertaining to faith and morals. The Reformers held fast to the Bible as the infallible source of revelation, but as to what belongs or does not belong to it, whether, and how far, it is inspired, and what is its meaning -- all this was left to the final decision of the individual.


Philipp Melanchthon, in his “Elementa philosophiae moralis”, still clung to the Aristotelian philosophy; so, too, did Hugo Grotius in his work “De jure belli et pacis”. But Cumberland and his follower Samuel Pufendorf assumed, with Descartes, that the ultimate ground for every distinction between good and evil lay in the free determination of God’s will, a view which renders the philosophical treatment of ethics fundamentally impossible.


In the 20th century, some Christian philosophers, notably Dietrich Bonhoeffer, questioned the value of ethical reasoning in moral philosophy. In this school of thought, ethics, with its focus on distinguishing right from wrong, tends to produce behavior that is merely not wrong, whereas the Christian life should instead be marked by the highest form of right. Rather than ethical reasoning, these thinkers stress the importance of meditation on, and relationship with, God.


Criticism of Christian ethics


In some ways the futility question is well illustrated by this situation: while Catholic philosophers debated and deplored the rape, extermination, and enslavement of the peoples of the New World, it continued without limit, especially in South America, often with the participation of the Church, though the Church also very often protected native converts and shielded them from harm where it could. A particularly harsh critic, Friedrich Nietzsche, called Christian ethics a “slave ethics” for counselling submission to enslavers, invaders, and authority.


Hindu ethics


Hindu ethics are related to Hindu beliefs, such as reincarnation, which is a way of expressing the need for reciprocity, as one may end up in someone else’s shoes “in a future life”. However, Hindu beliefs may also help excuse failing to help someone in distress, owing both to fatalism and to the teaching that one deserves the life one gets. In part to compensate for this, kindness is a cardinal virtue in Hinduism.


More emphasis is placed on empathy than in other traditions, and women are sometimes upheld not only as great moral examples but also as great gurus. An emphasis on domestic life and the joys of the household and village may make Hindu ethics a bit more conservative than others on matters of sex and family.


Ethical traditions in Hinduism have been influenced by caste norms.


In the mid-20th century, Mohandas Gandhi undertook to reform these and to emphasize traditions shared in all the Indian faiths:

# vegetarianism and an ideology of harm reduction leading ultimately to nonviolence

# active creation of truth through courage and his ‘satyagraha’

# rejection of cowardice and of concern with pain or bodily harm




Buddhist ethics


Gautama Buddha adopted some elements of Hindu practices, notably meditation and (within limits) vegetarianism. Like Aristotle among the Greeks, who emphasized a “Golden Mean” or moderate choice in ethical matters, the Buddha advised moderation in all things, even moderation itself.


The Noble Eightfold Path still serves as the most important guide to Buddhist ethics.


Calm is a cardinal virtue of Buddhism, and is believed to lead to enlightenment.


Criticism of Buddhist Ethics


Buddhism is concerned with reducing attachment in order to sever one’s connection with an illusory world. It therefore cannot encourage one to be good (for example), because one would then be attached to goodness.


Instead, Buddhists advocate a “middle ground” in which one does enough that there could be no just criticism of one’s actions. Many Westerners find this unsatisfying.


Chinese traditional ethics


Chinese traditional systems of thought are both varied and mixed, so it is difficult to point to a single, central structure in Chinese ethics. In addition, there is always the question of whether beliefs form behavior, or behavior forms beliefs -- in other words, whether an ethical system is something that people try to follow, or just a description of what they do. Nonetheless, there are several basic threads in Chinese traditional ethics.


Confucianism and Neo-Confucianism emphasized the maintenance and propriety of relationships as the most important consideration in ethics. To be ethical is to do what one’s relationships require. Notably, though, what one owes to another person is inversely proportional to that person’s distance from oneself. In other words, one owes one’s parents everything, but is not in any way obligated towards strangers. This can be seen as a recognition of the fact that it is impossible to love the entire world equally and simultaneously.


This is called relational ethics, or situational ethics. The Confucian system differs very strongly from Kantian ethics in that there are rarely laws or principles which can be said to be true absolutely or universally.


This is not to say that there has never been any consideration given to universalist ethics. In fact, in Zhou dynasty China, the Confucians’ main opponents, the followers of Mozi, argued for universal love, jian’ai. The Confucian view eventually held sway, however, and continues to dominate many aspects of Chinese thought. Many have argued, for example, that Mao Zedong was more Confucian than Communist.


Confucianism, especially of the type argued for by Mencius (Mengzi), argued that the ideal ruler is the one who (as Confucius put it) “acts like the North Star, staying in place while the other stars orbit around it.” In other words, the ideal ruler does not go out and force the people to become good, but instead leads by example. The ideal ruler fosters harmony rather than laws.


There are many other major threads in Chinese ethics. Buddhism, and specifically Mahayana Buddhism, brought a cohesive metaphysic to Chinese thought and a strong emphasis on universalism. Neo-Confucianism was largely a reaction to Buddhism’s dominance in the Tang dynasty, and an attempt at developing a native Confucian metaphysical/analytical system.


Laozi and other Daoist authors argued for an even greater passivity on the part of rulers than did the Confucians. For Laozi, the ideal ruler is one who does virtually nothing that can be directly identified as ruling. Clearly, both Daoism and Confucianism presume that human nature is good. The main branch of Confucianism, however, argues that human nature must be nurtured through ritual (li), culture (wen) and other things, while the Daoists argued that the trappings of society were to be gotten rid of.


The Legalists, such as Hanfeizi, argued that people are not innately good. Laws and punishments are therefore necessary to keep the people good. Actual governing in China has almost always been a mixture of Confucianism and Legalism.


Islamic ethics


Islam is monotheistic and emphasizes submission to the will and law of Allah (God). It sees all of natural law, including that revealed by science, as an aspect of that law. Indeed, everything in the universe “is Muslim” but does not necessarily know it. This tradition informed and spurred the development of most late medieval science in the West.


Muhammad founded a tradition of ethics built on knowledge. Later Muslim thinkers developed this with the investigation of alternatives, the “ijtihad”. Early Muslim philosophy applied it with decreasing diligence, eventually ossifying into a legal code, the fiqh, that served the purposes of the Ottoman Empire. A five-century gap followed while ethics as such was seen only as blind mimicry, or taqlid, using these traditional schools and categories. The hadith, the sayings of Muhammad, filled a popular role in ordinary ethical disputes, and in the mosque where they were usually resolved by a shaikh (“judge”).


The Shia branch of Islam built a hierarchy and rigid ethical codes, while Sunni Islam did not, and relied much more on local figures and traditions. It is critically important in Islam to develop an al-urf, or “custom”, to adapt Islam to local conditions, leading to situated ethics.


Also important are neighbourliness and khalifa, or “stewardship”, as a land ethic. This tradition continues in modern Islamic philosophy.




Normative ethics


Normative ethics (cf. metaethics) is the branch of the philosophical study of ethics concerned with classifying actions as right and wrong without bias, as opposed to descriptive ethics.


Descriptive ethics deals with what the population believes to be right and wrong, while normative ethics deals with what the population should believe to be right and wrong.


“Killing one’s parents is wrong,” is a normative ethical claim. Given that parricide is wrong, normative ethics has no further interest: why it is wrong is someone else’s concern.




Applied ethics


Applied ethics takes a general theory of ethics, such as utilitarianism, and applies it to a particular medical, pharmaceutical, political, legal or commercial problem.


The chief difficulty with formal applied ethics is that many people disagree with the selected starting ethical theory. For example, Christians and Muslims often disagree with utilitarian solutions, since their ethics refer to a pre-existing moral code from divine sources. This introduces case prototypes and precedents which are not acceptable to all participants.


To avoid this problem, one of the newer approaches to applied ethics is to revive the ancient practice of casuistry. Casuistry attempts to establish a plan of action to respond to particular facts - a form of case-based reasoning. By doing so in advance of actual investigation of the facts, it can reduce the influence of interest groups. By focusing on action rather than rationale, it can reduce the influence of prior bodies of precedent and explicit moral codes.


In a modern casuistic approach to, say, a biomedical issue, two boards of experts are appointed. The ethical board might represent disparate ethical theories, e.g. a Jew, a Christian, a Buddhist, a Humanist and a Muslim. The scientific board represents relevant medical, legal, psychological and philosophical disciplines. The ethical board evaluates situations and recommends and ratifies responses; the scientific board explains the causes and effects of each ethical state and response.


The boards then consider actions that are appropriate for relevant pure cases. For example, most ethical systems agree that assault deserves punishment, while risking oneself to save lives deserves reward. Other cases that are often relevant include theft, gifts, verified truth, verified lying, betrayal, and earned trust.


Taking such cases as data, the board draws parallels with the problem under consideration, and attempts to discover a set of actions to respond to the case under consideration.


For example, medical experiments without informed consent, performed on healthy persons, are often likened to assault with a deadly weapon performed by the experimenter. The experiments usually involve equipment or drugs, which provides the ‘weapon.’ The experimenter’s malice is indicated by the secrecy. Therefore, harmful results can be attributed to the malice, and the degree of damage indicates the degree of assault.


If the experimenter gets informed consent from the subject, the scenario transforms completely because the moral choice moves from the experimenter to the subject. With informed consent, the subject then becomes either a tragic hero if the procedure fails, or a successful hero if the procedure succeeds. The experimenter is then merely offering a brave volunteer an opportunity to contribute to human knowledge, and possibly benefit from the process.


Note that in both cases, “villain & victim,” or “scientist & volunteer,” the actual experiment might cause the same clinical results. However, the moral relationships of the participants are completely different.


Since casuistry is concerned with facts and actions rather than theories, it is often much easier to come to agreement. Although many ethical systems disagree about the justification for an action, the actions they recommend are often remarkably similar. Because there can be many rationales for the same action, avoiding reliance on any single rationale, or on the need to agree on language, removes a major barrier to agreement. In legal terms, it removes the fear of setting precedent.


Such an action-focused approach to applied ethics is thus also less likely to conflict with informal ethical theories of politics, etiquette, aesthetics and arbitration - each of which implies its own concept of a valid precedent. In fact an approach based on casuistry involves practices from all of these - especially the production of hypothetical cases for deriving a common ethic - but does not permit the arbitrary introduction of precedents from any of them.


A specialized example of casuistry is a science court, in which scientists agree in advance what scientific theory would best explain a set of facts and thus what research program is recommended - making it extraordinarily difficult for scientists to disagree with that action if those facts turn out to be true. A similar approach can be taken to engineering and regulatory decisions, with advocates of different potential actions competing to establish themselves. These applications to public policy tend to resemble that of more traditional politics.




Abortion, legal and moral issues



The controversy


The morality and legality of abortion is a large and important topic in applied ethics and is also discussed by legal scholars and religious people. Important facts about abortion are also researched by sociologists and historians.


Abortion has been common in most societies, although it has often been opposed by some institutionalized religions and governments. In the United States and Europe, abortion became commonly accepted by the end of the 20th century. (Even in countries, such as Germany, where abortion is technically illegal in the first trimester, prosecution does not occur.) Abortion is also legal and accepted in China, India and other populous countries. The Catholic Church remains opposed to the procedure, however, and in other countries, notably the United States and the (predominantly Catholic) Republic of Ireland, the controversy is still extremely active, to the extent that even the names of the respective positions are subject to heated debate. While those on both sides of the argument are generally peaceful, if heated, in their advocacy, the debate is sometimes characterized by violence. Though true of both sides, this is more marked among those opposed to abortion, because of what they see as the gravity and urgency of their cause.


The central question


The central question in the abortion debate is a clash of presumed or perceived rights. On the one hand, is a fetus (sometimes called the “unborn” by pro-life/anti-abortion advocates) a human being with a right to life, and if so, at what point in the pregnancy does the fetus become human? On the other hand, is a fetus part of a woman’s body and therefore subject to a woman’s right to control her own body? How does one balance these respective rights, and do both in fact exist?


The extreme “pro-life” argument is that an embryo (and later, a fetus) is a human life from the moment of conception and, moreover, that the right to life is absolute. Therefore, abortion under any circumstance is tantamount to murder, and wrong. The extreme “pro-choice” argument is that a woman’s right to control her own body is absolute, and that abortion is acceptable under any circumstance.


Underlying this debate is another debate, over the role of the state: to what extent should the state interfere with a woman’s body to protect the public interest, or to what extent should the state protect the general interest, even if it means controlling a woman’s body? This is a major issue in a number of countries, such as India and China, which have tried to enforce forms of birth control (including forced sterilization), and the United States, which historically has limited access to birth control.


The many and varied positions about abortion


The competing labels for positions tend to blur important differences in what can be advocated about abortion. In discussions of abortion it is of paramount importance to distinguish the variety of conclusions that can be advocated on the subject. First, consider the unequivocal positions:

# Abortion is always morally permissible.

# Abortion is always immoral (morally impermissible).

# Abortion ought to be legal in every instance.

# Abortion ought to be illegal in every instance.


There is clearly a difference, for example, between the view that abortion is immoral and the view that it should be illegal. It is possible to hold both that every instance of abortion is immoral and that abortion should never be illegal.


(There are, in fact, several other positions that represent even greater extremes than these, though they are not, strictly speaking, positions about abortion per se. On the one hand, there are some persons who believe, virtually always on religious grounds, that birth control is morally impermissible; they argue that the choice of whether a child should be created should always be left to God. On the other hand, there are persons such as professor Peter Singer, who think that infanticide is morally permissible and should be legally permissible, and there are cases of persons actually committing infanticide. It is also at least conceivable that some may take the position that abortion ought to be compulsory, either in certain situations (the Twelve Tables of Roman law required that deformed children be put to death) or as a population control measure, as in the People’s Republic of China.)

There are also several more qualified positions about abortion, which represent middle ground between the relatively extreme positions that abortion is always moral, or never moral, and that it should always be legal, or never legal. That is, the qualified positions are that abortion is sometimes moral and at other times not, and that in some cases it should be legal and in other cases not. Examples of these positions are:


# Abortion in the first trimester (or before the embryo or fetus is viable outside the womb) is morally permissible; abortion after that time is immoral.


# Abortion in the first trimester (or before the embryo or fetus is viable outside the womb) ought to be legal; abortion after that time ought to be illegal.


# Abortion up to the third trimester is morally permissible; in the third trimester (so-called late-term abortion), it is immoral.


# Abortion up to the third trimester ought to be legal; in the third trimester, it ought to be illegal.


# Abortion should always be illegal, except in some special circumstances—for example, when the mother’s long-term health or life is at stake, or when the pregnancy is the result of rape or incest.


The last of these positions is a point of serious controversy among abortion foes. Many feel that, in those cases where the completion of a pregnancy would likely result in severe permanent physical injury or death for the mother, abortion is morally permissible and/or should (continue to) be legally permitted; some oppose even this exception, however. Similarly, when pregnancy is the result of rape or incest, many regard the situation created, in which the mother is bearing a rapist’s child, or her close relative’s, as so morally repugnant that there is no moral obligation, and should be no legal obligation, to continue the pregnancy. Again, some people will not make an exception even in such cases.


The political debate tends to center on questions of legality, though such debates are often based on moral questions. In the United States, the political debate centers on two questions:

# Should late-term abortions (and particularly those performed by the partial-birth abortion technique) for medical reasons related to the mother’s health continue to be legal?

# Should first-trimester abortions continue to be legal? In the United States, this is tantamount to asking, “Should Roe v. Wade continue to be supported?”


At present, only the first of these questions has an active political life in the United States. The second is a matter of deep concern for many, but the chances of Roe v. Wade being overturned appear low. Related issues, such as requiring parental consent for minors, waiting periods, and education, are also in contention in some states.


In many countries, but most strikingly in the United States, the scientific, religious, and philosophical communities have failed to reach any consensus on most of these issues. The controversy over abortion remains a very emotionally charged issue, and difficult to resolve.




Modern arguments about the legality and morality of abortion


Briefly, the basis of the view that all, or almost all, abortion should be illegal is the belief that a human life—and all political rights attending it—begins at conception. Given that, one is invited to consider the common assumption that each innocent human being is entitled to the protection of society against the deliberate destruction of its life by another person. The latter is a rough statement of the right to life, which is guaranteed in many basic legal and political documents such as the United States Declaration of Independence and the Universal Declaration of Human Rights, and is the basis of laws against murder. Thus, the pro-life view is that elective abortion is the deliberate killing of an innocent human being and therefore not morally justifiable, regardless of what the law has to say. But since the law should be consistent with truth, elective abortion ought to be regarded legally as murder. Again, this is the basic argument against the legality of abortion. There exist people who morally disapprove of abortion but who, for other reasons, deny that abortion should be legally proscribed. This will be explained below.


One could also oppose the legality of abortion on nonreligious grounds, a strategy employed by those who believe that their personal religious considerations have no proper place in public policy debate. One could say, for example, that the proposition that each human life begins at conception is a fact of biology. In this view, the term “human life” is used in a straightforward, uncontroversial way to refer to the life of an individual human, which begins with the union of parental gametes that creates a new individual with a distinct genetic identity, initiating the process of growth and change that ends only with death. Proponents of this view recognize that there is a period of several months during which the child is biologically dependent upon the mother to sustain its life, but they regard the obligation of a parent to protect the life of its child as one which ought to be an uncontroversial societal norm. Opponents of this view reply that, while the life of the zygote undoubtedly begins at conception, biologists are by no means unanimous about when a distinctly human life begins. They say that the issue is not one that is or can be adjudicated by science, and that scientists are in the same boat as philosophers and religionists.


Those who believe that abortion is morally permissible, and should remain legally permissible, typically have a different view of when human life begins. Many hold that an embryo or fetus which is incapable of surviving outside the mother’s womb (a status generally reached no sooner than 17 weeks into gestation) is not recognizable as a human life separate from the mother’s body, while others hold that human life starts with the development of conscious thought, which requires at least a developed nervous system. Anti-abortionists counter that any point at which an embryo or fetus comes to be considered a separate human life is arbitrary, and that future technology may make it possible for a human life to develop entirely outside a mother’s body. Who controls the fetus then: the father, the mother, the laboratory, or the government? This latter point has implications for both sides of the abortion issue, however, because once it is claimed that the beginning of human life is arbitrary, or a matter of convention, a new controversy is possible, namely, what the convention ought to be. The legal issues associated with the fetus become even more complex.


For those who believe that abortion should be legally permissible (regardless of its morality), one of the most common arguments is based on privacy rights. Abortion rights advocates hold that a woman’s right to determine what happens with her body (including whether to carry a pregnancy to term) is private, is not to be interfered with by outside influences, and outweighs any rights of her offspring. This point was given an interesting formulation by the philosopher Judith Jarvis Thomson: if one were to find oneself suddenly attached to another, adult human being, in such a position that, if one were to remove oneself, that other person would die, it is by no means clear that one would be obligated, morally or legally, to continue to be attached to that person. Against this argument the objection is frequently made that in about 97 percent of all cases (rape and incest account for about 3 percent) it was, after all, the mother who chose to “attach” herself to the embryo developing within her, and therefore the analogy is imperfect.


Another common argument is political pragmatism. Where abortion is illegal, some women nonetheless seek to end their pregnancies and will resort to unsafe methods that endanger their own lives—so-called “back-alley” abortions. Since modern medical testing makes it possible to estimate early in pregnancy whether a child might be born with severe defects, some abortion rights advocates also argue that requiring such children to be born would be an unnecessary burden on society as well as the parents. This, however, raises another contentious moral issue of “selective” abortion, where parents might choose to terminate a pregnancy based on desired traits of the child (such as sex) that can be determined before birth.


Some abortion rights advocates point to global population pressures which many hold responsible for endemic hunger, overcrowding, and environmental impacts; they believe that making abortion illegal would result in further such pressures and would exacerbate these problems. They also sometimes refer to the difficulties and often miseries experienced by the children and their mothers, when the mothers are often single and impoverished. An increase of children born into such situations could result in an increase in social ills, including increases in crime, broadening of the population base of those living below the poverty line, and ballooning of the state welfare rolls. Abortion opponents observe that a related rationale led China to adopt its “one child” policy, which has led not only to increased abortions and sterilizations, but also to live baby daughters being secretly abandoned in hopes that the next child will be a son. When the answer to social ills is to reduce the number of people, the argument goes, other even less palatable ways of reducing existing populations may begin to look attractive as well. Abortion opponents also point out that abortion proponents rarely suggest killing infants and toddlers as a solution to hunger, overcrowding, and environmental impacts.


On January 28, 1935 Iceland became the first country to legalize abortion.




Animal rights


Animal rights is the term commonly used for the view that animals are in every way persons: they are autonomous, possess the animating spirit, have unique personalities, are aware of self and surroundings, feel pleasure and pain, have a complex emotional nature, use communication, possess memory, are capable of learning, and so on, and are thus deserving of certain rights (mainly the right to live in a free and natural state of their own choosing) just as humans are. Animals are then worthy of our ethical consideration in how we humans interact with them.


While many advocates of animal rights do support rights for animals in the strict philosophical or legal sense, the term primarily is used for the notion that animals should not be killed for food, imprisoned, experimented upon, or abused for entertainment.


Animal rights in philosophy


Among the most famous philosophical proponents of animal rights are the philosophers Peter Singer and Tom Regan, who hold views that have much in common, but with different philosophical justifications (see below). Activists Karen Davis of United Poultry Concerns and Ingrid Newkirk of PETA have also eloquently defined fully-fledged political/personal philosophies of animal rights.


Although Singer is said to be one of the ideological founders of today’s animal rights movement, his philosophical approach to an animal’s moral status is not based on the concept of rights, but on the principle of equal consideration of interests. His seminal book, Animal Liberation, argues that humans grant moral consideration to other humans not on the basis of intelligence (as in the instance of children or the mentally disabled), nor on the ability to moralize (criminals and the insane), nor on any other attribute that is inherently human, but rather on their ability to experience suffering. As animals also experience suffering, he argues, excluding animals from such consideration is a form of discrimination he calls ‘speciesism’.


Tom Regan, on the other hand, claims that non-human animals that are so-called “subjects-of-a-life” are bearers of rights like humans, although not necessarily to the same degree. This means that animals in this class have “inherent value” as individuals, and cannot merely be considered as means to an end. This is also called a “direct duty” view on the moral status of non-human animals. According to Regan we should abolish the breeding of animals for food, animal experimentation and commercial hunting.


These two figures serve to illustrate the main differences within the animal rights movement. While Singer is primarily concerned with improving the treatment of animals and accepts that, at least in some hypothetical scenarios, animals could be legitimately used for further (human or non-human) ends, Regan relies on the strict “Kantian” idea that animals are persons and ought never to be sacrificed as mere means. Yet, despite these theoretical discrepancies, Singer and Regan mostly agree about what to do in practice: for instance, both hold that the adoption of a vegan diet and the abolition of nearly all forms of animal experimentation are ethically mandatory. Those who wish to set the “rights” and “welfarist” approaches against each other should remember the words of Noam Chomsky, who, quoting Dewey (in another context), said that


    [it is correct that] mere “attenuation of the shadow will not change the substance”, but it can create the basis for undermining the substance. It goes back to the Brazilian rural worker’s image [of] expanding the floor of the cage. Eventually you want to dismantle the cage, but expanding the floor of the cage is a step towards that.




Animal rights in law


Generally speaking, animals have been denied the same rights as human beings and corporations. However, animals are protected under the law in many jurisdictions. There are criminal laws against cruelty to animals, laws that regulate the keeping of animals in cities and on farms, and quarantine and inspection provisions governing the international transit of animals. Generally speaking, these laws are designed to protect animals, to protect human interaction with animals, or to regulate the use of animals as food or in food processing. In the common law it is possible to create a trust and have the trust empowered to see to the care of a particular animal after the death of the trust’s benefactor. Some eccentric wealthy individuals without children create such trusts in their wills; such trusts can be upheld by the courts if properly drafted and the testator was of sound mind. There are also many movements to give animals greater rights and protection under domestic and international law.






Bioethics


Bioethics is a field of study which concerns the relationship between biology, science, medicine and ethics, philosophy and theology. Bioethicists analyze which medical treatments or technological innovations are moral, when treatments may or may not be used, and so on.


Issues discussed in bioethics include whether or not any of the following are ever permissible, and if permissible, under what circumstances:

# Abortion

# Artificial insemination

# Donating one’s sperm or eggs

# Genetic engineering

# The obligation of the individual, community, state and nation to provide health care and/or health insurance.

# Homosexuality

# Human cloning

# When to use, and when to withhold, life-support

# When to use, and when to withhold, artificial hydration and artificial nutrition

# How to treat infertility

# Organ transplants and Organ donation

# Stem-cell cloning

# Suicide, assisted suicide and euthanasia

# The use of surrogate mothers

# Use of nanotechnology and cybernetics within humans

# The advent of artificial wombs

# The treatment of non-human animals


Bioethics may be a purely secular concern; in such cases bioethicists focus on using philosophy to help analyze these questions. A large number of Jewish and Christian religious scholars have become involved in the field and have developed rules and guidelines on how to deal with these issues from within the viewpoints of their respective faiths. A smaller number of religious scholars from other religions have recently become involved in this field as well.




Business ethics


Business ethics is the study of common ethical questions that people face in conducting business.


General introduction

[The introduction needs to discuss general ethical/philosophical discussions about the intersection of ethics and the accumulation and use of wealth. This introductory section should, at the very least, survey progress in this field from the Enlightenment to today.]


Religious views on business ethics


Jewish views


Judaism has an extensive literature and legal code on the accumulation and use of wealth. The basis of these laws is the Torah, where there are more rules about the kashrut (fitness) of one’s money than about the kashrut of one’s food. These laws are developed and expanded upon in the Mishnah and the Talmud.


Rabbi Yisrael Salanter (19th century), founder of the Mussar movement in Eastern Europe, taught that just as one checks carefully to make sure one’s food is kosher, so too should one check whether one’s money is earned in a kosher fashion. (Chofetz Chaim, Sfat Tamim, chapter 5)


Jewish references on this topic include:

# “The Challenge of Wealth”, Meir Tamari, Jason Aronson Inc., 1995

# “You Shall Strengthen Them: A Rabbinic Letter on the Poor”, Elliot N. Dorff with Lee Paskind (The Rabbinical Assembly)


Christian views


Christianity has an extensive literature and legal code on the accumulation and use of wealth. The basis of these laws is the Torah, and they are amplified in the New Testament.


Muslim views


Islam has an extensive literature and legal code on the accumulation and use of wealth. The basis of these laws is the Quran, and they are amplified in the Hadith.


Libertarian socialist view of property


Libertarian socialists, sometimes known as left-anarchists, hold that, as Proudhon said, “Property is theft” -- that is, in reference to the ownership of productive resources, property is not the right to use, but the right to keep others from using. Advocates of this philosophy therefore hold the “institution of property”, as they sometimes call it, to be immoral in itself, so the accumulation of wealth that includes productive resources, especially land, is also immoral. This means that no business can really be ethical, since the very foundation of business as we know it is private property.




Criminal justice


The study of criminal justice traditionally revolves around three main components of the criminal justice system:

# police

# courts

# corrections


Nowadays, it is sometimes argued that psychiatry is also a central part of the criminal justice system.


The pursuit of criminal justice is, like all forms of “justice”, “fairness” or “process”, essentially the pursuit of an ideal. Thus this field has many relations to anthropology, economics, history, law, political science, psychology, sociology, and theology. The establishment of criminal justice as an academic field is generally credited to August Vollmer, during the 1920s. By 1950, roughly 1,000 students were in the field; by 1975, roughly 100,000; by 1998, roughly 350,000. A notable center for criminal justice studies is the John Jay College of Criminal Justice.




One question presented by the idea of creating justice involves the rights of victims and the rights of the accused, and how these individual rights are related to one another and to social control. It is generally argued that victims’ and defendants’ rights are inversely related, and individual rights as a whole are likewise viewed as inversely related to social control.


Rights, of course, imply responsibilities or duties, and this in turn requires a great deal of consensus in the community regarding the appropriate definitions for many of these legal terms.




There are several basic theories regarding criminal justice and its relation to individual rights and social control:


# Restorative justice assumes that the victim, or their heirs or neighbors, can be in some way restored to a condition “just as good as” before the criminal incident. Substantially it builds on traditions in common law and tort law that require all who commit wrongs to be penalized. In recent times, the penalties that restorative justice advocates have favored include community service, restitution, and alternatives to imprisonment that keep the offender active in the community and re-socialize him into society. Some suggest that this is a weak way to punish criminals, who must be deterred; these critics are often proponents of


# Retributive justice, or the “eye for an eye” approach, which assumes that the victim or their heirs or neighbors have the right to do to the offender what was done to the victim. These ideas fuel support for capital punishment for murder and for amputation for theft (as in some versions of the sharia).


# Psychiatric imprisonment treats crime nominally as illness, and assumes that it can be treated by psychoanalysis, drugs, and other techniques associated with psychiatry and medicine, albeit in forcible confinement. It is most commonly associated with crime that appears to have neither emotional nor economic motives, nor even any clear benefit to the offender, but has idiosyncratic characteristics that make it hard for society to comprehend, and thus hard to trust the individual if released into society.


# Transformative justice does not assume that there is any reasonable comparison between the lives of victims or offenders before and after the incident. It discourages such comparisons and measurements, and emphasizes the trust of the society in each member, including trust in the offender not to re-offend, and in the victim (or heirs) not to avenge.


In addition, there are models of criminal justice systems which try to explain how these institutions achieve justice.


# The Consensus Model argues that the organizations of a criminal justice system do, or should, cooperate.


# The Conflict Model assumes that the organizations of a criminal justice system do, or should, compete.


The US Criminal Justice system


“There is a criminal justice process through which each offender passes from the police, to the courts, and back unto the streets. The inefficiency, fall-out, and failure of purpose during this process is notorious.” -- US National Commission on the Causes and Prevention of Violence

“Three strikes you’re out” laws are claimed to be cruel and unusual punishment by their opponents, who argue that the U.S. system is too dependent on retributive justice, and is failing socially as well as criminally.


“A society should not be judged on how it treats its outstanding citizens but by how it treats its criminals.” -- Fyodor Dostoyevsky




Environmental ethics


Environmental ethics concerns the ethical relationship between human beings and the environment in which we live. Human beings make many ethical decisions with respect to the environment. Consider clear-cutting issues in the Pacific Northwest: do we continue to decimate the forests for the sake of human consumption? Should we continue to make gas-guzzling SUVs, depleting fossil fuel resources, when we already have the technology to create zero-emission vehicles? These are just a few examples of questions in environmental ethics.






Feminism is a set of social theories and political practices that are critical of past and current social relations and primarily motivated and informed by the experience of women. Most generally, it involves a critique of gender inequality; more specifically, it involves the promotion of women’s rights and interests. Feminist theorists question such issues as the relationship between sex, sexuality, and power in social, political, and economic relationships. Feminist political activists advocate such issues as women’s suffrage, salary equivalency, and control over reproduction.


Feminism is not associated with any particular group, practice, or historical event. Its basis is the political awareness that there are uneven power structures between groups, along with the belief that something should be done about it. There are many forms of feminism.


Radical feminists consider patriarchy to be the root cause of the most serious social problems. Some radical feminists advocate separatism -- a separation of male and female in society and culture -- while others question not only the relationship between men and women, but the very meaning of “man” and “woman” as well; some argue that gender roles, gender identity, and sexuality are themselves social constructs (see also heteronormativity). For these feminists, feminism is a primary means to human liberation (i.e. the liberation of men as well as women, and men and women from other social problems).


Other feminists believe that there may be social problems separate from or prior to patriarchy (e.g., racism or class divisions); they see feminism as one movement of liberation among many, each with effects on each other.


Although many leaders of feminism have been women, not all women are feminists and not all feminists are women. Some feminists argue that men should not take positions of leadership in the movement, but most accept or seek the support of men. Compare pro-feminist, humanism, masculism.




Feminism is generally said to have begun in the 19th century as people increasingly adopted the perception that women are oppressed in a male-centered society (see patriarchy). The feminist movement is rooted in the West and especially in the reform movement of the 19th century. The organised movement is dated from the first women’s rights convention at Seneca Falls, New York, in 1848. Over a century and a half the movement has grown to include diverse perspectives on what constitutes discrimination against women. Early feminists are often called the first wave and feminists after about 1960 the second wave.


Attitude towards men and women


The earliest works on ‘the woman question’ criticised the restrictive role of women without necessarily claiming that women were disadvantaged or that men were to blame. Mary Wollstonecraft’s A Vindication of the Rights of Woman is one of the few works written before the 19th century that can unambiguously be called feminist. By modern standards her metaphor of women as nobility, the elite of society, coddled, fragile and in danger of intellectual and moral sloth, sounds like a masculist argument. Wollstonecraft believed that both sexes contributed to this situation and took it for granted that women had considerable power over men.


In the United States, this view had begun to evolve by the 1830s. Early feminists active in the abolition movement began to increasingly compare women’s situation with the plight of African American slaves. This new polemic squarely blamed men for all the restrictions of women’s role, and argued that the relationship between the sexes was one-sided, controlling and oppressive.


Most of the early women’s advocates were Christians, especially Quakers. Lucretia Mott was one of the first women to join Quaker men in the abolitionist movement, and through that work Quaker women like her learned how to organize and pull the levers of representative government. Starting in the mid-1830s, they decided to use those skills for women’s advocacy, and taught them to other women. As these new women’s advocates began to expand on ideas about men and women, religious beliefs were also used to support them. Sarah Grimké suggested in her Letters on the Equality of the Sexes (1837) that the curse placed upon Eve in the Garden of Eden was God’s prophecy of a period of universal oppression of women by men. Early feminists set about compiling lists of examples of women’s plight in foreign countries and in ancient times.


At the Seneca Falls convention in 1848, Elizabeth Cady Stanton modeled her declaration of sentiments on the United States Declaration of Independence. Men were said to be in the position of a tyrannical government over women. This separation of the sexes into two warring camps was to become increasingly popular in feminist thought, despite some reform-minded men, such as William Lloyd Garrison and Wendell Phillips, who supported the early women’s movement.


As the movement broadened to include many women from the temperance movement, like Susan B. Anthony, the slavery metaphor was joined by the image of the drunkard husband who batters his wife. The feminist prejudice that women were morally superior to men reflected the social attitudes of the day. It also led to a focus on women’s suffrage over more practical issues in the latter half of the 19th century. Feminists assumed that once women had the vote, they would have the political will to deal with any other issues.


Victoria Woodhull argued in the 1870s that the 14th amendment to the United States Constitution already guaranteed equality of voting rights to women. She anticipated the arguments of the United States Supreme Court a century later. But there was a strong movement opposed to suffrage, and it was delayed another 50 years, during which time most of the practical issues feminists campaigned for, including the 18th amendment’s prohibition on alcohol, had already been won.


Feminists of the second wave focused more on lifestyle and economic issues; “The personal is the political” became a catchphrase. Even as women’s status improved, the feminist rhetoric against men became more vitriolic. The dominant metaphor describing the relationship of men to women became rape; men raped women physically, economically and spiritually. Radical feminists argued that rape was the defining characteristic of men, and introduced a new phase of hostility to maleness. Lesbian separatists condemned heterosexual relations with men.


Radical feminists, particularly Catharine MacKinnon, began to dominate feminist jurisprudence. Whereas first wave feminism had concerned itself with challenging laws restricting women, the second wave tended to campaign for new laws that aimed to compensate women for societal discrimination. The idea of male privilege began to take on a legal status as judicial decisions echoed it, even in the United States Supreme Court.


One of the largest, earliest and most influential feminist organizations in the U.S., the National Organization for Women (NOW), illustrates the strong influence of radical feminism. Created in 1966 with Betty Friedan as president, the organization’s name was deliberately chosen to say “for” women, and not “of” women. By 1968, the New York chapter had lost many members who saw NOW as too mainstream. There was constant friction, most notably over the defense of Valerie Solanas. Solanas had shot Andy Warhol after authoring the SCUM Manifesto, a passionately anti-male tract calling for the extermination of men. Ti-Grace Atkinson, the New York chapter president of NOW, described her as “the first outstanding champion of women’s rights”. Another member, Florynce Kennedy, represented Solanas at her trial. Within a year of the split, the new group limited the number of members who lived with men to one-third of the group’s membership. By 1971, all married women were excluded from the breakaway group, and Atkinson had also defected.


Friedan denounced the lesbian radicals as the “lavender menace” and tried to distance NOW from lesbian activities and issues. The radicals accused her of homophobia. There was a constant fight for control of NOW, which Friedan eventually lost. By 1992, Olga Vives, chair of NOW’s national lesbian rights taskforce, estimated that 40 percent of NOW members were lesbians. However, NOW remains open to male members, in contrast to some groups.


Feminists disagree over the role of men as participants within the movement. Some female feminists (especially on college campuses) feel that it is inappropriate to call self-described feminist men “feminists” and instead prefer the title “pro-feminist men”; however, in most of American society this terminology has not caught on. Others view the imposition of a label (like “pro-feminist male”) on people who reject that label and prefer another (like “feminist”) as equivalent to the imposition of racial epithets that are not preferred by the groups so named.


Feminists are sometimes wary of the transsexual movement because it challenges the distinctions between men and women. Transsexual women are rejected by some feminists, who say that no one born male can truly understand the oppression women face. On the other hand, transsexual women are quick to retort that the discrimination they face for asserting their gender identity more than makes up for any they may have “missed out on” growing up.


Relation to other movements


Most feminists take a holistic approach to politics, believing the saying of Martin Luther King Jr., “A threat to justice anywhere is a threat to justice everywhere”. In that belief, feminists usually support other movements such as the civil rights movement and the gay rights movement. At the same time many black feminists such as bell hooks criticise the movement for being dominated by white women. Feminist claims about the disadvantages women face are often less relevant to the lives of black women. Many black feminist women prefer the term womanism for their views.




Feminism has effected many changes in society, including women’s suffrage; broad employment for women at more equitable wages (“equal pay for equal work”); the right to initiate divorce proceedings and “no fault” divorce; the right of women to control their own bodies and medical decisions, including obtaining birth control devices and safe abortions; and many others. Most feminists would argue, however, that there is still much to be done on these fronts. As society has become increasingly accepting of feminist principles, some of these are no longer seen as specifically feminist, because they have been adopted by all or most people. Some beliefs that were radical for their time are now mainstream political thought. Almost no one in Western societies today questions the right of women to vote or own land, a concept that seemed quite strange 200 years ago.


In some cases (notably equal pay for equal work) major advances have been made, but feminists still struggle to achieve their complete goals.


Feminists are often proponents of using non-sexist language, using “Ms.” to refer to both married and unmarried women, for example, or the ironic use of the term herstory instead of history. Feminists are also often proponents of using gender-inclusive language, such as “humanity” instead of “mankind”, or “he or she” in place of “he” where the gender is unknown. Feminists in most cases advance their desired use of language either to promote a respectful treatment of women or to affect the tone of political discourse, rather than in the belief that language directly affects perception of reality (compare Sapir-Whorf Hypothesis).


Impact on morals


Opponents of feminism claim that women’s quest for this kind of external power, as opposed to the internal power to affect other people’s ethics and values, has left a vacuum in the area of moral training, where women formerly held sway. Some feminists reply that the education, including the moral education, of children has never been, and should not be, seen as the exclusive responsibility of women. Such arguments are entangled within the larger disagreements of the Culture Wars, as well as within feminist (and anti-feminist) ideas regarding custodianship of societal morals and compassion.


Impact on religion


Feminism has had a great impact on many aspects of religion. In liberal branches of Protestant Christianity, women are now ordained as clergy. Within these Christian groups, women have gradually become equal to men by obtaining positions of power; their perspectives are now sought out in developing new statements of belief. In Reform, Conservative and Reconstructionist Judaism, women are now ordained as rabbis and cantors. Within these Jewish groups, women have likewise gradually become more nearly equal to men by obtaining positions of power, and their perspectives are now sought out in developing new statements of belief. These trends have been resisted within Islam; all the mainstream denominations of Islam forbid Muslim women from being recognized as religious clergy and scholars in the same way that Muslim men are accepted.


There is a separate article on God and gender; it discusses how monotheistic religions deal with God and gender, and how modern feminism has influenced the theology of many religions.


Perspective: the nature of the modern movement


Discrimination against women still exists in the USA and European nations, as well as worldwide. How much discrimination exists, and whether it constitutes a problem, is a matter of dispute.


There are many ideas within the movement regarding the severity of current problems, what the problems are, and how to confront them. Extremes on the one hand include some radical feminists such as Mary Daly who argues that the world would be better off with dramatically fewer men. There are also dissidents, such as Christina Hoff Sommers or Camille Paglia, who identify themselves as feminist but who accuse the movement of anti-male prejudices. Many feminists question the use of the “feminist” label as applying to these individuals.


Many feminists, however, also question the use of the term feminist to refer to any who espouse violence to any gender or who fail to recognize a fundamental equality between the sexes. Some feminists, like Katha Pollitt (see her book Reasonable Creatures) or Nadine Strossen (President of the ACLU and author of Defending Pornography [a treatise on freedom of speech]), consider feminism to be, solely, the view that “women are people.” Views that separate the sexes rather than unite them are considered by these people to be sexist rather than feminist.


There are also debates between Difference Feminists such as Carol Gilligan on the one hand, who believe that there are important differences between the sexes (which may or may not be inherent, but which cannot be ignored), and those who believe that there are no essential differences between the sexes, and that the roles observed in society are due to conditioning. Modern scientists sometimes disagree on whether inborn differences exist between men and women (other than physical differences such as anatomy, chromosomes, and hormones). Regardless of how many differences between the sexes are inherent or acquired, none of these differences is a basis for discrimination.


Notable feminists


Early pioneers

# Heinrich Cornelius Agrippa

# Christina of Sweden

# John Stuart Mill

# George Sand


First wave

# Susan B. Anthony

# Emma Goldman

# The Grimké sisters

# Lucretia Mott

# Elizabeth Cady Stanton

# Lucy Stone

# Mary Wollstonecraft

# Victoria Woodhull

# Virginia Woolf

# Frances Wright


Second wave

# Gloria Anzaldua

# Simone de Beauvoir

# Lorraine Bethel

# Susan Brownmiller

# Charlotte Bunch

# Mary Daly

# Angela Davis

# Andrea Dworkin

# Susan Faludi

# Shulamith Firestone

# Jo Freeman

# Marilyn French

# Betty Friedan

# Carol Gilligan

# Germaine Greer

# Donna Haraway

# Nancy Hartsock

# bell hooks

# Catharine MacKinnon

# Cherrie Moraga

# Robin Morgan

# Bernice Johnson Reagon

# Alice Schwarzer

# Gloria Steinem


Third Wave

# Rebecca Walker



Ecofeminists

# Charlotte Perkins Gilman

# Carol J. Adams

# Helene Aylon

# Judi Bari

# Bernadette Cozart

# Françoise d’Eaubonne

# Lois Marie Gibbs

# Susan Griffin

# Petra Kelly

# Winona LaDuke

# Wangari Maathai

# Vandana Shiva

# Charlene Spretnak

# Starhawk


Dissident feminists

# Donna LaFramboise

# Wendy McElroy

# Camille Paglia

# Christina Hoff Sommers

# Naomi Wolf


French Feminists

# Helene Cixous

# Luce Irigaray

# Julia Kristeva

# Monique Wittig


Lesbian Feminists

# Judith Butler

# Adrienne Rich

# Monique Wittig


Other Feminists

# Flora Brovina

# William Moulton Marston

# Katha Pollitt, author of Reasonable Creatures




Gay rights


The gay rights movement seeks acceptance for homosexuality and homosexual persons. The movement seeks various changes in public perception as well as in law to provide the same rights to homosexuals as are provided to heterosexuals; some of these changes are controversial.


Gay rights activists dismiss as irrelevant, misguided or malicious those views that portray homosexuality as a sin or a perversion. They do not believe that one’s sexual orientation is subject to human volition, referring to homosexuality and heterosexuality alike as unchangeable sexual orientations. Thus they are generally adamant in opposing reparative therapy, as well as religious ministries that claim to help volunteers “transition” from homosexuality to heterosexuality.


History and accomplishments


The gay rights movement arose in response to what many activists called discrimination and prejudice against homosexuals.


One of the first gay rights movements, and possibly the first, was centered around Magnus Hirschfeld in pre-World War II Berlin, Germany. The gay rights movement in Germany was almost completely obliterated by Adolf Hitler and the Nazi movement (see Homosexuals in Nazi Germany and the Night of the Long Knives).




In the United States, there were some initial steps toward a gay rights movement with the formation of the Mattachine Society and the publications of Phil Andros in the years immediately following World War II. Also during this period, Alfred Kinsey published Sexual Behavior in the Human Male, one of the first works to look scientifically at the subject of sexuality. Kinsey’s startling assertion, backed by a great deal of research, that approximately 10% of the population was homosexual, was in direct opposition to the prevailing beliefs of the time. Before its publication, homosexuality was generally not a topic of discussion, but afterwards it began to appear even in mainstream publications such as Time Magazine and Life Magazine.


Despite the entry of the subject into mainstream consciousness very little actual change in the laws or mores of society was seen until the 1960s, the time of the “Sexual Revolution”. This was a time of major social upheaval in many social areas, including views of sexuality.


These works, along with other changes in society such as huge migrations to the cities following the War, began to build gay communities in urban centers, and gay people began to have a sense of themselves as a minority group rather than just a few isolated “inverts”. While gay bars existed even in the early 20th century, they were very few.


With the rise of the gay community, gay bars became more and more common, and the sense of gay identity strengthened during the 1950s and 1960s.


Gay people became less and less accepting of their status as social outcasts and criminals. However, they had little or no political and social power until the late 1960s.


The Stonewall riots of 1969 are considered the starting point of the modern gay rights movement: all of these relatively underground changes reached a breaking point, and gay people began to organize on a large scale and demand legal and social recognition and equality.


The aftermath of the Stonewall riots saw the creation of the Gay Liberation Front (GLF) in New York City. The GLF’s ‘A Gay Manifesto’ set out the aims for the fledgling gay liberation movement. Chapters of the GLF would then spread to other countries. These groups would be the seeds for the various modern gay rights groups that campaign for equality in countries around the globe.


Today, defending homosexuals against homophobia and gay-bashing and other forms of discrimination is a major element of American gay rights, often portrayed as intrinsic to human rights. Indeed, one of the most influential gay rights groups in the U.S. is called the Human Rights Campaign. Other American gay rights organizations include the National Gay and Lesbian Task force (NGLTF), Parents and Friends of Lesbians and Gays (PFLAG) and the Gay and Lesbian Alliance Against Defamation (GLAAD).


The movement has been successful in some areas. Sodomy laws were repealed or overturned in most states of the United States in the late twentieth century, and all were ruled unconstitutional in the June 2003 ruling in Lawrence v. Texas. Many companies and local governments have clauses in their nondiscrimination policies that prohibit discrimination on the basis of sexual orientation. In some jurisdictions in the U.S., gay bashing is considered a hate crime and given a harsher penalty.


The U.S. state of Vermont, the Canadian provinces of Quebec and Nova Scotia, and some European countries provide the civil union as an alternative to marriage. The Netherlands and Belgium allow same-sex marriage; Canada recognizes common-law marriages between persons of the same sex, and a recent court ruling of the Ontario and Quebec Supreme Courts will require the federal government to grant full marriage rights to same-sex couples within two years. Gay people are now permitted to adopt in some locations, although there are fewer locations where they may adopt children jointly with their partners.


In the cultural arena, similar changes have taken place. Positive and realistic gay characters appear with increasing regularity in television programs and movies.


The main opponents of the advances of the gay rights movement in the US have, in general, been the Christian right and other social conservatives, often under the aegis of the Republican Party.


The United States has no federal law protecting against discrimination in employment by private sector employers based on sexual orientation. However, 14 states, the District of Columbia, and over 140 cities and counties have enacted such bans. As of July 2003, the states banning sexual orientation discrimination in private sector employment are California, Connecticut, Hawaii, Maryland, Massachusetts, Minnesota, Nevada, New Hampshire, New Jersey, New Mexico, New York, Rhode Island, Vermont and Wisconsin. Many of these laws also ban discrimination in other contexts, such as housing or public accommodations. A proposed bill to ban anti-gay employment discrimination nationwide, known as the Employment Non-Discrimination Act (ENDA), has been introduced in the United States Congress, but its prospects of passage are not believed to be good in the current Republican-controlled Congress.


On March 4, 1998 the Supreme Court of the United States ruled in the case Oncale v. Sundowner Offshore Services that federal laws banning on-the-job sexual harassment also applied when both parties are the same sex. The lower courts, however, have reached differing conclusions about whether this ruling applies to harassment motivated by antigay animus.




Just war


A just war is a war which is permissible according to a set of moral or legal rules. The rules applied may be ethical, religious, or formal (such as international law). The rules classically cover both the justification for going to war (jus ad bellum) and the conduct of the participants in the war (jus in bello).


Just war theory has ancient roots. Cicero discussed the idea and its applications. St. Augustine and later Thomas Aquinas, who asked “Whether it is always sinful to wage war?”, codified a set of rules for a just war which, with some modifications, still encompass the points commonly debated today. In modern language, these rules hold that to be just, a war must meet the following criteria before the use of force:

# War can only be waged for a just cause. Self-defense against an armed attack is one example of what is considered a just cause.

# War can only be waged under legitimate authority. The sovereign power of the state is usually considered to be legitimate authority.

# War can only be waged with the right intention. Correcting a suffered wrong is considered a right intention, while material gain is not. Thus a war that would normally be just for all other reasons would be made unjust by a bad intention.

# War can only be waged with a reasonable chance of success. It is considered unjust to meaninglessly waste human life and economic resources if defeat is unavoidable.

# War can only be waged as a last resort. War is not just until all realistic options which were likely to right the wrong have been pursued.


Once war has begun, just war theory also directs how combatants are to act:

# The force used must be proportional to the wrong endured, and to the possible good that may come.

# The acts of war should be directed towards the inflictors of the wrong, and not towards civilians caught in circumstances they did not create.

# Torture, either of combatants or of non-combatants is forbidden.

# Prisoners of war must be treated respectfully.




Medical ethics


Medical ethics is the discipline of evaluating the merits, risks, and social concerns of activities in the field of medicine.


Many methods have been proposed to help evaluate the ethics of a situation. These methods tend to introduce principles that should be weighed in the process of making a decision.


Six of the principles commonly included are:

# Beneficence - the practitioner should act in the best interest of the patient.

# Non-maleficence - from the Hippocratic Oath, “never do harm”.

# Autonomy - means that the patient should have the right to decide on their treatment.

# Justice - concerns the distribution of scarce health resources, and the decision of who gets what treatment.

# Dignity - the patient (and the person treating the patient) should be given the right to dignity.

# Truthfulness - the patient should not be lied to, and deserves to know the whole truth about their illness.


Principles like these are not designed to give answers as to how to handle a situation, and they will often overlap or contradict each other (for instance, autonomy and beneficence clash if a patient refuses a life-saving blood transfusion). Rather, they are intended as guidelines as to what needs to be considered for a particular issue or situation.


List of topics in medical ethics



issues around death and dying

# euthanasia, mercy killing, assisted suicide

# advance directives and the ethics of resuscitation and the withdrawal of life support




issues regarding reproductive medicine

# accessibility of abortion

# cloning

# genetic manipulation

# eugenics




issues regarding medical research

# patient’s rights

# animal research

# stem cell research




issues regarding distribution and utilization of research

# accessibility of health care

# basis of priority for organ transplantation




Utilitarian ethics


Utilitarian ethics was first formulated by Jeremy Bentham in 1781, and later championed and elaborated by the philosopher John Stuart Mill. This ethic states that the rightness of an action depends entirely on the value of its consequences, and that this usefulness can be rationally estimated (as opposed to, say, the intentions behind the action, its social acceptability, or historical or religious principles of ethics that might disagree). The value of those consequences is measured by the Greatest Happiness Principle, which states that each person’s happiness counts for exactly the same as every other’s, and that the value of an action is positive if and only if that action increases the total happiness in the world.
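The Greatest Happiness Principle as stated above lends itself to a toy calculation. The following is a minimal sketch, with hypothetical happiness numbers chosen purely for illustration:

```python
# Toy sketch of the Greatest Happiness Principle: each person's
# happiness counts equally, and an action has positive value if and
# only if it increases total happiness. All numbers are hypothetical.

def action_value(happiness_changes):
    """Sum of each affected person's (equally weighted) change in happiness."""
    return sum(happiness_changes)

# An action that costs the agent a little but helps two others more.
deltas = [-1, +2, +2]

print(action_value(deltas) > 0)  # True: total happiness rises, so the act counts as right
```

On this view the agent’s own loss carries no special weight; only the aggregate matters.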


The central idea of the utilitarian theory is that ethics is a reality which can be demonstrated. It can be defined without religious dogma or external regulation, starting from the elementary motivations of human nature alone: seeking happiness or pleasure, and escaping suffering. This principle is formulated in the opening sentence of Bentham’s book, Principles of Morals and Legislation (printed in 1781, but only published in 1789): “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne.”


A closely related and very controversial branch is Utilitarian Bioethics, which concludes from utilitarian ethics that killing unhappy people has a net positive value, and that therefore people with birth defects, people with terminal diseases, and depressed people are candidates for euthanasia. In some versions of Utilitarian Bioethics, the euthanasia need not be a matter of suicide at all -- even homicide in these cases is justified.




Utilitarian Bioethics


Utilitarian Bioethics is a very controversial branch of utilitarian ethics that espouses directing medical resources where they will contribute most to the total number of happy people in the world.


The upsides include easy medical decision-making by simple principles, and an increase in total number of happy people (and/or a decrease in unhappy ones).


The downsides include many justifications for physicians to kill patients, and the classification of many disabled or young or old people as “nonpersons”.




Divine command ethics


The Divine command theory (hereafter: DCT) is a theory of ethics. It states that the difference between right and wrong is simply that the former is that which has been commanded by God (or the gods), while the latter is that which has been prohibited by God.


Plato’s Euthyphro

The DCT was challenged by Plato in his dialogue Euthyphro, in which Socrates asks essentially this question: “Is an act good because God commands it, or does he command it because it is good?”


The question is such that either answer seems to lead to the rejection of the DCT. Firstly, if an act is good solely because God commands it, then that would mean that if murder, rape or theft were divinely commanded, they would be good. This seems to be absurd, although on some occasions it has indeed been seriously proposed.


This may provoke a reply to the effect that God would never command such things, because God would never command what was wrong. However, this argument cannot be made if the DCT is to be maintained - under the DCT, if God commanded something, it would not be wrong.


Secondly, if God commands an act because it is good, this again undermines the DCT, as it means that the act was good independently of God’s commanding it, and therefore being commanded by God is not the only reason the act is good. Rather, whatever reason God had for commanding it is the ultimate reason that it is good.


This line of attack on the DCT is well-enough known that it is referred to as the Euthyphro dilemma. Plato is generally believed to have refuted the DCT outright. However, it should be noted that certain other theories that link morality to God are more subtle and are not straightforwardly refuted in this manner.


Missing commands

Another problem for the DCT is what to do when there is no command that is relevant to a particular ethical dilemma.


For example, the following is an extract from a Christian website’s review of the movie Alive and its portrayal of cannibalism: “I do not know whether God condemns cannibalism or not ... Without being able to find His Word about whether cannibalism to survive is sinful or not, I cannot advise for or against it.”

This illustrates the problem: a follower of the DCT wishes to do God’s will, but if he only has access to specific commands, rather than general guiding principles, he will struggle when faced with ethical problems not covered by God’s commands. By contrast, other ethical systems (especially utilitarianism and Kantian deontology) lay down general principles for ethical action which (at least in theory) allow a person to deduce the right course of action for any situation.






Consequentialism


Consequentialism is the belief that what ultimately matters in evaluating actions or policies of action are the consequences that result from choosing one action or policy rather than the alternative.


Defining consequentialism


Consequentialism is sometimes conflated with utilitarianism, which is a mistake, as utilitarianism is but one kind of consequentialism. Even utilitarianism is a broad family of theories, including act utilitarianism and rule utilitarianism.


Consequences for whom


Kinds of consequentialism--in a broad sense of “consequentialism” that not all philosophers would countenance--can be distinguished by the subject who is supposed to enjoy the consequences. That is, one might ask, “Consequences for whom?” Egoism can be understood as individualist consequentialism, according to which the consequences for the agent herself are taken to matter most.


Utilitarianism, on the other hand, can be understood as collectivist consequentialism, according to which the consequences for some large group (humanity perhaps, or the sum of sentient beings) are of the greatest moment.


These views, while both consequentialist, can be in stark contrast. Individualist consequentialism may license actions which are good for the agent, but are deleterious to general welfare.


Collectivist consequentialism may license actions that are good for the collectivity but deadly for individuals. Some environmentalists seem to take the entire environment or ecosystem to be the relevant patient of consequences. The entire universe might be the subject, the best action being the one that brings the most value into the universe, whatever that value might be.


What kinds of consequences


Another way to divide consequentialism is by the kind of consequences that are taken to matter most.


The most popular form of consequentialism is hedonic consequentialism, according to which a good consequence is one that produces net pleasure, and the best consequence is one that produces more net pleasure than any of the alternatives.


Closely related is eudaimonic consequentialism, according to which full, flourishing happiness (which may or may not be the same as enjoying a great deal of pleasure) is the aim.


However, one might fix on some non-psychological good as the preferred consequence of actions.


For instance, certain ideologues seem to be consequentialists with regard to material equality or political liberty, regarding gains in these things as desirable in themselves, regardless of other consequences.


One might also adopt a beauty consequentialism, in which the ultimate aim is to produce beauty.


Similarly, one might find nothing of greater gravity than the production of knowledge.


One can also assemble packages of goods, all to be promoted equally. Since in this case there is no overarching consequence to aim for, conflicts between goods are to be adjudicated not by some ultimate consequentialist principle, but by the fine contextual discernment and intuition of the agent.


Consequentialism contrasted with other moral theories


Consequentialism is often contrasted with deontology. However, this may be mistaken. Many forms of consequentialism at bottom are deontological, demanding that we simply have a duty to produce a certain kind of consequence, whether or not that kind of consequence personally moves us. And even paradigmatic deontological theories, such as Kant’s, do not disregard consequences entirely. For instance, one might argue that for Kant, the more expression of rational nature, or the good will, the better. It is difficult to find a theory that posits an intrinsic good (such as the good will in Kant) in which it is not better to have more of the intrinsic good.


A more fundamental distinction is between theories that demand that agents act for ends in which they have some personal interest and motivation (actually or counterfactually) and theories that demand that agents act for ends perhaps disconnected from their interests and drives.


Consequentialism can also be contrasted with aretaic moral theories such as virtue ethics. Once again, one must be careful.


Consequentialist theories can consider character in two ways: (1) Effects on character are consequences.


(2) A consequentialist theory can ask the question, “What kind of virtues will produce the best consequences?” There can be a difference, however. Whereas consequentialist theories, by definition, posit that consequences of action should be the primary focus of moral theories, aretaic moral theory insists that character rather than the consequences of actions should be the focal point.




Virtue ethics


In philosophy, the phrase virtue ethics refers to ethical systems that focus primarily on what sort of person one should try to be. Thus, one of the aims of virtue ethics is to offer an account of the sort of characteristics a virtuous person has.


Virtue ethics contrasted with deontology and consequentialism


Virtue ethics is explicitly contrasted with the dominant method of doing ethics in philosophy, which focuses on actions - for example, both Kantian and utilitarian systems try to provide guiding principles for actions that allow a person to decide, in any given situation, how to behave.


Virtue ethics, by contrast, focuses on what makes a good person rather than what makes a good action. As such it is often associated with a teleological ethical system - one that seeks to define the proper telos (goal or end) of the human person.


Historical origins


Like much of the Western tradition, virtue ethics seems to have originated in ancient Greek philosophy. Discussion of what were known as the Four Cardinal Virtues - prudence, justice, fortitude and temperance - can be found in Plato’s Republic. The virtues also figure prominently in Aristotle’s moral theory. The Greek idea of the virtues was incorporated into Christian moral theology. During the scholastic period, the most comprehensive consideration of the virtues from a theological perspective was provided by St. Thomas Aquinas in his Summa Theologica and his Commentaries on the Nicomachean Ethics. The idea of virtue also plays a prominent role in the moral philosophy of David Hume.


Aristotle’s theory of the virtues


In the Nicomachean Ethics, Aristotle categorized the virtues as moral and intellectual. Aristotle identified two intellectual virtues, sophia (theoretical wisdom) and phronesis (practical wisdom). The moral virtues included courage and good temper. These two virtues illustrate Aristotle’s doctrine of the mean. Aristotle argued that each of the moral virtues was a mean between two corresponding vices. For example, the virtue of courage is a mean between the two vices of recklessness and cowardice. Courage illustrates another aspect of Aristotle’s account of the moral virtues. Courage is a mean with respect to the emotion of fear. Whereas cowardice is the disposition to feel too much fear for the situation, and recklessness the disposition to feel too little, courage is the mean between the two, i.e. the disposition to feel the amount of fear that is appropriate to the situation.


Virtue ethics outside the Western tradition


Non-Western moral and religious philosophies, such as Confucianism, also incorporate ideas that may appear similar to those developed by the ancient Greeks. However, Confucianism places a greater emphasis on defining virtue in terms of how people relate to each other. Chinese thought makes an explicit connection between virtue and statecraft--a characteristic that is shared by ancient Greek ethics.


Contemporary virtue ethics


Although some Enlightenment philosophers (e.g. Hume) continued to emphasize the virtues, with the ascendancy of utilitarianism and deontology, virtue ethics moved to the margins of Western philosophy. The contemporary revival of virtue ethics is frequently traced to the philosopher G.E.M. Anscombe’s essay “Modern Moral Philosophy” and to Philippa Foot, who published a collection of essays in 1978 entitled Virtues and Vices. In the 1980s, in works like After Virtue and Three Rival Versions of Moral Enquiry, the philosopher Alasdair MacIntyre made an effort to reconstruct a virtue-based ethics in dialogue with the problems of modern and post-modern thought. More recently, Rosalind Hursthouse has published On Virtue Ethics, and Roger Crisp and Michael Slote have edited a collection of important essays titled Virtue Ethics.




Social contract


Social contract is a phrase used in philosophy, political science, and sociology to denote a hypothetical agreement within a state regarding the rights and responsibility of the state and its citizens, or more generally a similar concord between a group and its members. All members within a society are assumed to agree to the terms of the social contract by their choice to stay within the society. The term “social contract” was coined by Jean-Jacques Rousseau, in his influential 1762 treatise The Social Contract.


Because social contract theory assumes the existence of a contract binding upon individuals who have not explicitly accepted it, some philosophers have found the theory flawed. The usual response to this objection is that many contracts, and acceptances of them, in a modern economy are also implicit: copyright, which exists in a work regardless of how it is marked; entry into private spaces where rules of access and exclusion are posted (but not explicitly accepted other than by actually entering the premises); and software and web site licenses. In the same way that implicit contracts in these circumstances standardize interactions to make them simpler and cheaper to support, enhancing the value of capital, the social contract can likewise increase social capital (a formal term for trust) -- and, what is more, this increase is measurable.


In the informal sense of the term, social contracts are informal and many are not well understood. In very dynamic or mobile societies the local consensus often shifts rapidly as people move in and out of groups. Conflict often arises out of different understandings of the local aggregate expectations, as well as disagreement regarding appropriate rules of behavior and interaction. This can be very stressful for group members until new informal agreements have been negotiated between interacting members of the group, community, or society.




Ethical relativism


Moral relativism is the viewpoint that moral standards are not absolute, but instead emerge from social customs and other sources. The philosophical stance can be traced back at least as far as the Greek scholar Protagoras, who stated that “Man is the measure of all things;” a modern interpretation of this statement might be that things exist only in the context of the people who observe them.


Moral relativism stands in contrast to moral absolutism, which sees morals as fixed by an absolute human nature (John Rawls), or external sources such as deities (many religions) or the universe itself (as in Objectivism). Those who believe in moral absolutes often are highly critical of moral relativism; some have been known to equate it with outright immorality or amorality.


Moral relativism has sometimes been placed in contrast to ethnocentrism. Essentially, the claim is that judging members of one society by the moral standards of another is a form of ethnocentrism; some moral relativists claim that people can only be judged by the mores of their own society. (This is analogous to the stance often taken by historians, in that historical figures cannot be judged by modern standards, but only in the context of their time.) Other moral relativists argue that, as moral codes differ among societies, one can only utilize the “common ground” to judge moral matters between societies.


One consequence of this viewpoint, also known as cultural relativism, is the principle that any judgment of society as a whole is invalid: individuals are judged against the standards of their society; societies themselves have no larger context in which judgement is even meaningful. This is a source of conflict between moral relativists and moral absolutists, since a moral absolutist would argue that society as a whole can be judged for its acceptance of “immoral” practices, such as slavery. Such judgments are inconsistent with relativism, although in practice relativists often make such judgments anyway (for example, a relativist is unlikely to defend slave-owners on relativistic principles).


Another viewpoint is the individual viewpoint, also known as emotivism, in which moral judgments are based on one’s own emotions and feelings.


The philosopher David Hume suggests principles similar to those of moral relativism in an appendix to his Enquiry Concerning the Principles of Morals (1751).




Situational ethics


Situational ethics is a term referring to ethical standards that are not fixed standards at all, but rather collections of potentially contradictory actions that have in common only that the ethical grounds for each act are based on the situation. This is similar to moral relativism, and contrary to moral universalism and moral absolutism.


The term situational ethics has been broadened to include numerous situations in which a code of ethics is designed to suit the needs of the situation.


The original situational ethics theory was developed by Joseph Fletcher, an Episcopalian priest, in the 1960s. Based on the concept that the only thing with intrinsic value is Love, Fletcher advocated a number of controversial courses of action.


Opponents say that in its purest sense, situational ethics is an oxymoron, containing the inherent contradiction that ethics (and, similarly, morality) is fundamental and cannot be based on practical, functional, or ethnocentric values, but must rest on something more persistent than one group’s assessment of its current situation.


Situated ethics is an entirely different theory, in which the actual physical, geographical, ecological, and infrastructural state one is in determines one’s actions or range of actions - green economics is at least partially based on this view. It too is criticized, for lacking a single geographically neutral point of view from which an authority could apply standards.




Ethical egoism


Ethical egoism is the view that one ought to do what is in one’s own self-interest, if necessary to the exclusion of what is (or seems to be) in other people’s interests. This can be contrasted with both altruism and psychological egoism. A philosophy holding that one should be honest, just, benevolent etc., because those virtues serve one’s self-interest is egoistic; one holding that one should practice those virtues for reasons other than self-interest is not egoistic.


There have been only a few ethical egoists among professional philosophers.


The consensus among professional philosophers seems to be that the view is implausible to begin with and that those who advocate it seriously (as “enlightened egoists”) do so only at the expense of redefining what self-interest amounts to (including, as it is made to do, the interests of some other people or all other people at some times).


Among philosophers of note who might be called ethical egoists are Friedrich Nietzsche, Max Stirner, and Robert Nozick. Some, such as Thomas Hobbes and David Gauthier, have thought that the conflicts which arise when people each pursue their own ends can be resolved best for each individual only if they all voluntarily forgo some of their aims--that is, egoism within a society is often best pursued by being (partly) altruistic.


As Nietzsche (in Beyond Good and Evil) and Alasdair MacIntyre (in After Virtue) are famous for pointing out, the ancient Greeks did not associate morality with altruism in the way that post-Christian Western civilization has done. Consequently, it is sometimes said that Greeks like Aristotle (for whom pride was a virtue) were ethical egoists. It would be more accurate, perhaps, to say that the issue of altruism versus egoism simply did not arise for them in the way that it does for us, or for some of us. Aristotle’s view, for example, is that we have duties to ourselves as well as to other people (e.g., friends) and to the polis as a whole.






Utilitarianism


Utilitarianism is both a metaethical doctrine and a theory in normative ethics.


Utilitarianism holds, in its simplest form, that “the good” is whatever yields the greatest “utility”.


Utility has been understood in different ways - happiness, pleasure, preference-satisfaction, etc. - but it is always a naturalistic conception of an individual’s good.


As a metaethical doctrine, it holds that “whatever yields the greatest utility” is the meaning of the word “good” (thus it is a naturalistic theory of metaethics); while as a normative theory, it merely holds that “whatever yields the greatest utility” is in fact good, whatever the meaning of the word “good” may be.


Utilitarianism was originally proposed in 18th century England by Jeremy Bentham and others, although it can be traced back to ancient Greek philosophers such as Epicurus.


As originally formulated, utilitarianism holds that the good is whatever brings the greatest happiness to the greatest number of people.


Both Bentham’s formulation and the philosophy of Epicurus can be considered different types of hedonism since they judge the rightness of actions from the happiness that they lead to, and happiness is identified with pleasure.


Note, however, that Bentham’s formulation is a selfless hedonism.


Where Epicurus recommended doing whatever made you happiest, Bentham would have you do what makes everyone happiest.


Utilitarianism is the classic consequentalist theory of ethics, and as such is opposed to non-consequentalist theories, such as deontology or virtue ethics.


Utilitarianism suffers from a number of problems, one of which is the difficulty of comparing utility among different people.


Many of the early utilitarians believed that happiness could somehow be measured quantitatively and compared between people through a felicific calculus, although no one has ever managed to construct one in practice.


It has been argued that the happiness of different people is incommensurable, and thus a felicific calculus is impossible.


Utilitarianism has been criticized for leading to a number of conclusions contrary to ‘common sense’ morality.


For example, if forced to choose between saving one’s child or saving two strangers, most people will choose to save their own child.


If you ignore your own future happiness or unhappiness as a parent, utilitarianism would support saving the strangers instead, since two people have more total potential for future happiness than one.


John Stuart Mill wrote a famous (and short) book called Utilitarianism.


Although Mill was a utilitarian, he argued that not all forms of happiness are of equal value, using his famous saying “It is better to be Socrates unsatisfied, than a pig satisfied.”

Daniel Dennett uses the example of Three Mile Island to explore the limits of utilitarianism for guiding decisions.


Was the near-meltdown that occurred at this nuclear power plant a good or a bad thing (according to utilitarianism)?


He points out that its long-term effects on nuclear policy would be considered beneficial by many (and at least it wasn’t a Chernobyl!).


His conclusion is that it is still too early (20 years after the event) for utilitarianism to weigh all the evidence and reach a conclusion.


To try to get around some of these cases, different varieties of utilitarianism have been proposed.


The traditional form of utilitarianism is act utilitarianism, which states that the best act is whichever act would yield the most utility.


A common alternative form is rule utilitarianism, which states that the best act is the one that would be enjoined by whichever rule would yield the most utility.


So, suppose that some situation allows Jill to either lie or be honest.


Suppose further that lying would yield the most utility of the available acts.


Suppose further still that Jill’s adhering to the policy of honesty would yield more utility than her adhering to any other available policy.


Then act utilitarianism would recommend lying and rule utilitarianism would recommend being honest.
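The Jill example can be sketched as a toy comparison. The utility figures below are invented assumptions, chosen only so that lying wins act-by-act while a policy of honesty wins overall:

```python
# Hypothetical utilities for the Jill example. The numbers are assumed:
# lying beats honesty as a single act, but a policy of honesty beats a
# policy of lying over the long run.
act_utility = {"lie": 10, "be_honest": 6}
policy_utility = {"always_lie": 40, "always_honest": 90}
act_enjoined_by = {"always_lie": "lie", "always_honest": "be_honest"}

def act_utilitarian_choice(acts):
    """Act utilitarianism: choose the single act yielding the most utility."""
    return max(acts, key=acts.get)

def rule_utilitarian_choice(policies, enjoined):
    """Rule utilitarianism: choose the best policy, then do the act it enjoins."""
    best_policy = max(policies, key=policies.get)
    return enjoined[best_policy]

print(act_utilitarian_choice(act_utility))                        # lie
print(rule_utilitarian_choice(policy_utility, act_enjoined_by))   # be_honest
```

The two theories diverge exactly when the best single act is not the act that the best general rule would enjoin.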


Utilitarianism influenced economics, in particular utility theory, where the concept of utility is also used, although with quite different effect.






Deontology


In moral philosophy, deontology is the view that morality forbids or requires certain actions in themselves, regardless of their consequences. For example, a deontological moral theory might hold that lying is wrong even if it produces good consequences. Historically, the most influential deontological theory of morality was developed by the German philosopher Immanuel Kant, who introduced the idea of the categorical imperative.


Contrasted to consequentialist and aretaic moral theories

Deontological theories of morality are frequently contrasted to consequentialist theories such as utilitarianism and Aretaic turn theories such as contemporary virtue ethics. While deontological moral theories typically hold that certain actions are either forbidden or wrong per se, consequentialist theories usually maintain that the rightness or wrongness of an action depends on the consequences of the act and hence on the circumstances in which it is performed.


Another way of distinguishing consequentialism and deontology is due to Shelly Kagan, who notes that, under deontology, individuals are bound by constraints (such as the requirement not to kill) but are also given options (such as the right not to give money to charity, if they do not wish to). Strict consequentialism recognises neither.


By way of contrast, aretaic theories often maintain that character as opposed to actions or their consequences should be the focal point of ethical theory.


Examples of deontological theories

The most famous deontological theory is that advanced by the German philosopher Immanuel Kant. Kant’s theory included the idea of a categorical imperative. One expression of the categorical imperative is: “Act so that the maxim [determining motive of the will] may be capable of becoming a universal law for all rational beings.” One example of a contemporary deontological moral theory is the contractualism developed by the American philosopher Thomas Scanlon.




Categorical Imperative


The philosophical concept of a categorical imperative is central to the moral philosophy of Immanuel Kant. In his philosophy, it denotes an absolute, unconditional requirement that allows no exceptions and is both required and justified as an end in itself, not as a means to some other end; it is the opposite of a hypothetical imperative. Most famously, Kant holds that all categorical imperatives can be derived from a single one, known as “the” Categorical Imperative; it is upon this Imperative that the discussion below focuses.



# In his Groundwork for the Metaphysics of Morals, Kant formulates the Categorical Imperative in three different ways: The first (Universal Law formulation): “Act only on that maxim through which you can at the same time will that it should become a universal law.”

# The second (Humanity or End in Itself formulation): “Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end.”

# The third (Kingdom of Ends formulation) combines the two: “All maxims as proceeding from our own [hypothetical] making of law ought to harmonise with a possible kingdom of ends.”



In Kant’s view, immorality occurs when the categorical imperative is not followed: when a person attempts to set a different standard for themselves than for the rest of humanity. In the Groundwork for the Metaphysics of Morals, once Kant has derived his categorical imperative he applies it to a number of examples. The second example, and probably the most analysed, is that of a false promise. Kant applies his imperative to a person who is short of money and intends to ask for a loan, promising to repay it but with no intention of doing so. When Kant applies the categorical imperative to this situation he discovers that it leads to a contradiction, for if breaking promises were to become universal then no person would ever agree to a promise and promises would disappear. Kant connects rationality with morality, and sees contradictory behaviour as immoral. Some critics have argued that Kant never asserts the connection between rationality and morality, but most dismiss this and point out that Kant clearly explains how morality must be based upon reason and not upon desires.


Rejection of Aristotle

Especially important to Kant were the works of Aristotle, which stand in direct opposition to much of what Kant argues. Before Kant, the most important moral theories were based upon Aristotle’s Nicomachean Ethics, which asserts that whatever leads to greater eudaimonia, or happiness, is what is moral. Kant, however, believes that any action taken for a deliberate end, whether it be happiness or some other goal, is morally neutral. Kant rests his rejection of the Aristotelian position on a number of points. He points out that all such imperatives are hypothetical: they are performed merely to attain a certain end. More importantly for Kant, this end is one dictated by desires, implying that the human will is no more than a facilitator of predetermined ends, limiting human freedom.


Kant also challenges the traditional viewpoint using his definition of duty as something that is impossible to learn from observation, and thus can only be deduced rationally. While it is possible to learn morally neutral imperatives of skill and prudence through observation, the categorical imperative that allows one to determine what actually is moral is known a priori and can only be properly determined through reason. Any imperative that is hypothetical is not based on reason: one performs a hypothetical imperative only because one wants or desires something, for a certain hoped-for end. Such imperatives are based on desire and hope, not upon the reason upon which ethics should be founded.


One important difference between Aristotle and Kant is that, for Aristotle, only the educated and leisured class that can indulge in self-examination can be moral. Kant’s philosophy is far more egalitarian: morality cannot be taught or learnt; it must arise spontaneously from within.


Equivalence of the formulas?

One problem is how Kant can possibly regard the first two formulas as equivalent. The answer may lie in the fundamental motivation for the Categorical Imperative - it is essentially based on a conception of fairness and universalizability. I must realise that there must be a consistent law for everyone - there cannot be one rule for me and another for everyone else.


For example, stealing would fail the test, since, to steal, you must deny the existence of property rights. But in so doing, you would be denying ownership of your own property, and the whole act of stealing would become (in Kant’s eyes) logically self-defeating.


This desire for consistency drives the first formulation - could I consistently will that stealing became a universal law? Of course not. The same goes for killing, lying, and so on - I cannot will that these be universally practiced, since if they were it would be harmful to me. Any such actions I carry out are thus inconsistent with reason. There are also more subtle effects: for example, I cannot decide never to help others out, as I must recognise that I am likely to need help from others at some point.


The second formulation can be seen as following from the first - if we are tempted to use other people merely as a means, we must realise that a universal law that allowed this would harm us. So the second formulation can be seen as just an example of a universal law willed under the first formulation.


More than this, however, the second formulation is again based on a concept of fairness - although I recognise the importance of myself as an end (that is, a person with hopes, desires, and so on), I must realise that what is special about me as a rational being also makes everyone else special, and they too must be seen as ends-in-themselves. So I must regard all persons as ends-in-themselves, rather than just as means to my ends, which are, after all, no more important than anyone else’s.


Thus, the first two formulations (and therefore the third, which combines the two) can arguably be seen as closely tied together.



Since the publication of the Groundwork, many new opponents have challenged the effectiveness of the imperative.


The Enquiring Murderer

One of the first major challenges to Kant’s reasoning came from the Swiss-French philosopher Benjamin Constant, who asserted that since truth-telling must be universal according to Kant’s theories, one must (if asked) tell a known murderer the location of his prey.


This challenge occurred while Kant was still alive, and his response was the now infamous essay On a Supposed Right to Tell Lies from Benevolent Motives. In this reply Kant argued that it is indeed one’s moral duty to be truthful to a murderer, a statement which seems to contradict Kant’s earlier assertions that his moral theory is the one that people practice subconsciously anyway. The Kant scholar H. J. Paton, usually sympathetic to Kant, has called this essay a temporary aberration and the petulant reply of a 73-year-old man.


It is worth noting that the example can be restated in such a way that “no comment” will confirm to the murderer the location of the victim, and so the only way to save the victim is to lie. In any event, saying “no comment” probably violates the Categorical Imperative anyway, since one cannot will that everyone would always respond to questions in such a manner. Thus, arguably, the Categorical Imperative requires truthfulness, rather than just prohibiting lies.


Kant believed the world would be shocked by how vast the uses and implications of his categorical imperative would be. After Constant, however, the uses were either dramatically lessened, or it had to be accepted that the moral system Kant was proposing would stand in opposition to the intuitions of the average person.


Universal oath-breaking

Another objection to Kant came from the Scottish philosopher Sir David Ross, who pointed out that a world where everyone could be depended upon always to break their promises would be just as effective and reliable as a world where everyone kept their promises, and that one could thus will that promise-breaking become universal.


The reply to this is that a world where one could always rely on everyone to break their promises would be the same as a world with promises but with a different language. The word ‘not’ in a phrase such as “I promise not to go to class today” would no longer mean a negation of a promise but would be an essential part of a promising phrase. That the language is different does not change the act of promising at all; promises would still exist and one would still expect them to be carried through.


Prudential vs. moral maxims

Another problem for Kant’s moral theory is that it has difficulty distinguishing a moral maxim from a merely prudential one. Lewis White Beck used the example of the maxim that the purchaser of every new book should write their name on the flyleaf. There is nothing in the categorical imperative to show that this is not a moral imperative, for it is easily something which one could wish to be universally applied, and this universal application would lead to no irrational contradictions. Of course this imperative is actually hypothetical, but the condition is merely omitted: one could say that you should always inscribe your name inside a new book, if you want it to be returned. The categorical imperative on its own cannot differentiate between a conditional maxim and one that is truly moral--this requires a longer and more complex method of reasoning.


One possible solution here would be to reformulate the categorical imperative so that it tells us what is morally permissible, rather than what is morally required.


False negatives

The categorical imperative sometimes seems to give false negatives in terms of what is permitted behaviour. For example, I cannot will that everyone in the world should eat in my favourite restaurant.


Perhaps this sort of problem can be avoided by being careful in the use of relative terms like my. In this case, it is possible to will that everyone should eat in their favourite restaurant.




Moral philosophy


Moral philosophy is the Western academic study of ethics. Within that tradition, one usually refers to “ethics, epistemology and metaphysics” as the three major branches of philosophy. Moral philosophy evolved from earlier ethical traditions and from theology, especially in Christian philosophy.


Western moral philosophy, like all conceptions of ethics, involves systematizing, defending, and recommending concepts of right and wrong behavior. It divides ethical theories into three general subject areas: meta-ethics, normative ethics, and applied ethics.


A weakness of this approach is that it largely ignores descriptive ethics (which are often required to deal with politics and civics and dispute resolution). Another weakness is that it assumes that systematization and defense can be made fair to all participants, say by adversarial process as used in the law (a major driver of the Western tradition), and that systemic bias in participation in ethical debates (like poverty) can be equalized by diligence and goodwill. A third and perhaps the most critical weakness is that it usually assumes that a single point of view can be established from which all arguments can be evaluated and from which decision-making can be made and accepted. Many thinkers challenge some or all of these assumptions. Some assert that, like Christianity, it evolved in the context of imperial courts and rule, where such a single claimed-neutral point of view was “in charge”. See ethics, colonialism, imperialism, cultural bias, gender bias, political virtues for ways of approaching this problem that include many that are defiant of Western traditions.


Within the Western tradition: Metaethics investigates where our ethical principles come from, and what they mean. Are they merely social inventions? Do they involve more than expressions of our individual emotions? Metaethical answers to these questions focus on the issues of universal truths, the will of God, the role of reason in ethical judgments, and the meaning of ethical terms themselves.


Normative ethics takes on a more practical task, which is to arrive at moral standards that regulate right and wrong conduct. This may involve articulating the good habits that we should acquire, the duties that we should follow, or the consequences of our behavior on others.


Finally, applied ethics involves examining specific controversial issues, such as abortion, infanticide, animal rights, environmental concerns, homosexuality, capital punishment, or nuclear war. By using the conceptual tools of metaethics and normative ethics, discussions in applied ethics try to resolve these controversial issues. Whether the parties trying to do so have any right to do so, is a difficult question, one not resolvable strictly within ethics itself.


Due to this and other factors, the lines of distinction between metaethics, normative ethics, and applied ethics are often blurry. For example, the issue of abortion is an applied ethical topic since it involves a specific type of controversial behavior. But it also depends on more general normative principles, such as the right of self-rule and the right to life, which are litmus tests for determining the morality of that procedure. The issue also rests on metaethical issues such as, “where do rights come from?” and “what kind of beings have rights?” Failure of Western philosophy to resolve such issues often leads people to disdain it entirely as an ethical guide. However, more recently, it has become popular to argue that politics is itself an expression of real people’s ethical choices, and to assign a much more pivotal role to description and education and practical dispute resolution, and the language used therein. For instance, the word “rights” implies that someone somewhere has “duties” to determine, uphold and protect them. There is some balance thus required to deal with questions of who has the rights, who has the duties, and how people are trained or supervised to actually perform the duties to provide the rights. A distinction between “positive” and “negative” rights is sometimes suggested, but, the so-called “negative” rights require provision for courts, judges, lawyers, training and infrastructure for same, prisons, and other things that presumably there is a “positive” right to. So this is an unsatisfactory view.


Modern moral philosophy is strongly focused on the constraints of living in a mass society, and on the tension between human nature and what are now known to be serious limits on human behaviour, especially in conflict. The potential for nuclear war is one such example, leading many ethicists to treat peace and de-escalation as their greatest concern.




Ethical code


Ethical codes are specialized and specific codes of ethics.


Such codes exist in most professions to guide interactions between specialists with advanced knowledge, e.g. doctors, lawyers, engineers, stonemasons, and the general public. They are often not part of any more general theory of ethics but accepted as pragmatic necessities.


As the general public is usually unable to judge the quality of specialist decisions, ethical codes are normally part of a profession’s own self-regulation. Public oversight is typically confined to ensuring that such an internally consistent code exists, and to imposing stricter rules when a profession, e.g. accounting, proves deficient in the extreme.


Ethical codes are distinct from moral codes that apply to the education and religion of a whole larger society. Not only are they more specialized, they are more internally consistent, and typically can be applied without a great deal of interpretation by an ordinary practitioner of the speciality.






Casuistry


Casuistry is any attempt to determine the correct response to a moral problem, often a moral dilemma, by drawing conclusions based on parallels with agreed responses to pure cases, also called paradigms. Another common everyday meaning is “complex reasoning to justify moral laxity.” Casuistry is a branch of applied ethics. It is the standard form of reasoning applied in common law.


Casuistry takes a relentlessly practical approach to morality. Rather than applying theories, it examines cases. By drawing parallels between paradigms, so-called “pure cases,” and the case at hand, a casuist tries to determine the correct response (not merely an evaluation) to a particular case. The selection of a paradigm case is justified by warrants.


This form of reasoning is the basis of case law in common law.


Casuistry is successful because it does not require participants in the evaluation to agree about ethical theories or evaluations before making policy. Instead, they can agree that certain paradigms should be treated in certain ways, and then agree on the similarities, the so-called warrants between a paradigm and the case at hand.


Since most people, and most cultures, substantially agree about most pure ethical situations, casuistry often creates ethical arguments that can persuade people of different ethnic, religious and philosophical beliefs to treat particular cases in particular ways. For this reason, casuistry is the form of reasoning used in English law.


Casuistry as a method was popular among Catholic thinkers in the early modern period, especially the Jesuits. However, it was later attacked (e.g. by Pascal) as the mere use of complex reasoning to justify moral laxity; hence the everyday use of the term in that sense.


Casuists have often been mistrusted as too self-serving, and their reasoning thought too inaccessible. The reasoning is often inaccessible because successful casuistry requires a large amount of knowledge about paradigms, and how parallels can be drawn from those paradigms to real life situations. In modern times, there is a similar tremendous resentment against lawyers and law.


In modern times, casuistry has been successfully applied to law, bioethics and business ethics, and its reputation is being rehabilitated.






Epistemology


Epistemology, the branch of philosophy that deals with the nature of knowledge and truth, encompasses the study of the origin, nature, and limits of human knowledge. People approach this task in various ways; the following categories originally reflected divisions among schools of philosophy in the seventeenth and eighteenth centuries, but may prove useful in categorizing certain approximate trends throughout the history of epistemology:


(1) Rationalists (see rationalism) believe there are innate ideas that are not found in experience. These ideas exist independently of any experience we may have. They may in some way derive from the structure of the human mind, or they may exist independently of the mind. If they exist independently, they may be understood by a human mind once it reaches a necessary degree of sophistication.


(2) Empiricists (see empiricism, scientific method, philosophy of science, naive empiricism) deny that there are concepts that exist prior to experience. For them, all knowledge is a product of human learning, based on human perception. Perception, however, may cause concern, since illusions, misunderstandings, and hallucinations prove that perception does not always depict the world as it really is.


Some say the existence of mathematical theorems poses a problem for empiricists; their truths certainly do not depend on experience, and they can be known prior to experience. Some empiricists reply that all mathematical theorems are empty of cognitive content, as they only express the relationship of concepts to one another. Rationalists would hold that such relationships are indeed a form of cognitive content.


(3) The German philosopher Immanuel Kant is widely understood as having worked out a synthesis between these views. In Kant’s view people certainly do have knowledge that is prior to experience and that is not devoid of cognitive significance: the principle of causality, for example. He held that there are a priori synthetic concepts.


People in all schools of thought agree that we have the capacity to think of questions that no possible appeal to experience could answer. For instance: Is there an end to time? Is there a God? Is the God of the philosophers the same as the Biblical God? Is there a reality beyond that which we can sense? Such questions are termed transcendental, as they seem to go beyond the limits of rational inquiry. In the 20th century logical positivists have declared such questions to be totally devoid of cognitive significance. Others disagree, and hold that only some metaphysical claims are devoid of cognitive significance, and that others may not be.


No consensus exists as to which epistemology will prove the most productive in allowing human beings to have the most accurate understanding of the world. All people use an epistemology, even if unconsciously: thinking beings cannot understand and analyze ideas without first having a system for accepting and analyzing information. All people - even children - possess rudimentary and undeveloped epistemologies. However, those who study some philosophy and logic can begin to recognize how their own epistemologies work, and only then can they choose to change their epistemology, if they so wish.


Our analysis then will be dependent on the system we used to begin with.


One might wonder: What do I have to do, to be sure that I do have the truth? How can I be sure that my beliefs are true? Is there some sort of guarantee available to me -- some sort of criterion I might use, in order to decide, as rationally and as carefully as I possibly could, that indeed what I believe is true?


Suppose you thought your belief had been arrived at rationally. You used logic, you based your belief on observation and experiment, you conscientiously answered objections, and so forth. So you conclude that your belief is rational. If so, then your belief has at least some claim to be true. Rationality provides an indicator of truth: if your belief is rational, then it is at least probably true. At the very least, the rationality of a belief gives reason to think the belief is true.


Now, there are a number of features of beliefs, such as rationality, justification, and probability, that are indicators of truth. So let’s define a general term: A feature of belief is an epistemic feature if it is at least some indication that the belief is true.


Many of our beliefs do have lots of positive epistemic features; many of our beliefs are quite rational, quite justified, very probably true, highly warranted, and so on. But most of us, at least in some moments, don’t want to rest content with just being rational. We don’t want to have a rational belief that is, unfortunately, false. Because that can occur, right? I can be very conscientious, careful, and logical in forming a belief, and so be rational in holding the belief; but it still might be false. So rationality isn’t the ultimate ambition that we have for our beliefs.


Our ultimate ambition for our beliefs is knowledge. Because if I do know something, then not only am I justified, or rational, in a belief; because I have knowledge, I have the truth. So naturally, when we are thinking about the epistemic features of our beliefs, the big question is this: When do I have knowledge? When can I say that I have it? As I’m sure you realize, some people claim that we can’t have knowledge; such people are called skeptics. More on that, of course, later.


Now I can describe to you the field of epistemology, which is also called the theory of knowledge. Here is a definition. Epistemology includes the study of:

# what the epistemic features of belief, such as justification and rationality, each are (e.g., what justified belief is)

# the origin or sources of such features (and thus the sources of knowledge)

# what knowledge is, i.e., what epistemic features would make a true belief knowledge

# whether it is possible to have knowledge.


So first, epistemologists spend a great deal of time concerning themselves with various epistemic features of belief, such as justification and rationality. And they write long articles and books trying to say just when beliefs are justified, or rational.


A second, related concern is where such epistemic features ultimately come from. If I say, for example, that my belief that Paris is the capital of France is justified, I can ask: Where did the justification for my belief come from? Probably at some point some reliable source told me that Paris is the capital of France; and that was enough to make me justified in adopting the belief. OK, then one, but only one, source of justification would be testimony, which is just a fancy word for what other people tell me. Another source of justification would be sense-perception. So epistemology asks: What are the ultimate sources of justification, rationality, or other epistemic features of belief? And that allows us to answer a further question: What are the ultimate sources of knowledge?


Which brings us to the third topic studied by epistemologists, namely, what knowledge is. The question here isn’t what we can know, or even what we do know. The question is: What would knowledge be, if we had it? A belief has to pass some sort of muster if it’s to count as knowledge. So what features would a belief have to have, in order to be an actual piece of knowledge -- not just something that pretends to be knowledge, but which is actually knowledge?


Then, fourth, there is one of the more difficult topics of philosophy -- trying to answer, or otherwise deal with, the challenge that we cannot have knowledge. A number of philosophers -- not too many, but some -- have said that we cannot have knowledge. A lot of philosophers have said that it’s very difficult to obtain knowledge; but they don’t deny that we have it, or that we can have it. Not so many philosophers, however, have gone so far as to say that we have no knowledge at all, or (to say something even stronger) that it is impossible to have knowledge.


Another contemporary approach to epistemology divides the approaches into two categories: foundationalism and coherentism.


Foundationalism holds that there are basic beliefs in which you can be certain and that you can be similarly confident in other beliefs derived rigorously from these. The most famous example of this is Descartes’ dictum: “I think therefore I am”, by which he meant that it is impossible to doubt your own existence. Others have responded that your observation of your own mental activity is not fundamentally different or more reliable than other observations and does not necessarily imply a thinker. The difficulty of foundationalism is that no set of basic beliefs proposed for it is uncontroversial.


Coherentism holds that you are more justified in beliefs if they form a coherent whole with your other beliefs. A common, cheeky, riposte to this is called the “drunken sailors” argument, which points out that two drunken sailors holding each other up may still not be on solid ground. Stated more formally: a set of beliefs can be internally consistent but still not reflect the actual world.


Recently, Susan Haack has attempted to fuse these two approaches into her doctrine of foundherentism, which accrues degrees of relative confidence to beliefs by mediating between the two approaches. She covers this in her book Evidence and Inquiry: Towards Reconstruction in Epistemology.


See also Self-evidence; theory of justification; the regress argument in epistemology; a priori and a posteriori knowledge; knowledge; scepticism; Common sense and the Diallelus; social epistemology






Etiquette


Etiquette is the code of unwritten expectations (which evolve into written rules) that governs social behavior. It usually reflects a theory of conduct in which society or tradition has invested heavily. Like “culture”, it is a word that has gradually grown plural, especially in a multi-ethnic society with many clashing expectations. Thus, it is now possible to refer to “an etiquette” or “a culture”, realizing that these may not be universal.


Etiquette fundamentally concerns the ways in which people interact with each other, and show their respect for other people by conforming to the norms of society. Etiquette instructs us to: greet friends and acquaintances with warmth and respect, refrain from insults and prying, offer hospitality equally and generously to our guests, wear clothing suited to the occasion, contribute to conversations without dominating them, offer a chair or a helping arm to those who need assistance, eat neatly and quietly, avoid disturbing others with loud music or unnecessary noise, follow the established rules of a club or legislature upon becoming a member, arrive promptly when expected, comfort the bereaved, and respond to invitations promptly.


Violations of etiquette, if severe, can cause hurt feelings, misunderstandings, or real grief and pain, and can even escalate into murderous rage. One can reasonably view etiquette as the minimal politics required to avoid major conflict in polite society, and as such, an important aspect of applied ethics. An etiquette can be considered to be an ethical code in itself.


The term etiquette, being of French origin and arising from practices at the court of Louis XIV, carries a strong whiff of anachronism, and it is common to disparage the entire field by setting it up as a straw man concerned only with which fork to use. Rules whose violation harms nobody are considered by some to be unnecessary restrictions of freedom; for instance, wearing pajamas to a wedding in a cathedral may be an expression of the guest’s freedom, though it may cause the bride and groom to wonder how the guest in pajamas feels about them and their wedding. Others feel that a single, basic code shared by all makes life simpler and more pleasant by removing many chances for misunderstandings.


The term is sometimes used synonymously with manners, though some writers make the distinction between manners to mean rules which involve justifiable respect shown to others, and etiquette to mean rules which are based purely on tradition with little obvious purpose.


Etiquette is dependent on culture; what is excellent etiquette in one society may shock in another. It is a topic that has occupied writers and thinkers in all sophisticated societies for millennia, beginning with a behavior code by Ptahhotep, a vizier in ancient Egypt’s Old Kingdom during the reign of the Fifth Dynasty king Djedkare Isesi (ca. 2414-2375 BCE).


All known literate civilizations, including ancient Greece and Rome, developed rules for proper social conduct. Confucius included rules for eating and speaking along with his more philosophical sayings. Louis XIV himself wrote a book on court ceremony, and Benjamin Franklin and George Washington wrote codes of conduct for young gentlemen. The immense popularity of advice columns and books by Miss Manners shows the currency of this topic.


The rise of the Internet has necessitated the adaptation of existing rules of conduct to create Netiquette, which governs the drafting of email, rules for participating in online forums, and so on.






Goodness


A definition of goodness would be valuable because it might allow one to construct a good life or society by reliable processes of deduction, elaboration or prioritization.


One could answer the ancient question, “How then should we live?” Sadly, known definitions are meaningless, circular, or long lists of cultural values.


Summary of the problems:


Attempted definitions of goodness fail in known ways. Definitions generally either describe traits or properties of a real object or set of objects, or divide a concept into other, subsidiary concepts. Both approaches have failed to define goodness.


As a result, philosophers have tried desperate expedients to capture some of the value that such a definition would provide.


Problems with definitions using traits or properties: most philosophers find that the traits or properties that would justify calling a thing good are different for different categories of judgment.


For example, the criteria by which we judge art to be good are different from those by which we judge people to be good.


A famous early discussion of this problem is by Aristotle, in his Nicomachean Ethics (at 1096a5).


Many judgments of goodness translate to prices, but this appears to be a summary or effect of judgment, not a cause. For example, a piece of art found in an attic may be sold for the price of a meal.


A collector may then recognize it as a lost work of a famous artist, and sell it for more than the price of a house. The price changed because the collector had better judgment than the owner who kept it in an attic. If goodness were a common trait or property, we should be able to abstract it, but no one has succeeded.


Thus goodness is widely believed not to be a property of any natural thing or state of affairs. Of course, this belief is open to trivial skepticism: Perhaps philosophers just haven’t stumbled across the right definition. However, after several thousand years, the prospect is bleak.


One wonders where such an immaterial trait as goodness could reside. An obvious answer is “Inside people.” Some philosophers go so far as to say that if some state of affairs does not tend to arouse a desirable subjective state in self-aware beings, then it cannot be good.


Although a definition of external “objective” goodness could be used to construct rational morals and legislation, a subjective definition of goodness remains useful to help one live a good life.




Epicurus made the first known attempt to define goodness as subjective pleasure, and its opposite as pain; this position is called Hedonism (see Diogenes Laertius, “Lives of Eminent Philosophers”). However, simple hedonism is rejected even by most hedonists, because there seem to be pleasures that are bad (e.g. eating too much) and pains that are good (e.g. going to the dentist).


There are some problems with identifying goodness as pleasure. It’s strange to say that carrying out one’s duty (which is obviously good) has anything to do with pleasure. Also, the sense of achievement following completion of one’s work is rarely considered pleasure, although it is clearly good to finish one’s work.


Aristotle even distinguished genuine happiness from amusement, and virtuous from base pleasures.


This makes some sense, because useful work (like Wikipedia) is clearly better than mere amusement (such as a chat room). The usual fix for Hedonism is to consider consequences as well as pleasure and pain. For example, going to the dentist involves a small amount of pain now but avoids a great deal more later. However, even consequentialism is strained when considering duty. Happiness or pleasure can often be recognized directly, which solves many problems for Hedonism.


But there are more problems with Hedonism. Known definitions of happiness or pleasure face objections similar to those facing definitions of goodness: the situations producing happiness or pleasure differ across different categories of action. Nor has happiness or pleasure been conceptually divided (analyzed) in a way that permits deductive choices among real-world alternatives.


Problems with definitions dividing the concept of goodness: The other approach is to divide the concept of goodness into smaller, more understandable concepts. It has been thought that if some conception of goodness were divided, or causally regressed, far enough, the process would eventually come to a logical stopping place, an “ultimate good.” However, all known forms of such regressions appear to be either circular or open to skepticism.


Attempts to translate, divide or causally regress the concept of goodness usually fail in a particular way. Every such attempt seems to end up with one or more subconcepts described with the word “goodness” or related words like “pleasure,” “dutiful,” “praiseworthy,” or “virtuous.” Such definitions appear to be circular, and therefore are believed invalid. The circularity of causal regression hits scientific definitions of goodness especially hard, because it seems to indicate that science cannot study goodness. Some philosophers have gone so far as to say that science can only study “what is,” not “what should be.” The clearest proponent of this viewpoint was David Hume, who famously argued in “A Treatise of Human Nature” that there is no logical way to move from statements about facts to statements about what ought to be, and who observed that it is not contrary to reason to prefer the destruction of the whole world to a small injury to one’s finger.


G.E. Moore described this circularity clearly in “Principia Ethica” and called it “the naturalistic fallacy,” because he believed that people had a sort of nonphysical intuition that could sense goodness. Few people believe in this intuition, but the term has stuck, because goodness is so widely thought to be nonphysical.


Many philosophers tried to end the regressions by applying an auxiliary evaluation that brings the general regression to a stopping place. This auxiliary evaluation is itself often open to skepticism. For example, Aristotle considered theoretical study to be “the supreme element of happiness,” because it “ruled all others” (Nicomachean Ethics, 1177a15). In this case, supremacy was the auxiliary evaluation that could be doubted.


Thomas Aquinas asserted that everything sensed is an effect with an earlier cause, that each more immediate (proximal) cause is less diluted in goodness, and that therefore the first cause must be perfectly good. In this case, one might doubt the concept of dilution as an inaccurate metaphor, or doubt that the dilution necessarily scales back to perfection (perhaps the first cause was merely pretty good, instead of perfect). One might also doubt that the causal regression ends: it might be circular, for instance.




Philosophers have made progress against the circularity. Aristotle pointed out that some goods appear to have value in themselves, while others are useful only to get other goods. He called these “intrinsic goods” and “instrumental goods.” For example, if health is an intrinsic good, then surgery and drugs are instrumental goods.


Another improvement is to distinguish contributory goods. These have the same qualities as the good thing, but need some emergent property of a whole state of affairs in order to be good. For example, salt is a food, but is usually good only as part of a prepared meal. Other examples come from music and language.


Most philosophers who think goods have to create desirable mental states also say that goods are experiences of self-aware beings. These philosophers often distinguish the experience, which they call an intrinsic good, from the things that seem to cause the experience, which they call “inherent goods.” Kant showed that many practical goods are good only in states of affairs described by a sentence containing an “if” clause; further, the “if” clause often describes the category in which the judgment is made (art, science, etc.). Kant described these as “hypothetical goods,” and tried to find a “categorical” good that would operate across all categories of judgment.


An interesting result of Kant’s search for a categorical good was a moral command, the “categorical imperative”: “Act according to those maxims that you could will to be universal law.” From this, and a few other axioms, Kant developed a moral system that would apply to any “praiseworthy person.” (See “Foundations of the Metaphysics of Morals,” third section, [446]-[447].) It’s clear that a general definition of goodness must categorically define goods that are ultimate, intrinsic, non-contributory, and at least inherent.


Desperate practical expedients:


The classic expedient is simply to make a list of all the things traditionally held to be good. This works, but it cannot help find or evaluate new things, and it cannot answer skeptics. Such lists are likely to be both incomplete and to contain items of no real value. One reasons about such lists using casuistry.


Some philosophers have concentrated on prioritizing real things or states of affairs using simplified or undefined concepts of goodness. Jeremy Bentham’s book “The Principles of Morals and Legislation” prioritized goods by considering pleasure, pain and consequences. This theory had a wide effect on public affairs, up to and including the present day. A similar system was later named Utilitarianism by John Stuart Mill.




Utilitarianism succeeds in many cases. However Utilitarianism has some questionable areas of judgment. For example, it considers all goods as interchangeable. If feeding a starving child would cause the child to feel sick, and not permanently improve his situation, a Utilitarian would prefer to spend the money on a car for a rich man. Unhappily, the utilitarian argument to permit abortions is of the same form as this questionable type, though with changed quantities.


To see this, substitute “unconscious fetus, destined for loveless poverty” for “starving, hopeless child” and “improved woman’s income” for “rich man’s car.” To a humanist, who values human life above all else, the form of the judgment remains invalid, while a utilitarian might accept it, based on the changed magnitudes of value.


In another widely questioned set of judgments, Utilitarians weigh the pleasures and pains of men and animals in the same scale. See PETA, an animal rights organization based firmly on Utilitarian ideals.


John Rawls’ book “A Theory of Justice” prioritized social arrangements and goods based on their contribution to justice. Rawls defined justice as fairness, especially in distributing social goods, defined fairness in terms of procedures, and attempted to prove that just institutions and lives are good, if rational individuals’ goods are considered fairly. Rawls’ crucial invention was “the original position,” a procedure in which one tries to make objective moral decisions by refusing to let personal facts about oneself enter one’s moral calculations.


A problem with both Kant’s and Rawls’ approach is that goodness appears to be both prior to and essential to fairness, and different for different beings. Procedurally fair processes of the type used by Kant and Rawls may therefore reduce the totality of goodness, and thereby be unfair. For example, if two people are found to own an orange, the standard fair procedure is to cut it in two, and give half to each. However, if one wants to eat it, while the other wants the rind to flavor a cake, cutting it in two is clearly less good than giving the peel to the baker, and feeding the meat to the eater.
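The orange example can be made concrete with a little arithmetic. The following Python sketch uses invented utility numbers, chosen only to illustrate why the procedurally fair split scores worse than the allocation that matches each part of the orange to the person who values it:

```python
# Compare two ways of dividing an orange between an eater and a baker.
# All utility numbers are hypothetical, chosen only for illustration.

def total_utility(allocation, utilities):
    """Sum each person's utility for the share they receive."""
    return sum(utilities[person][share] for person, share in allocation.items())

# How much each person values each possible share (hypothetical values).
utilities = {
    "eater": {"half orange": 5, "whole flesh": 10, "whole rind": 0},
    "baker": {"half orange": 5, "whole rind": 10, "whole flesh": 0},
}

# Procedurally "fair" split: cut the orange in two.
split = {"eater": "half orange", "baker": "half orange"}

# Goodness-aware split: flesh to the eater, rind to the baker.
targeted = {"eater": "whole flesh", "baker": "whole rind"}

print(total_utility(split, utilities))     # 10
print(total_utility(targeted, utilities))  # 20
```

On these numbers, the equal cut yields less total good than the targeted allocation, which is exactly the point of the objection.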


Many people judge that if both procedures are known, using the first procedurally-fair procedure to mediate between a baker and an eater is unfair because it is not as good.


Applying procedural fairness to an entire society therefore seems certain to create recognizable inefficiencies, and therefore be unfair, and therefore, by the equivalence of justice with fairness, unjust. This strikes at the very foundation of Kantian ethics, because it shows that hypothetical goods can be better than categorical goods, and therefore be more desirable, and even more just. Whatever goodness is, it is important.






Trust refers to willing acceptance of one person’s power to affect another. It is discussed more formally in the articles on social capital, profession and authority.


There is much dispute on whether degrees of trust can be measured, or whether it simply exists until there is doubt. See ethics and meta-ethics for more on these abstract questions.


There is also much dispute about how lying, blaming and hypocrisy interact with trust.


In common law legal systems, a trust is an arrangement in which a person or persons (the trustees) hold legal control over certain property (the trust property) but are bound to exercise that control for the benefit of other persons (the beneficiaries).


The trustee can be either a natural or a legal person. A trust will not fail solely for want of a trustee; if there is no trustee, whoever has title to the trust property will be considered trustee. The trust property can be any form of property, be it real or personal, tangible or intangible. The beneficiary can be either a single person or multiple persons, including people not yet born at the time of the trust’s creation. The trustee can be one of the beneficiaries, so long as they are not the only beneficiary. A trust can also be created with some charitable purpose, as opposed to a particular person or persons, as its beneficiary.


The trust has been called the most innovative contribution of English legal thinking to the law, and it plays an important role in all common law legal systems. Civil law systems have no exact equivalent, though devices such as the patrimony of affectation and the foundation, which create patrimonies similarly independent of their donors, serve comparable purposes. Trusts developed out of the English law of equity, which has no counterpart in civil law jurisdictions; however, because the use of the trust is so widespread, some jurisdictions have incorporated trusts into their civil codes.


Trusts are used for a number of purposes, including to plan one’s estate and as a form of investment. They are also frequently used to reduce the amount of tax payable, since they often receive special tax treatment. Pension schemes are often set up as trusts.


Express, Implied and Constructive Trusts


Trusts can be classified in a number of ways. One of these ways is by how the trust was created. Most commonly, a classification of trusts as express trusts, implied trusts and constructive trusts is used. Note however that this terminology is not accepted by all authors.


An express trust is created where one person (the settlor) conveys property to another (the trustee) on the condition that the property will be used for the benefit of a third party or parties (the beneficiaries). The intention of the parties to create the trust must be shown clearly by their language or conduct. For an express trust to exist, there must be certainty as to the objects of the trust and the trust property. Statute of frauds provisions require express trusts to be evidenced in writing if the trust property is above a certain value, or is real estate.


An implied trust (also called a resulting trust) is created where some of the legal requirements for an express trust are not met, but an intention on behalf of the parties to create a trust can be presumed to exist.


Unlike an express or implied trust, a constructive trust is not created by an agreement between a settlor and the trustee; rather a constructive trust is imposed on the trustee by the law. This generally arises due to some wrongdoing on behalf of the trustee, where the trustee has acquired legal title to some property but cannot in good conscience be allowed to benefit from it. For example, the Privy Council has held that if a fiduciary accepts bribes or makes an improper profit, a constructive trust is thereby created, by which the fiduciary holds the bribes or improper profit as trustee of a constructive trust for the benefit of the principal.


Simple or Bare Trusts vs. Special Trusts


In a simple trust (also called a bare trust) the trustee has no active duty beyond conveying the property to the beneficiary at some future time determined by the trust. In a special trust, however, the trustee has active duties beyond this.


Private Trusts vs. Public or Charitable Trusts


A private trust has one or more particular individuals as its beneficiary. By contrast, a public trust (also called a charitable trust) has some charitable end as its beneficiary. In order to qualify as a charitable trust, the trust must have as its object certain purposes such as alleviating poverty, providing education, carrying out some religious purpose, etc. The permissible objects are generally set out in legislation, but objects not explicitly set out may also be an object of a charitable trust, by analogy. Charitable trusts are entitled to special treatment under the law of trusts and also the law of taxation.


Fixed, Discretionary and Hybrid Trusts


In a fixed trust, the amount of money or other goods or services to be paid to the beneficiaries is fixed by the settlor. An express fixed trust requires a certain degree of certainty as to who the beneficiaries are and the amounts to be paid to them, so that the trustee has little or no discretion. If this degree of certainty is not met, an implied trust exists instead. In a discretionary trust, the amount of money or other goods or services to be paid to the beneficiaries is up to the trustee, so long as the decision is made in the beneficiaries’ best interests. A hybrid trust combines elements of both fixed and discretionary trusts: the trustee must pay a certain amount of the trust property, fixed by the settlor, to each beneficiary, but has discretion as to how any remaining trust property, once these fixed amounts have been paid out, is to be distributed among the beneficiaries.
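The hybrid payout rule described above can be sketched in a few lines of Python. The function name, beneficiaries, amounts, and discretionary shares are all hypothetical, invented only to illustrate the rule:

```python
def distribute_hybrid_trust(trust_property, fixed_amounts, discretionary_shares):
    """Pay each beneficiary a settlor-fixed amount, then divide what
    remains according to the trustee's discretionary shares (fractions
    that sum to 1)."""
    remaining = trust_property - sum(fixed_amounts.values())
    if remaining < 0:
        raise ValueError("trust property cannot cover the fixed amounts")
    return {
        beneficiary: fixed_amounts[beneficiary]
                     + remaining * discretionary_shares[beneficiary]
        for beneficiary in fixed_amounts
    }

# Example: 100,000 in trust property, two beneficiaries.
payout = distribute_hybrid_trust(
    trust_property=100_000,
    fixed_amounts={"Alice": 20_000, "Bob": 30_000},      # fixed by the settlor
    discretionary_shares={"Alice": 0.5, "Bob": 0.5},     # trustee's choice
)
print(payout)  # {'Alice': 45000.0, 'Bob': 55000.0}
```

The fixed amounts are paid first, and only the residue is subject to the trustee’s discretion, mirroring the two-stage rule in the text.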


Specific Types of Trust: Unit Trusts, Protective Trusts

A unit trust is a trust where the beneficiaries (called unitholders) each possess a certain share (called a unitholding) and can direct the trustee to pay money to them out of the trust property according to the number of unitholdings they possess. Unit trusts are primarily used for investment purposes.
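The pro-rata rule of a unit trust is simple enough to sketch directly (the names and figures here are hypothetical):

```python
def unit_trust_payment(distribution, unitholdings):
    """Divide a cash distribution among unitholders in proportion to
    the number of units each holds."""
    total_units = sum(unitholdings.values())
    return {holder: distribution * units / total_units
            for holder, units in unitholdings.items()}

# A 9,000 distribution among three unitholders with 100, 200 and 300 units.
payments = unit_trust_payment(9_000, {"A": 100, "B": 200, "C": 300})
print(payments)  # {'A': 1500.0, 'B': 3000.0, 'C': 4500.0}
```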


A protective trust is a type of trust that was devised for use in estate planning. Often a person, A, wishes to leave property to another person B.


A however fears that the property might be claimed by creditors before A dies, and that therefore B would receive none of it.


A could establish a trust with B as the beneficiary, but then A would not be entitled to use of the property before they died. Protective trusts were developed as a solution to this situation.


A would establish a trust with both A and B as beneficiaries, with the trustee instructed to allow A use of the property until A died, and thereafter to allow its use to B.


The property is then safe from being claimed by A’s creditors, at least so long as the debt was entered into after the trust’s establishment.


This use of trusts is similar to life estates and remainders, and such trusts are frequently used as alternatives to them.






Truth is knowledge which conforms to reality; what truth means therefore depends on the meanings assigned to knowledge and reality. Absolute truth, for example, is certain knowledge of ultimate reality. For ordinary human purposes, truth is knowledge gained by a reliable method about some observable aspect of reality.


When you are asked to testify truthfully at law, or otherwise “tell the truth,” you are not being asked for absolute truth but for a good faith attempt to recount your memory of an observed event. That what you say may differ from the accounts of other witnesses is a commonplace of practical law.


Truth is not amenable to any simple, widely-agreed upon definition for all times, places, and peoples. The notion is the subject of much theorizing by philosophers, thinkers, and social leaders. Concepts of truth, like trust or integrity, depend heavily on the point of view chosen.


Historically, religion has had the major influence, but philosophy and, increasingly, science have been brought to bear on these subjects, and on the related but more limited issues of knowledge and causality.


Grasping truth


There are several broad senses in which the concept of truth is usually considered. They are quite different in the point of view or purpose they assume. There is also the view that conceptual metaphor dominates the connection between experience and action, in which truth cannot be stated, and the view that chosen action alone is a reliable guide to truth.


These do not fit conventional academic and scholastic philosophy, but we will deal with them here in a simple way by adopting one such metaphor, that of “grasping” truth or “getting” it, both of which use the hand as a metaphor for comprehending truth. In keeping with that metaphor, let us label each of the broad senses of truth as if it were part of the hand. This will act as a mnemonic, and it will provide a sense of both the metaphoric view and the action- and body-dependent view, in a way which those schools would accept.


First, the most formal and limited sense is that of a truth-table or truth-value. Most commonly, in logic, the values true and false are recognized. But there are also other truth-tables that recognize “not-true” as something other than “false,” “not-false” as something other than “true,” and “don’t-know” and “don’t-care” values. This we may call the definitive type of truth, used in axiomatic proof and to detect paradox. To believe that all other concepts of truth can always be built up from such tables is reductionism or mathematical fetishism. In the hand metaphor, this is the littlest finger: essential only for playing a musical instrument, cupping runny liquids or achieving a truly powerful grip. It can’t lift much on its own, and it breaks easily.
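One standard example of a truth-table that goes beyond true and false is Kleene’s strong three-valued logic, in which a third value stands for “don’t-know.” Here is a minimal Python sketch, using None for that third value:

```python
# Kleene's strong three-valued logic: truth values are True, False,
# and None (standing for "unknown" / "don't-know").

def k3_not(a):
    # Negating an unknown value yields an unknown value.
    return None if a is None else not a

def k3_and(a, b):
    if a is False or b is False:
        return False            # one False decides a conjunction
    if a is None or b is None:
        return None             # otherwise an unknown stays unknown
    return True

def k3_or(a, b):
    if a is True or b is True:
        return True             # one True decides a disjunction
    if a is None or b is None:
        return None
    return False

# "Unknown" behaves differently from "false":
print(k3_and(True, None))   # None  (still undecided)
print(k3_and(False, None))  # False (decided regardless of the unknown)
print(k3_or(True, None))    # True
```

The interesting rows are the ones where a known value decides the result even though the other operand is unknown, which is exactly what distinguishes “not-true” from “false.”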

A second way to see truth is as a broad, abstract concept that is applied to many true propositions, thoughts, beliefs, etc., and which can ultimately be understood by the individual human being, or at least approached by the mind or by inquiry in some way. One generally treats this kind of truth as a process of discovery, e.g. scientific method. This we may call the investigative type of truth. Like the fourth finger on the hand, it usually moves only with the fifth. It’s the one we look at in Western society for a ring, to determine one’s level of commitment in engagement or marriage - appropriate, since all of science is cumulative and cooperative and relies wholly on peer review.


And, third, there is truth as a body of important, profound, perhaps spiritual belief--Truth with a capital “T” or enlightenment or happiness or contentment. These revealed truths and intuitive truths are generally recognized as immutable, personally experienced, hard to share, and difficult to separate from issues of faith and divinity. If one thinks of truth in this way, it is difficult to discuss generally outside the frame of a particular religion or devotional practice, and it is also difficult to claim any special insight. One generally commits oneself to truth of this kind, and does not question it further. This we may call the authoritative type of truth. Let’s give authority the middle finger: it’s big, and it can be rude.


There are also various ideas of reasoned and observed and constrained truth as guide to action or justification of inaction - this is where truth is hard to tell apart from know-how or knowledge, and where it becomes important to ask what truth is “for”. This we may call the applied type of truth, which necessarily confuses the above, and is never wholly derived from any one of them, but involves inquiry, definition and decision. This is problematic for the philosopher, who may see it as an opening to situational ethics. To us, it’s the index finger, used to push buttons, click, point, and pull triggers. To take action, which can never wait for a pure and undisputed sort of truth.


Then, there is the thumb: the actual entity that takes the point of view. Although in theory (artificial intelligence or collective intelligence for instance) truth can exist with no one evaluating or believing or feeling it, in our human experience that doesn’t happen. A body must be there to feel or believe or take action on the truth, and it may have a situated ethics - less of a problem for the philosopher. It is in a real place at a real time and in a real environment (which might be in our metaphor the palm we grip into). Our body. Like the thumb, without it, we just don’t “grasp” at all. This view is often advanced by feminism and queer theory, which focus on the ways bodily experience is alienated as that of “The Other”, unlike our own.


If we consider truth from someone else’s point of view, as we often try to do, we might imagine the four fingers pressing against the palm, which could be in this metaphor the environment of the future after we die. This would not be as strong a grip as we can get with the thumb, using our own body and feelings. But it is possible. Academic philosophers lately have tried to cope with it, and have had some success, mostly by keeping the above senses separate. Pursuing one type of truth does not necessarily imply pursuing them all. Looking only at the authoritative and investigative extremes, the question, “What is truth?” is apt to be given two completely different sorts of answers by different professions. Analytic philosophers approach the question by offering closely-reasoned defenses of precisely-worded definitions. By contrast, for religionists and some essayists, the question is simply another way of asking such broad religious and philosophical questions as “Does the world have a creator and if so, what is its nature?” and “What is the meaning or purpose of life?” Giving an answer to the philosophers’ question clearly does not entail giving an answer to the religionists’ questions, and conversely.


Scoping truth


Another class of question can be asked about truth, namely, whether it is absolute (roughly, it simply depends on the facts of reality) or relative to us (e.g., it depends upon our beliefs, culture, language, point in history or geography, etc.). Philosophers generally discuss this not specifically under the heading of truth but instead moral absolutism, relativism, realism, anti-realism, and various other headings. Probably the reason for this is that philosophers do not so often speak about relativism about all truths, but about particular classes of truths (or “truths”): moral, epistemological, and aesthetic, just to name a few. However, there are scope issues that are of general enough concern to be part of the philosopher’s approach to truth.


First of these is time. Truth need not (some say cannot) be permanent: beings with a very long-term perspective and the will to live by it are rare in any society, and symbolic means of recording events are themselves distorted. Recent philosophy of mathematics focuses on the fact that even axiomatic proof is quite often corrected after the fact, and ultimately relies on human beings and their inherent similarity (the cognitive science of mathematics). So truth might be dependent on one’s point in history, and on the tools available to inquiry at that point in time, most especially the investigative and authoritative kinds. The definitive kind does somewhat better, some truths in mathematics having “stood the test of time” for millennia. However, it is difficult to say when any of these are about to change: until recently the universe was thought to obey the rules of Euclidean geometry, and it would have been a terrible shock to Euclid to find that it does not.


Nor need truth, even of a permanent sort, apply to every type of living being, e.g. economics is a human construct - things which are true for human beings might not be true for beings with varied cognition, and different bodily needs, e.g. a robot. So truth might be dependent on one’s species, senses, and what causes one likely bodily harm. “You cannot live underwater without a gaseous oxygen supply” is quite true for any human alive today. It may not be true for future human subspecies or robots, and it is clearly not true for fish.


Choice of species, of time and space limits (a spacetime frame), and of the point of view from which truth can be assessed and the assessment then trusted by others, is absolutely pivotal to establishing any notion of truth; the branch of philosophy called epistemology deals with this directly. Little more will be said of this issue here, though it recurs in the discussion of the Consensus Theory of Truth below.


Five theories of truth


When focusing on the topic of truth itself, philosophers are mainly engaged in formulating philosophical theories of truth. There are, roughly speaking, four broad conceptions of truth that philosophers and logicians have discussed:


  (A) The Correspondence Conception of Truth


  (B) The Deflationary Conception of Truth


  (C) The Semantic Conception of Truth


  (D) The Epistemic Conception of Truth


In addition there is an Epistemic variant, the Consensus Theory of Truth, which philosophers have largely discarded, and a fifth, more refined conception, closely related to it but not an Epistemic theory, which is common in activist and social justice circles and has some correspondence with philosophy of action, contemporary political philosophy, and many broad faith-based movements:


  (E) The Active Creation of Truth


Almost any attempt to define or analyze the notion of truth falls under one of the first four headings. Where you see an assertion that propaganda or authority is being over-trusted, the fifth conception is likely in play. Almost all of the most common attempts to define truth fall under the Epistemic Conception, which (paradoxically) is the one rejected by almost all contemporary philosophers and logicians (see below). We shall briefly describe each here:


(A) The Correspondence Conception of Truth


Consider first the correspondence theory, associated with Plato, Aristotle, G. E. Moore, Bertrand Russell and others. We can define it as follows:

(1) The proposition that P is true iff P corresponds with the facts.


So “truth” means “correspondence with the facts.” That’s a traditional formulation of the theory. So let’s try to explain what it says. For example, it’s true that some dogs bark if the proposition, “Some dogs bark,” corresponds with the facts. Which facts? Actually, just one: the fact that some dogs bark. So suppose that it is a fact that some dogs bark (that’s not hard to suppose). Then we can improve our example. We could say: it’s true that some dogs bark if, and only if, the proposition, “Some dogs bark,” corresponds with the fact that some dogs bark. Or we could say: it’s true that God exists if, and only if, the proposition, “God exists,” corresponds with the fact that God exists.


The most commonly cited problem for the correspondence theory is this question: what is correspondence? When does a proposition correspond with the facts? Well, you can think of correspondence as a sort of matching-up relation -- if a proposition can be matched up with a fact, then it corresponds to that fact. But that’s still puzzling, isn’t it? I mean, when does a proposition “match up” with a fact? To say that “correspondence” means “matching up” doesn’t really shed a whole lot of light on the subject. (Bertrand Russell, and shortly after him Ludwig Wittgenstein, suggested that proposition and fact “correspond” when their structure is isomorphic.) Well, one thing we might observe in any case is that, in order for a proposition to be true, according to the correspondence theory, there must be some fact to which it corresponds. So a fact has to exist in order to be matched up with a proposition. And remember, we’ve already decided which fact a proposition has to correspond with: the proposition that P has to correspond with the fact that P, if the proposition that P is true.


So here is a suggestion that can help get us around the objection about correspondence. We can say that it is true that P if, and only if, there exists a fact that P. If we put it like that, then we don’t have to talk about correspondence at all. We just say: it’s true that some dogs bark if, and only if, there exists a fact that some dogs bark. And we could put it even more simply than that:

(2) The proposition that P is true iff it is a fact that P.


So consider this revised version of the correspondence theory:

# (3) P is true iff it is a fact that P.


Examples of this might be:

# (4a) The proposition that some dogs bark is true if it’s a fact that some dogs bark.

# (4b) The proposition that God exists is true if it’s a fact that God exists.

# (4c) The proposition that snow is white is true if it’s a fact that snow is white.


And so on. We can regard that as explaining what it means for a proposition to correspond with a fact: basically, if there is a fact that P, then that fact corresponds with the proposition that P.


But this reformulation of the theory faces a different problem: namely, what are facts, and what does it mean to say that facts exist, or that there is some alleged fact? Look at the problem like this. Our reformulation basically says that “true proposition” means “factual proposition.” So then we have to ask ourselves: “Have we really explained anything about truth, about true propositions, if we have merely said that they are factual? Because then aren’t we just letting this other word, ‘fact’, do all the work of the word we’re confused about, ‘true’? And then wouldn’t we have to give some account of what facts are?” There are at least two different ways to reply to this objection. The first way to reply is to actually offer a theory of what facts are. This is something that philosophers, in the twentieth century, have actually tried to do. They say things like this: some facts are basically combinations of objects together with their properties or relations; so the fact that Fido barks is the combination of an object, Fido, with one of Fido’s properties, that he barks.


But of course that is only one kind of fact; there would be other kinds of facts, about all dogs, or about the relation between dogs and cats, and so on. But the idea is that it is possible, anyway, to specify and categorize all those different kinds of facts. And then you’ve got an answer to the question, “What are facts?” You say: it’s one of these sorts of things (pointing to your theory of facts). And when it is asked, “What does it mean for a fact to exist?” you can answer: well, it’s for each part of a fact to exist. So if Fido exists, and Fido’s barking exists, then the fact that Fido barks exists. And that’s what makes it true to say that Fido barks. That’s a very appealing way to answer the objection. The philosophy of language was long based on this view. But it has been challenged (see below).


(B) The Deflationary Conception of Truth


Another way, which has been perhaps even more popular, particularly in the last 30 years, is to offer an even further stripped-down theory. First, observe that if I say that it’s a fact that P, I might as well have just said, “P”. If I say, for example, that it’s a fact that some dogs bark, then why don’t I just say, “Some dogs bark”? Why do I have to declare that it’s a fact? If I’m saying it, then I’m implying that it’s a fact, am I not? Sure. Well notice that, in the previous theory of truth, these words occur: “it is a fact that P”. So then why don’t we just say “P” in place of “it is a fact that P”? I mean, suppose I’m right, and when I say “It’s a fact that P,” I really mean nothing more than when I say “P.” Then why not just substitute “P” in for “it is a fact that P” in our previous, revised correspondence theory? Then we don’t talk about facts at all. So here’s the new, even further stripped-down theory:

# (T) The proposition that P is true iff P.


That’s it! Statements of the form (T) are often called T-sentences. And some people say that that’s basically all there is to say about truth. To understand the notion of truth is to understand and accept all the T-sentences.


The original version of this bare-bones theory was called “the redundancy theory of truth”, and it is due to F. P. Ramsey and A. J. Ayer, English philosophers who wrote their works in the 1920s and 1930s. It’s called “the redundancy theory” because it basically implies that saying that something is true is always redundant. (This has loose connections with the “performative theory of truth”, associated with Peter Strawson.) The redundancy theory of truth is really a special version of what is now called the Deflationary Conception of Truth, or deflationism for short. Deflationism has two major versions: one called minimalism, which has been developed by Paul Horwich, and one called disquotationalism, which has been developed by Hartry Field. The minimalist theory takes truth bearers to be propositions and takes, as constituting the notion of truth, statements of the following form:

# (T*) The proposition that P is true iff P.


The disquotational theory in contrast takes sentences as the central truth bearers, and its basic principles take the following form:

# (T**) The sentence “P” is true iff P.


Roughly, statements of any of the forms (T), (T*) or (T**) are called “T-sentences”, and deflationists take T-sentences to be central in characterizing the notion of truth.
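

The disquotational idea can be sketched in code. This is only an illustrative toy, not anything from the deflationist literature: the `world` dictionary is a stand-in for how things actually are, and the sentence strings and function names are assumptions made for the example.

```python
# A toy illustration of the disquotational schema (T**). The "world" dict
# is a stand-in for how things actually are; the names here are
# illustrative choices, not drawn from the deflationist literature.
world = {"snow is white": True, "pigs fly": False}

def is_true(sentence):
    # The truth predicate "disquotes": applied to a quoted sentence, it
    # yields just what asserting the sentence itself would yield.
    return world[sentence]

# Instances of the T-schema hold by construction:
# The sentence "Snow is white" is true iff snow is white.
assert is_true("snow is white") == world["snow is white"]
# The sentence "Pigs fly" is true iff pigs fly.
assert is_true("pigs fly") == world["pigs fly"]
```

The point of the sketch is that `is_true` adds nothing beyond the sentence itself: any occurrence of `is_true("snow is white")` could be replaced by the bare fact it disquotes to.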


The idea is that, instead of saying, “It is true that some dogs bark,” you could, without loss of meaning, say simply, “Some dogs bark”. In principle, we could always eliminate talk of truth, in favor of simply forthrightly asserting whatever it is that we say is true.


Now there’s one simple objection to the theory that might occur to you. You might say: “Well, if I claim, ‘Pigs fly,’ then the deflationary theory says that it’s true that pigs fly! If I claim that philosophy is simple, then it’s true that philosophy is simple!” This is a bad objection. It’s bad because it has the deflationary theory wrong. The deflationary theory doesn’t say: “It’s true that P iff I claim that P.” It says: “It’s true that P iff P.” So, if pigs fly, if pigs do indeed fly, then it’s true that pigs fly. Nothing wrong with saying that: that’s correct. If pigs did fly, then it would be true that pigs fly. But that’s quite different from saying that, if I claim that pigs fly, then it’s true that pigs fly. So the deflationary theory doesn’t say that whatever anyone says is true. What it does say is that, if I say something, then I’m committed to saying that what I said is true.


And this makes some sense. Suppose, on the one hand, I say, “God exists! There is a supreme being!” Then suppose on the other hand that I say, “It’s true that God exists! It’s true that there is a supreme being!” Have I added anything to my original claim when I say that it’s true? I mean, have I added anything other than emphasis and a declaration that I really do believe what I’m saying? The redundancy version of deflationism thinks not; saying that something is true is only adding emphasis.


But some people disagree. They think that there is something that the redundancy theory is missing. They think there’s got to be some reason why we came up with this word “true.” The redundancy version of deflationism says basically that it’s only a term of emphasis. But is that really all it is? Isn’t the idea, rather, that one specifically wishes to point to the fact that a proposition bears some relation to reality -- correspondence, describing the facts, something like that?


There is a second, and important, objection to the redundancy version of deflationism. We can eliminate “true” from a statement like,

# (5) “Snow is white” is true.


to obtain just,

# (6) Snow is white.


But we cannot do likewise when we attribute truth to a statement by some kind of indirect reference. For example,

# (7) The last thing Plato said was true.


The redundancy view of truth provides no guidance for eliminating “true” from this statement. Ramsey himself was aware of this, and suggested something along the lines of the following:

# (8) (If the last thing Plato said was “Snow is white”, then snow is white) and (If the last thing Plato said was “Penguins waddle”, then penguins waddle) and (If the last thing Plato said was “Grass is pink”, then grass is pink) and ... etc.


So, the idea is that we can eliminate “true” from (7) by using an infinitely long conjunction of statements of the form

# (9) If the last thing Plato said was “P”, then P.


Similarly, contemporary deflationists such as Horwich and Field do not in general advocate the older redundancy view, and do think that “true” is not merely a method of emphasis. First, both minimalists and disquotationalists argue that truth just is a property which satisfies the “equivalence condition” that P and “P is true” are equivalent. Second, disquotationalists have further argued that a property (or predicate) satisfying this condition has an important logical use, which permits one to express infinitely many statements all in one go. For example, if we wish to assert each statement that a mathematical theory T proves, we should have to list them all, and then say, one by one:

# (10) S1, S2, S3, ...


The modern deflationists (following W. V. Quine) have pointed out that instead of asserting all of these particular statements, one can instead say simply:

# (11) All theorems of T are true.


So, instead of asserting all the theorems of T one by one, you can simply assert the single statement (11), “All theorems of T are true”. This generalizing role for “true” also bears on foundational questions: it was thought for many years that there were foundations of mathematics that were “true” in some more basic sense, and that set theory was the basis of them. That turned out not to be so -- at least, not for any language capable of talking about truth itself.
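

The generalizing use of a truth predicate can be illustrated with a small sketch. The toy “theory” and its `holds` evaluator are hypothetical stand-ins invented for this example; no real proof system works this way.

```python
# Toy illustration of using "true" to generalize over many statements.
# "theorems" and "holds" are hypothetical stand-ins for a theory T and
# for an evaluator of its sentences.
theorems = ["2 is even", "4 is even", "6 is even"]

def holds(sentence):
    # Evaluate a sentence of the toy language "<n> is even".
    n = int(sentence.split()[0])
    return n % 2 == 0

# Asserting the theorems one by one, as in (10):
for s in theorems:
    assert holds(s)

# Versus the single generalization (11), "All theorems of T are true":
assert all(holds(s) for s in theorems)
```

With infinitely many theorems the one-by-one route is unavailable, which is exactly why the disquotationalist finds the single generalization logically useful.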


Also key to the deflationary conception is the Liar’s Paradox, the simplest version of which is the statement:

# “This statement is false.”


Well, if that statement is false, then it’s true, and if it’s true, then it’s false. This suggests that there may be more truth-values than just true or false, and leads to the introduction of such values as “don’t know”, “don’t care”, “paradoxed”, “not-true”, and so on.


An objection to the deflationary conception of truth is that it fails to capture truth in the human, psychological sense: truth is something one can act on with confidence, not a game one plays with words or logic.


(C) The Semantic Conception of Truth


In some ways related to both the Correspondence Conception and the Deflationary Conception is the Semantic Conception of Truth, due to Alfred Tarski, a Polish logician who published his work on truth in the 1930s. Tarski took the T-sentences not to give the theory of truth itself, but to be a constraint on defining the notion of truth. That is, in Tarski’s view, any adequate definition or theory of truth must imply all of the T-sentences (this constraint is known as Convention T). Tarski developed a rather complicated theory, involving what is known as an inductive definition of truth and further ideas, such as the distinction between object language and meta-language (which is important in avoiding the semantic paradoxes such as the Liar’s Paradox outlined above).


Tarski’s inductive definition of truth included the following important principles:

# (i) A negation ~A is true iff A is not true.

# (ii) A conjunction A&B is true iff A is true and B is true.

# (iii) A disjunction A v B is true iff A is true or B is true.

# (iv) A universal statement “for all x A(x)” is true iff each object satisfies “A(x)”.

# (v) An existential statement “there exists x A(x)” is true iff there is an object which satisfies “A(x)”.
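

Clauses (i)-(v) have the shape of a recursive definition, and can be sketched as a recursive evaluator over a small finite domain. This is only an informal illustration: the tuple encoding and the connective names are ad hoc choices made for the example, and Tarski’s actual definition works with satisfaction by sequences of objects, not with Python predicates.

```python
# An informal sketch of Tarski's inductive clauses (i)-(v), as a
# recursive evaluator over a small finite domain. The encoding is an
# illustrative choice, not Tarski's notation.
DOMAIN = [0, 1, 2, 3]

def is_true(formula):
    op = formula[0]
    if op == "NOT":      # clause (i): ~A is true iff A is not true
        return not is_true(formula[1])
    if op == "AND":      # clause (ii): A&B is true iff both are true
        return is_true(formula[1]) and is_true(formula[2])
    if op == "OR":       # clause (iii): A v B is true iff either is true
        return is_true(formula[1]) or is_true(formula[2])
    if op == "ALL":      # clause (iv): true iff each object satisfies A(x)
        return all(formula[1](x) for x in DOMAIN)
    if op == "EXISTS":   # clause (v): true iff some object satisfies A(x)
        return any(formula[1](x) for x in DOMAIN)
    raise ValueError("unknown connective")

even = lambda x: x % 2 == 0
assert is_true(("EXISTS", even))          # some object in the domain is even
assert not is_true(("ALL", even))         # 1 and 3 are not even
assert is_true(("NOT", ("ALL", even)))    # by clause (i)
```

The truth of a complex formula is thus determined entirely by the truth or satisfaction of its parts, which is what makes the definition inductive.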


Tarski’s semantic conception of truth plays an important role in modern logic and also in much contemporary philosophy of language. It is a rather controversial matter whether Tarski’s semantic theory should be counted as a correspondence theory or as a deflationary theory.


It could also be considered an active creation theory, since the objects might “satisfy” the function by making some choice, which might be influenced by some desire to make some larger statement true. For instance, a computer program typically calls functions with side effects to test whether a given assertion is true. Even in LISP, a particularly strict language, most of the basic operations can have such side effects, and thus its statements are not purely “formal”. Such issues have generally discredited formal validation of computer programs, but, in Tarski’s conception, this is not a problem. The use of assertions within programs can be seen as derived at least in part from his work. As in science, the test is the way one disproves truth.


(D) The Epistemic Conception of Truth


Coherence Theory


Another conception of truth that differs drastically from the correspondence theory, the deflationary theories and Tarski’s semantic conception, and an example of the Epistemic Conception of Truth, is called the coherence theory, and is associated with the Idealist school of philosophers, such as Hegel. The coherence theory offers another definition of “truth”. It says that truth depends on coherence, as follows:

# (12) The proposition that P is true iff P is part of a coherent system of propositions.


Roughly, P is true if it coheres with a system of propositions that it’s part of. Typically a “system of propositions” is understood as a group of propositions that some one person believes. So if you like, you can think of “system of propositions” as meaning a belief system. It is because of this reference to beliefs and their justification that it is called an epistemic theory of truth. Then the idea is that if your belief system is coherent, then your beliefs are true. And if you come across a belief that doesn’t cohere with the others, then you can toss it out as incoherent and thus false.


We shall not try to give an example of a coherent system, or of a belief that is true because it is part of such a system. The reason isn’t that the coherence theory is obviously wrong, but that the coherence theory is better regarded as a theory about when beliefs are justified or rational -- for instance, when they count as knowledge -- than as a theory about when beliefs are true. The coherence theory of justification is another topic, and in that article are examples and criticisms that apply to coherence theories generally -- whether of truth or of justification.


Consensus Theory of Truth


A related epistemic theory of truth, popular with sociologists and many net activists, is the simplest version of the Consensus Theory of Truth, which roughly says that,

# (13) The proposition that P is true relative to a community C iff all members of the community accept P.


This too is more about what is actionable than what is true, or one might say, “what one can get away with”. For instance, in propaganda analysis, truth is that explanation that we not only accept but act on reliably (to some standard of evidence for some period of time). Thoughtful people may think of this kind of truth as something that holds ‘for a lifetime’ or some equivalently long but more standardized period, e.g. to the seventh generation.


However, no matter how long a time horizon is chosen, and despite the popularity of this theory of truth, it is easy to see that it is false. Obviously a statement may be “accepted” and yet be false; there are countless examples of this from history and current affairs. Barry Wellman, a sociologist, claims that such acceptance defines not truth but rather an epistemic community. Acceptance of (13), then, indicates a confusion of such a “community” with truth itself.


For instance, one might claim that a given Wikipedia article contains only “truth” if no member of the “Wikipedia community” has chosen to challenge it. But that’s unreasonable: as anyone who reads Wikipedia frequently knows, many false claims are accepted and go unchallenged. And this is not solely due to the inability of all to read every article: even if everyone could and would read every article, they could still all be wrong. There is systemic bias in who reads Wikipedia, who writes it, and who challenges it.


It becomes even more obvious if one considers a very narrow and self-interested group, such as those involved in accounting scandals in the USA, who might collectively agree that:

# Truth is that which is agreed on a golf course, lasting until the liquidity event.


Even if we grant that concepts of truth might change over time or with choice of scope, as we did above, accepting this degree of moral relativism is dangerous - it’s plainly a “liar’s idea of truth”. So the Consensus Theory of Truth may lead us directly to chaos, and that’s not what we expect of truth.


However, this conception of truth makes more obvious two issues that apply to all of these theories. First, we have little choice but to accept consensus on most matters: those who actually assess and assert the truth are often a smaller group than those who rely on it, so authority will predominate if consensus is not in some way discovered and shared. Second, all truth as understood by humans is only a consensus of humans; recognize Great Ape personhood, and suddenly we must ask for their view too. These and other related observations about power, risk, sincerity and commitment in truth, and how they influenced ideology and political economy, often leading to war as each self-interested group or nation pursued its own concept of truth, tended to render all Consensus conceptions unacceptable by the end of World War II.


However, the consensus view had much influence on later thinkers. We might for instance alter truth, under this definition, by changing “who we must agree with” to include our former enemies. And, the chaotic early 20th century convinced many in Europe and North America that Buddhism and Taoism might be right that truth simply can’t be stated: this influenced the philosophy of action, which disdains assertions of truth as such, and seeks to discover implied assertions of truth through shared actions - involving a much deeper concept of “accept” that can answer most issues with the Consensus Theory of Truth.


To understand it, however, we must deal with another 20th century conception of truth:




Pragmatic Theory of Truth


Another epistemic theory was introduced by the American philosophers Charles Peirce (pronounced “purse”) and William James in the late 19th and early 20th centuries. Their theory is called pragmatism, or the pragmatic theory of truth. Pragmatism is another example of the Epistemic Conception of Truth, since it closely relates the notion of truth to the notions of belief and justification.


Pragmatism has been regarded as an American theory of truth, although the correspondence theory probably more closely reflects the commonsense realism that most Americans share.


“Pragmatism” is one of those neat words that philosophers like so much that they want to appropriate it for themselves, without regard to how it has been used before. As a result, the term “pragmatism” means a lot of different things to a lot of different people; there are lots of versions of pragmatism. One very important, influential version is due to Peirce, and has received renewed interest from some philosophers today, like Richard Rorty and Hilary Putnam (who applied it to the philosophy of mathematics and derived his view of quasi-empiricism in mathematics). Peirce’s version, roughly stated, is:

# (14) The proposition that P is true iff P is agreed upon in the consensus achieved at the ideal limit of inquiry.


So, the pragmatist theory of truth is rather like the Consensus Theory mentioned above, but it is a long-run and idealized version of consensus. Truth is what the consensus will be at the ideal limit of scientific inquiry. Peirce invites us to imagine what science will be like a few hundred, or perhaps a few thousand, years from now. He predicted that human inquiry and truth-seeking would, or at the very least could, at some point come to an end, a limit; there would, he thought, be basically no questions left to be answered, and the state of human knowledge could not be improved upon. At that point there would be, he thought, some very general consensus, firmly agreed upon by all inquirers. And if some proposition now being considered would be something that everyone would agree on in that ideal limit of inquiry, then that proposition is true. And that’s what it means to say that a proposition is true: that it is part of the consensus that would exist in the ideal limit of inquiry.


The appeal of this theory may be that it makes truth knowable, though not necessarily practical for you to find out for yourself. Suppose you accept that if something is true, then it can be known to be true. Then what, really, is knowability? Well, something would be knowable if it could be known -- if not now, then by someone, somewhere down the road. So something is knowable if we could, after long hard careful inquiry, discover it. In other words, something is knowable if we would know it in the ideal limit of inquiry. This is faith in a process, not in any product of that process.


Now, suppose you thought that all truth is knowable in this sense. In that case, everything that could be known would be known in the ideal limit of inquiry. In the perfect science all truths would be known. There wouldn’t be any truths left over. So then why not say that there is no more to truth than what that perfect science would tell us? That would simplify matters. There would be no need to look for any sort of correspondence between propositions and the world, or between propositions and a coherent system of propositions. Truth, since it is knowable, is whatever the perfect science would tell us in the ideal limit of inquiry. In that way pragmatism is very optimistic. It is also necessarily a route to scientism.


Two basic objections are commonly made to pragmatism. The first, a common objection of theologians and skeptics both, is this: maybe there are some truths that aren’t knowable. Why think that every proposition must be knowable? Why not say there are some true propositions that we can’t ever know, not even in some ideal limit of inquiry? Let me give you an example. There are probably complex processes going on inside of black holes; but black holes are so gravitationally powerful that not even light can escape from them. So we could not possibly get knowledge of some specific events going on, right now, inside some black hole. Nonetheless there would seem to be some facts there; scientists might even know enough to be able to describe what might be going on; the point, though, is that they can’t confirm that it is going on, even if they can describe, in generalities, what might be going on. So the first problem for pragmatism is that it certainly appears that there are some truths that would not appear in the perfected science in the ideal limit of inquiry -- because they cannot be known at all. You can probably think of more examples yourself; maybe truths about what went on in the minds of people long dead, or facts about very distant events. Perhaps there are facts about subatomic particles which we cannot, in principle, ever know. This objection has been made many times, from medieval times (most notably by Ghazali) to the present.


A second objection, due originally in this form to Bertrand Russell, is that pragmatism describes an indicator or a sign of truth. It really cannot be regarded as a theory of the meaning of the word “true.” Do you see the difference? There’s a difference between stating an indicator and giving the meaning. For example, when the streetlights turn on at the end of the day, that’s an indicator, a sign, that evening is coming on. It would be an obvious mistake to say that the word “evening” just means “the time that the streetlights turn on.” In the same way, while it might be an indicator of truth that a proposition is part of that perfect science at the ideal limit of inquiry, that just isn’t what “truth” means.


Russell’s objection isn’t so much an argument against pragmatism as a request -- that we make sure that we aren’t confusing an indicator of truth with the meaning of the concept truth. There is a difference between the two, and pragmatism confuses them. A more powerful objection, building on both Ghazali’s and Russell’s, and in part adopted by Russell in his later political activism, is that the limit of inquiry must itself be ethical, a consequence not of structure but of choice. This leads directly to a post-epistemic method which is variously attributed to any of postmodernism, modern Islamic philosophy’s revival of Asharite views and renewal of khalifa within ilm (see Islamization of knowledge), and the green movement. For lack of a simpler name it may be referred to as:


(E) The Active Creation of Truth


This is the radical view that activity, rather than agreement, is what creates truth, which can only be seen and measured and agreed after the fact.


The best known advocate of this perspective was likely Mohandas Gandhi, who took what Carol Moore called “a systems view of political action”. Since in politics the actions taken are necessarily those which redefine the social and infrastructural reality, and even increasingly the natural world, there is some real resonance of this view with the reality of the era of nuclear weapons:


# (15) The proposition that P is true relative to a community C iff all members of the community act as if P were an inevitable consequence of inaction.


At any given moment, we may look backwards and say “well, we acted as if to escalate that conflict to nuclear war would kill us all, and so avoided it.” This may explain why such theories as Mutual Assured Destruction and models based on the Prisoner’s Dilemma, or even reality game shows, now appear to be getting popular. Each of them illustrates truth as created post-facto by the surviving participants, and each necessarily involves an “opacity of intent.”


Let’s look more closely at the MAD case. It is reasonable for any human being to say “we all exist only because the superpowers have not yet chosen to exterminate us using their vast stockpiles of nuclear arms.” At the point in history where science, technology and globalization converge to expose all of us to each other’s choices, be they wars or the spreading of plagues by modern air transport (like SARS), the rules of truth change. The limit of inquiry, which was an abstract concept philosophers cared about prior to the global threats, becomes a matter of more practical concern, as it determines the funding of, say, research that might possibly lead to weapons of mass destruction, and might in turn lead any educated individual to make personal choices about the limits of one’s own inquiry, the direction of one’s education, and the ethics one would choose to practice.


In Gandhi’s method, masses of humans co-operate to resist nonviolently, as if the consequences of obeying laws and authorities instead were much worse. They accept some limits of inquiry and action; that is, they do not seek in this formation to break ranks, “fight back”, or do anything else that they cannot experience strictly in common. This forms political solidarity and demonstrates competence in the control of mass action. When authorities flee, in part due to fear of this controlled mass becoming angry or uncontrolled or led by a more violent leader, what is left is the discipline itself, a common memory, and a political party -- an entity capable of taking full control of the state’s mechanisms. It has in fact already demonstrated the ability to control people on the street, with nothing more than what Gandhi called its “truth-force” or satyagraha.


When the mob is in effective control of its own polity and history, it can write that truth without reference to violence, since none was involved in creating it. There is no value to glorifying founders, leaders, or others who might be used as symbols later. The fewer of these there are, the more elevated the role of non-leaders, who merely resisted. No one has actually “spoken” the truth. To practice violence in the process is to usurp the role of making the truth - thus, even if violence is involved, it is important not to justify or glorify or prosecute or excuse it. In the Republic of South Africa after the fall of apartheid, these principles were applied by the South African Truth and Reconciliation Commission, and were generally successful at achieving inaction (lack of revenge) against figures who had been guilty of crimes under apartheid.


Less strict variations on these themes are generally followed by the civil rights movement in the United States, the modern Green movement and the anti-globalization movement. These have adopted some of Gandhi’s methodology but generally failed to see how it has contributed to their own shared “progressive” notion of truth. However the outcome has been a remarkably similar emphasis on elder experience, oral history, shared memory, and an agreement on the seventh generation time horizon. Winona LaDuke proposes a Seventh Generation Amendment to the United States Constitution that would require all government actions to be assessed for their impact over this common time horizon.


Time horizons are key to the management of risk, and the inevitability that some such risk will be realized is the key motivator of political action. There is therefore a strict coherence to this theory that epistemic notions have not had: it deals at once with the post-facto nature of symbolic description, and with the choice a community has in explaining its motives both before the fact and after the fact.




After this very brief discussion of theories of truth, we note that contemporary philosophers tend to favor either some revised correspondence theory or some deflationary theory; but we just haven’t discussed them in enough depth to be able to say that with any certainty. But this survey introduces you to the terrain: among different conceptions of truth there are the correspondence conception, the deflationary conception (including the redundancy theory, minimalism and disquotationalism), Tarski’s semantic conception and the epistemic conception (including the coherence theory and pragmatism).


With such a variety to choose from at the very least you should be convinced that you don’t have to rest content with any sort of relativism that says that truth is just the same as belief, or something you must trust philosophers on.


Information theory


In information theory terms, truth is essentially a concept promoted by systems that have memory, and can thus distinguish temporal changes in the state of the universe. No computational system (and therefore no perceptual system, i.e., no observer) can operate correctly without the ability to reliably make changes in its own state, and since no such system can have perfect knowledge of its own state in full detail, it must, at the very least, make assumptions as to its state. In simpler terms, it must eventually accept some things to be true without evidence (i.e., on faith). This leads to the notion of a foundation ontology, that is, the set of distinctions one does accept as “true”.



# “Truth - Something somehow discreditable to someone.” - H.L. Mencken




Value theory


The Theory of Value asks: What sorts of things are good? Or: What does “good” mean? Or: If we had to give the most general, catch-all description of good things, what would that description be?


It is, arguably, the most important area of philosophy. When governments decide what is good and to be encouraged, they cut taxes on those activities, remove regulations or laws, and provide subsidies. Value theory affects everyone’s life - maybe all life on Earth. It may literally define “good” and “bad” for a community or society.


Moral v other goods


First, two important preliminaries. There’s an important difference between “morally good” as applied to persons and actions, as when we say that Mary’s a morally good person and her honesty is good, and “good” in other senses, as when we say that a banana split is good. It is this broader sense of “good” that concerns the theory of value. So what is really worthwhile? What is really desirable? That’s the big fish we have to fry now.


Intrinsic v Instrumental goods


The second preliminary is where things start getting interesting, in my opinion. I want you to draw a distinction between instrumental and intrinsic goodness. An instrumentally good thing is worth having only as a means to getting something else that’s good. An intrinsically good thing, even if it doesn’t help you get anything else that’s good, is still worth having for itself.


Let me give you some plausible examples of both instrumentally and intrinsically good things. First, some instrumental goods: a hammer, or a radio. A hammer is good for driving nails; a radio is good for hearing music and news. So hammers and radios, and just about all the stuff we surround ourselves with at home, are instrumentally good: good as a means to something else.


Now here are some plausible examples of intrinsically good things: the pleasure we get from listening to a great piece of music, or understanding philosophy. The pleasure I get is good in itself, valuable all by itself; it’s very worthwhile, and would be worthwhile even if it did nothing for me beyond giving me pleasure.


Or take now understanding. I believe, and I believe many people in academia would agree with me, that understanding is good in itself. Of course, if you disagree with me and you think that understanding is never good in itself, but only good for what you can do with the understanding, then you won’t like mathematics, philosophy, or theoretical physics very much. But the people who do like such subjects will often swear that understanding is something that is worthwhile in itself.


But it’s not like it’s always an either-or proposition. Some things are both good in themselves, and good for getting other things that are good. They are both intrinsically and instrumentally good. Like understanding. Some physicists will swear that physics is just really neat all by itself, understanding the principles of physics is worth having just for its own sake; but they will also be quick to point out that it was an understanding of the principles of physics that put men on the moon.


So to go back to the beginning of the discussion: we’re talking about the theory of value, and the theory of value asks, “What sorts of things are good, or valuable?” And now that we are armed with a distinction, between intrinsic and instrumental goods, we can make the question more precise. Because ultimately we want to know what things are intrinsically valuable. That’s what the theory of value is particularly concerned with: What things are good in themselves?


We all know very well that we have to pursue some instrumentally good things in order to get the intrinsically good things. For example, most people pursue money as merely an instrumentally good thing, so that they can afford what they call “the finer things in life,” and those things, like concerts, vacations, and of course a happy family, are supposed to be good in themselves, or intrinsically good. But it’s ultimately, in any case, the things we believe to be intrinsically good that we want. So up at the top of the heap, the pinnacle of the hierarchy of goods that we aim at, there are the intrinsic goods. And the question before us now is: What are they? Which things are intrinsically good?


Values subjectivism


In this connection I’m only briefly going to discuss relativism, or subjectivism, about intrinsic goods. Let’s call this values subjectivism. Here is what values subjectivism says: whatever you, or your group, want not merely as a means to something else but for itself is, for you, intrinsically good. So if you want to answer the question, “What things are intrinsically good?” you need only answer a further question: “What do I, or what does my group, want not merely as a means to something else, but for itself?” I’ll bet you know what I’m going to say about this theory. Couldn’t you be wrong about what is good for you? Let me give you an example. Say Adolph, an SS officer during World War II, runs a Nazi concentration camp. Don’t you rather want to say that Adolph is a vicious criminal, and that the sick pleasure he takes in torturing Jews is not at all valuable or good in any sense? That, in fact, that pleasure is so bad that it is a very great evil?




This is a problem for values subjectivism; but for that matter, it’s a problem for everyone who does not yet have a clearly worked-out account of intrinsic goodness. Think of it like this. “Intrinsically good” means, very roughly, “worth wanting for its own sake.” But aren’t there some things that you might be unsure about, whether they’re worth wanting for their own sake? How do you decide?


Here’s another example of something you might wonder whether it is worth wanting for its own sake: money, or what money buys. I guess there are a few people in college, especially business majors, who would say that money is an intrinsically good thing, worth wanting for its own sake. They live for money. But of course nearly everyone has entertained the thought that money isn’t valuable for itself. Some people act as though it were; but even most of those people, in their sober moments, will acknowledge that money is good only instrumentally, only as a means to getting what it can buy.


Well, look at all the different things that money can buy: houses, cars, clothes, jewellery, vacations, social status, and so forth. The world today is filled with people who behave as though these things are good in themselves; a big house and a nice new BMW are worth having for their own sake.


But are these things good in themselves? Surely thinking that you’re part of the “in” crowd gives you pleasure -- and that’s why you want to be part of the “in” crowd (that is, if you do want that). Social status -- which for younger people consists of being “hip,” and for older people of being “respectable” and “distinguished” -- is a source of pleasure to some people.


So some philosophers have gone through this train of thought, or one roughly like it, and come to the conclusion that pleasure is, ultimately, the only intrinsically good thing. That view is called hedonism. Hedonists have the following view about what is intrinsically good: something is intrinsically good iff it is a type of pleasure.


Shortcomings in Hedonism


But let’s not forget old Adolph who takes great pleasure in torturing Jews. We wanted to say that that pleasure was not good in any sense. So what’s this then? Have we discovered that some pleasures are not good? Well, surely some pleasures are totally corrupt. Suppose that I get great pleasure from being a respected member of the mafia. Is the pleasure I take from being a respected member of the mafia something desirable at all, let alone desirable for its own sake? Surely not! There are all sorts of pleasures which are base and corrupt. John Stuart Mill, a famous 19th-century English philosopher, drew a distinction between lower, or base, pleasures and higher pleasures. I’m sure you can see the difference. After all, you have experience of the difference in your own lives, all the time! You all know very well that some pleasures are bad, and you’ll regret indulging them; other pleasures are much more worthwhile and wholesome. Surely we do not want to say that base pleasures are intrinsically good! They aren’t good at all.


Anyway, one thing we might observe is that Adolph’s torturing pleasure is a pleasure that results in pain for other people. The pleasure Adolph takes in torturing others leads him to cause them pain; and the pleasure I take in being part of the mafia leads me to bump off inconvenient people, causing great pain to them, their families and friends. So we could revise our definition of hedonism like this: something is intrinsically good iff it is a type of pleasure that does not lead to pain for oneself or for other people.


But even this is probably not quite right; this definition may exclude some pleasures that are intrinsically good. For example, what if I take great pleasure in running forty miles a week? That pleasure leads to the pain of sore muscles, so the revised definition would rule it out. Still, I might want to say that the pain of sore muscles is worth it. So this definition of hedonism would have to be further refined, if we wanted to allow the pleasure of running long distances to be intrinsically good.


Basically what we’re trying to do here is to say which pleasures are “higher” pleasures, or simply good pleasures. Some pleasures are good, some aren’t so good, some are corrupt, and some are just downright evil.


So consider that now: the only intrinsically good things in the world are good pleasures. But then aren’t we giving a circular account of “good”? If we say that the good things are good pleasures, then we’re using the word “good” to define itself. One way out is to spell out “good pleasures” without using the word “good”: we try to find out which pleasures will result in the most other pleasures. Then we call those pleasures “intrinsically good,” and only then do we say: “the only intrinsically good things in the world are good pleasures.” That allows us to get around the circularity problem.


Now to make the transition to the next theory of value, I want us to think a little more carefully about something I just said. Notice that whether a pleasure is worth having depends on its being, as it were, part of a whole series of pleasures that makes up a very pleasant life. Now, do you see where I’m going with this? Why don’t we say that it’s not pleasure per se which is intrinsically good, but instead a happy life?


Happiness, eudaimonism and other theories of value


You can have a good hour, or even a good day, in the sense that you’re in a good mood and have all sorts of pleasant feelings over that stretch of time. But you could, in spite of that, have an unhappy life. It’s sad to say, and we might not like to think about it, but it’s true.


Anyway, the suggestion now is that it’s not pleasures, individual events, which are intrinsically good. Those individual events are themselves only instrumentally good; they are good or worthwhile only as a means to something else. And what they are worthwhile as a means to, is a happy life. A healthy, tasty meal is a good thing. But it’s not good in itself -- it’s only good as a means. The suggestion is that no pleasure is intrinsically good, good in itself; any individual pleasure is good only as a part of a happy life. The only intrinsically good thing in this case would be a happy life.


Now finally I think we have arrived at a view that a lot of you probably hold. The happy life is one in which one has an excellent chance of having pleasure from the beginning of the life to the end. And that is going to include not only “lower” pleasures like eating, drinking, and sex, it is going to include many “higher” pleasures like understanding, the appreciation of art, deep friendship and love, and so forth. The experience of all of those pleasures together is what goes to make up a happy life.


If you think that there are a number of different intrinsic goods or values, then you are a values pluralist. The view that there is only one kind of thing that is intrinsically good could be called values monism. So far, we have considered the views that pleasure and that happiness are the only things that are intrinsically good.




Now let’s look at a closely related view -- the ancient Greek philosopher Aristotle’s view. On this ancient Greek view, it is not happiness, which is a mental state over time, that is intrinsically good -- it is, instead, something like happiness, namely eudaimonia, for which there is no word in English, except perhaps “flourishing” or “well-being.” Eudaimonia is more than simply happiness; it is a happy life that is well-lived. Happiness is a subjective state. Eudaimonia is an objective state; literally, it means something like “having a good spirit.”


To explain the difference between happiness and eudaimonia, let’s look at an example where someone is happy, but not eudaimon, as the Greeks would say. (Eudaimon is the adjective; eudaimonia is the noun.) Say there is a man, call him Tom, who is in the prime of life. He has a happy family, he is well-liked by his colleagues, he is making a lot of money, and generally is very happy himself. Now suppose there is a parallel universe, just exactly like this one, except that in the parallel universe, instead of Tom there is Bill. And Bill is exactly like Tom, just as happy, except for a few basic differences. Bill’s family only appears to him to be happy; they’re actually repressed and not very happy at all. In fact, unbeknownst to the hapless Bill, he’s about to lose his wife, his job, and his life. For a long time poor Bill has been living a lie. Tom’s happiness, by contrast, is totally solid, built on honesty, strong relationships, and so forth. From an objective point of view, from a view informed by all the facts, we’d greatly prefer Tom’s life over Bill’s.


Aristotle and the Greeks would say that Tom is both eudaimon and happy. But they deny that Bill is eudaimon, however happy he might be. And they would insist that the important thing in any case is eudaimonia.


Collectivism v Individualism


But then mightn’t you want to go one step further, and say that an individual person’s flourishing is valuable only as a means to the flourishing of society as a whole? In other words, here is the suggestion: a single person’s life is, ultimately, not important or worthwhile in itself, but only as a means to the success of society as a whole.


So the question at issue now is: Is an individual’s life intrinsically good, or is it merely instrumentally good? Is an individual’s life, well-lived, something that is desirable for its own sake, or is it desirable, ultimately, only as a means to having a happy society?


Let’s use the terms “values individualism” and “values collectivism” to mark the dispute. Here are some definitions: values individualism is the view that only individual lives (or their eudaimonia) are intrinsically valuable; individual lives are valuable not merely as a means to the flourishing of society.


Values collectivism is the view that individual lives (or their eudaimonia) are only instrumentally valuable, i.e., good only as a means to the flourishing of society; the flourishing of society (whatever this might be) is the only intrinsically good thing.


Radical values environmentalism


For the sake of completeness, let me mention one more view, which is held by some environmentalists. It shouldn’t come as any surprise to you that some people want to take matters one step further. It’s not merely the flourishing of society that is the only intrinsically good thing; it’s the flourishing of all sentient life. Or perhaps all life, period.


Radical values environmentalism is the view that the only intrinsically good thing is a flourishing ecosystem; individuals and societies are merely instrumentally valuable, good only as means to having a flourishing ecosystem.


But this seems an odd point of view, if you think about it: goodness, or value, is here attributed to an entire ecosystem, the Earth. What kind of being could validly apply the word to an ecosystem? Who would have the power to assess and judge an ecosystem as good or bad? By what criteria?


Summary: Values pluralism and the grading of values


Notice now the succession of things that we have considered as the kind of thing which is intrinsically good: we’ve gone from particular events of pleasure, to an individual’s happiness, to an individual’s eudaimonia, to the flourishing of a society, to the flourishing of an entire ecosystem. So I think that you can see that there is a rather difficult problem about the scope of the theory of value. Where do you stop, in this succession of items, in your account of what is valuable for its own sake?


If you say that an individual pleasure is valuable for its own sake, then why not say that an individual’s entire happiness is valuable for its own sake? And so on down the line: on reaching the end of this sequence, we find ourselves valuing whole ecosystems for their own sake, an attitude which itself seems metaphysical, even inexplicable.


As a values pluralist, you might say: every item in this succession of items is intrinsically good. A particular experience of pleasure, an individual’s whole life, the flourishing of society, and the flourishing of an ecosystem are all worth having for their own sake, and not merely as means to something else. So as a values pluralist you would say: I don’t have to decide which of these things is intrinsically good, because they are all intrinsically good.


That would be a nice position to be able to hold, but I’m not sure it will stand up to careful scrutiny. Why? Well, notice that sometimes, there is a conflict between different levels of goods. Sometimes we have a choice, for example, to sacrifice our own pleasure, or happiness, or even our own lives, for the sake of many other people. In cases like that, you’re weighing two things: your own individual happiness, and the more general happiness of a lot of other people. And if you conclude that you should sacrifice your own happiness, in one of these ways, what does that amount to?


It seems to me that you could say that your own life is worthwhile in and of itself, and that it is also worthwhile as a means to the happiness of others. Remember, the same thing can be both instrumentally and intrinsically good: I gave understanding, or knowledge, as one possible example. I’m saying now that a human life might be another. So that’s a way you might hold onto values pluralism. Two different things, your life and the good of society, can both be intrinsically good, even though you might sacrifice the first for the second. There’s no contradiction in saying that.


This leaves an issue unresolved, though: the issue of the relative importance of intrinsic values. If I had to rank these things in order of importance, how would the ranking go? So you could be a values pluralist and still be an individualist, or a collectivist, or a radical environmentalist. You would just have to say: the most important thing, the most valuable thing, is my own flourishing; or, instead, the flourishing of society; or, perhaps, the flourishing of the environment.


But this leaves us back at the start of the argument: on what basis do we, should we, choose in cases of conflict? Why is one thing better than another?


I hope, after all this, you can see why I say that the theory of value has an excellent claim on being the most important and the most puzzling area of philosophy. So much else in life rests on the theory of value that we accept. The crucial life decisions you make, the habits you develop, and your deepest political convictions all ultimately rest on the theory of value you adopt.