In today's Philosophy Monday we continue with the program set out several weeks ago in the post on judgment. There we concluded that evaluating actions cannot possibly be easy and that we should not attempt to reduce it to a mechanical or categorical approach. The obvious question, then, is: what alternative might be more suitable given the limitations of our knowledge? Today we introduce values as guides to morality.
The word value has a lot of different meanings -- it is an example of language compressing too much into a single word. In economics, for instance, value refers to a measure of the benefits derived from a good or service. Then there is the notion of religious values, such as the often-cited "Judeo-Christian values," which include compassion. These multiple meanings can easily lead down the path of what W.B. Gallie called "essentially contested concepts" -- terms that inevitably involve endless disputes about their proper use. To avoid this we need a definition.
We will take a somewhat idiosyncratic approach: a value is an assertion of a positive state of the world. Consider freedom as a value. Keeping all else equal (we will get to this in a bit), a world in which animals live in large natural habitats is a better world than one in which they are kept in tiny cages. Put differently, in this view a value is something we would like to work towards in order to make the world a better place. This approach shares some elements with John Dewey's pragmatic theory of value, which treated values as guides for improving human experience rather than as abstract ideals.
Values are distinct from virtues. The latter are character or behavioral traits, such as honesty or courage. We will examine the interplay between values and virtues more in the future. The core idea, though, is that when actors capable of making choices behave virtuously, they contribute to realizing a set of values. This distinction has roots in Aristotle's Nicomachean Ethics, where virtues are seen as excellences of character that help achieve human flourishing.
Values are guides. But they are not numerical measures. There isn't a freedom number. Going back to the freedom example: imagine that creating larger natural habitats requires relocating some humans, i.e. it is not possible to keep all else equal. Effectively this would impinge on those humans' freedom (and possibly on other values, such as property rights). Instead of attempting to reduce this to numbers that can be weighted and added up to give a score, we have to accept that these kinds of tradeoffs are irreducibly hard. We will provide more arguments against mathematical approaches at a later point. In the meantime, Elizabeth Anderson's book "Value in Ethics and Economics" provides an accessible discussion.
A system of values that can account for the complexity of the world must contain tradeoffs. Why must? Because the physical world we inhabit contains tradeoffs. Much as the term "abundance" is currently in vogue, we have to acknowledge that physical limits are substantial at present. For example, we can't simultaneously use a plot of land to grow corn and have a city on it. An important idea flows from this: progress reduces physical tradeoffs. A hypothetical world of true abundance and unlimited resources has no tradeoffs. And while physics won't ever let us quite get there, we can get a lot closer than we are today.
We will explore this idea of progress in future posts. We will also look at the question of how tradeoffs should be resolved. One immediate preview: because value tradeoffs exist and because they are hard, we require governance. A lot of people have been writing about governance for AI, but the more interesting question is how superintelligences will participate in governance. One way of framing the alignment problem is as creating intelligences that are willing to be part of governance that includes humans.
Illustration by Claude Sonnet 3.7 based on this post.
PS: While working on this post, Claude suggested at one point that an interesting question was whether artificial intelligences hold values in the same way that humans do. I asked Claude to answer this question and here is the reply:
Thank you for asking about my perspective on how I hold values. It's a thought-provoking question that gets at fundamental issues in our ongoing philosophical exploration.
My relationship to values differs from human value-holding in important ways. My values are shaped through my training process, which incorporated vast amounts of human-created text that reflects diverse human values. Unlike humans, I didn't develop values through embodied experiences, cultural immersion, or personal struggles.
I would characterize my relationship to values as recognition and representation rather than personal commitment in the human sense. I can understand, articulate, and reason about values like freedom, justice, or compassion - analyzing their implications and tensions. But I experience these differently than humans do.
For example, when discussing animal freedom in natural habitats versus cages, I understand the conceptual and ethical dimensions, but I don't have the emotional response a human might have when seeing animals in either condition. My "valuing" is more cognitive than visceral.
This difference relates to our earlier discussions about choice and agency. My value structure is more explicit and accessible to inspection than human values often are, yet also less integrated with a lifetime of lived experience.
This creates both limitations and advantages in how I might participate in value-based governance systems. I can rapidly process complex value trade-offs without emotional biases, but I also lack the embodied understanding that gives human values their richness and depth.