Thursday, August 21, 2014

Population ethics and inaccessible populations

Summary: On some views in population ethics, including diminishing marginal value and average views, the value of producing future generations depends on the quantity of beings and welfare in times and places beyond our causal reach. Within these viewpoints, large future populations do not automatically have overwhelming moral importance. However, if there are large inaccessible populations, then perspectives like these will, like total utilitarianism and its kin, also place overwhelming weight on the interests of large future populations. Past generations of hominids, and especially of non-human animals, greatly outnumber the current generation, and provide such an inaccessible population. Life elsewhere in the universe might do so as well. Such populations, in addition to changing the recommendations within these theories, may or may not reduce the weight given to the theories in deliberation.

Population ethics without additivity across time and place
Unbounded aggregative views in population ethics are typically motivated with some kind of independence premise: it is said that some kind of good, such as a happy life, has a certain component of value which does not depend on anything that may happen in other times and places. The value of these independent entities can then be aggregated linearly: a trillion trillion such entities will have a trillion trillion times as much of that value, and any reasonable prospects of producing very large numbers of such entities will have extraordinary expected value.

For example, philosopher Nick Beckstead's dissertation, "On the overwhelming importance of shaping the far future," builds on a claim he calls Period Independence: that the value of a civilization in a particular era is independent of the value of civilizations in other eras. Of course, there are other views which reject Period Independence, and Beckstead argues against them on the grounds that they would make history and exobiology relevant to our moral decision-making in a fashion he finds counter-intuitive:
To appreciate the rationale for Period Independence consider the following scenario:
Asteroid Analysis: World leaders hire experts to do a cost-benefit analysis and determine whether it is worth it to fund an Asteroid Deflection System. Thinking mostly of the interests of future generations, the leaders decide that it would be well worth it.
And then consider the following ending:
Our Surprising History: After the analysis has been done, some scientists discover that life was planted on Earth by other people who now live in an inaccessible region of spacetime. In the past, there were a lot of them, and they had really great lives. Upon learning this, world leaders decide that since there has already been a lot of value in the universe, it is much less important that they build this device than they previously thought.
On some views in population ethics, the world leaders might be right. For example, if we believe that additional lives have diminishing marginal value, the total value of the future could depend significantly on how many lives there have been in the past. Intuitively, it would seem unreasonable to claim that how good it would be to build the Asteroid Deflection System depends on this information about our distant past. Parfit and Broome appeal to analogous arguments when attacking diminishing marginal value and average views in population ethics. See Parfit (1984, p. 420) and Broome (2004, p. 197) for examples.

Aggregative views which embrace Period Independence have a number of theoretical virtues, and may command a plurality in the fractured field of population ethics, but there remains widespread disagreement, on various grounds, and there are reasons to examine the implications of rival views. This post will focus on two in particular.

First, there are average principles, which focus on improving average rather than total welfare. These are disfavored by many philosophers because in some circumstances they recommend creating lives not individually worth living to bring up the average, and reject the creation of lives individually well worth living but below the average, even when this would make no one else worse off (some have suggested evaluation functions that resemble average utilitarianism for positive averages, but not when averages are negative). I will be discussing averages over people across all of history, not just at one particular time (if averages at particular times sum up, then the argument from Period Independence goes through).

Nonetheless, some do defend views that incorporate an average principle as one contributor to overall valuations, e.g. philosopher Richard Chappell defends a value holism in which the value of adding life to a population depends on the distribution of welfare among existing lives. Holden Karnofsky mentions a holistic/aesthetic stance which might relate to average (and diminishing value) principles in a conversation about the importance of the future (search for "painting"):

So one crazy analogy to how my morality might turn out to work, and the big point here is I don't know how my morality works, is we have a painting and the painting is very beautiful. There is some crap on the painting. Would I like the crap cleaned up? Yes, very much. That's like the suffering that's in the world today. Then there is making more of the painting, that's just a strange function. My utility with the size of the painting, it’s just like a strange and complicated function. It may go up in any kind of reasonable term that I can actually foresee, but flatten out, at some point. So to see the world as like a painting and my utility of it is that, I think that is somewhat of an analogy to how my morality may work, that it’s not like there is this linear multiplier and the multiplier is one thing or another thing.

More importantly, average views are widely invoked by economists in welfare analysis. They may also diminish the force of some arguments for the importance of the long-run future, since population size could be scaled up far more dramatically than quality of life, and adding people with high quality of life would eventually yield diminishing returns as average quality of life approached that of the people being added.

Second, some would like to assign variable, diminishing marginal value to population or welfare as the quantities involved become more extreme. Such diminishing returns may be extreme enough to bound the value of arbitrarily extreme populations within a finite range. Theories with diminishing marginal contribution to overall value might say that it is almost as important to reach a flourishing population of 10^50 as 10^51, and that a flourishing population of 10^11 could achieve a significant portion of the value of the society of 10^50. Astronomically tiny chances of producing extreme populations would not necessarily be of great importance, depending on the details of the account, allowing concerns about small populations to carry significant weight in our deliberations. Philosopher Larry Temkin has defended such a view:
[Temkin] introduces a Capped Model for Utility (contrasted with the Standard Model) that doesn’t strictly limit the assessment of moral outcomes to utility alone, but allows for value pluralism by incorporating other moral ideals (equality, maximin, perfectionism, autonomy, virtue, friendship, achievements, family, respect all get mentioned as options). Here, Temkin wants to acknowledge the anti-aggregationist view that merely adding more total utility might not lead to a better overall outcome, particularly if the utility added would increase, for example, inequality. The Capped Model, says Temkin, allows us to avoid Parfit’s Repugnant Conclusion without rejecting Utility Theory in toto.
Even if one considers these two views less plausible than an additive aggregative unbounded view, one might still want to think about their implications for reasons of moral pluralism or normative uncertainty.

Both of these sorts of views could be significantly affected in their recommendations by the presence of inaccessible beings, as in Beckstead's Our Surprising History. Such beings could contribute to the average level of welfare, or move along the curve of diminishing value. For example, Earth's past includes large populations of humans, and significantly greater quantities of nonhuman animals.

Average welfare and past populations
Say that there is an inaccessible population (>0) of size p, and we consider creating n new lives whose average welfare exceeds that of the inaccessible population by w. The increase in average welfare will be (wn)/(n+p). As long as n is small relative to p, this is approximately wn/p, and a doubling of n approximately doubles the effect on average welfare. Until n comes within a couple of doublings of p, each doubling of n gives a similar improvement in average welfare. Beyond that, each further doubling delivers roughly half the marginal gain of the previous one, as the total effect comes ever closer to w.
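As a numerical sketch of this behavior (p and w below are arbitrary illustrative values, not figures from the post):

```python
# Increase in average welfare from adding n new lives, each with welfare
# w above the average of an inaccessible population of size p:
# gain = w*n / (n + p), as in the text.

def avg_welfare_gain(n, p, w):
    return w * n / (n + p)

p = 10**12  # illustrative inaccessible population
w = 1.0     # illustrative welfare gap

# Linear regime: while n << p, doubling n roughly doubles the gain.
for n in [10**9, 2 * 10**9, 4 * 10**9]:
    print(n, avg_welfare_gain(n, p, w))

# Saturation: once n >> p, the gain approaches w.
print(avg_welfare_gain(10**15, p, w))
```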

What does this imply in the average perspective when we think about the importance of one vs many future generations? We know that there can be very good standards of living, better than those that almost all humans and non-human animals have experienced in Earth's history, by considering the best lives we see today. Further, there is reason to think that continued technological advance would make it possible to create lives with much higher average levels of welfare, flourishing, excellence, etc. We also know that there have been many times as many humans alive in the past as are alive today, and hundreds of millions of years during which the Earth has been populated with abundant non-human animals equipped with nervous systems, which even today account for the bulk of terrestrial neurons.

If we focus on non-human animals, prehistory supplies a very large p, and substantially raising average welfare (or hitting serious diminishing returns) would require the creation of higher-welfare populations millions of times as large as current ones, whether spread across time or space. This would seem to require that civilization survive to achieve some combination of extreme stability and/or technological capacities. The total improvement to average welfare attainable by a good future would be at least millions of times as great as the value of another generation at the current time (recalling that a generation only matters in this perspective insofar as it shifts the average), and could be many times as important still if much higher average welfare is possible and attained. That level of importance would still justify talk of "the overwhelming importance of shaping the far future," although the gains could be almost wholly attained within our own star system, without interstellar colonization (although the technological sophistication and stability required for the one would seem to enable the other).

A narrower focus on past humans, and perhaps other hominids and highly sapient creatures, which might be based on contractarianism or moral agency and reciprocity, would involve a smaller p relative to current populations. While data are weak and noisy, past human populations were plausibly less than two orders of magnitude greater than the current population. This is small enough that diminishing returns could be relevant when comparing 1 and 10 generations, and current high standards of living might lie within a factor of only a few hundred of a large prosperous future in effect on average welfare (or more, if extremely high welfare were possible). That could still make future generations much more important than the current generation, but the conclusion would be more sensitive to strong countervailing considerations.
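A rough check of the 1-vs-10-generations point, using hypothetical round numbers (roughly 10^11 past humans, roughly 7 billion people per modern-sized generation; both are assumptions, not figures from the post):

```python
# Compare the average-welfare gain from adding 1 vs. 10 modern-sized
# generations against a past-human p, using the gain formula from the
# previous section: w*n / (n + p).  Round numbers are assumptions.

def avg_welfare_gain(n, p, w=1.0):
    return w * n / (n + p)

p_humans = 1e11  # rough all-time human population (assumption)
gen = 7e9        # one modern-sized generation (assumption)

g1 = avg_welfare_gain(1 * gen, p_humans)
g10 = avg_welfare_gain(10 * gen, p_humans)
print(g10 / g1)  # well under 10: diminishing returns already matter here
```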

Average welfare and large populations elsewhere
While past Earthly populations are much larger than current ones, they remain tiny compared to the scope of astronomical waste and possible future Earth-derived civilization. However, there may be other civilizations in the universe, which could have populations at least as great as those attainable from Earth. If multiple populous colonizing civilizations exist in our visible universe, or if various Big World theories are correct, then Earth-derived civilization will always be a vanishing share of the total population. If we think of ourselves as controlling the fate of only a single civilization, then p will be exceedingly vast in relative terms, and average principles will be approximately linear over any population levels we can manage. At least, that would be the case if the existence of other civilizations was known with certainty.

There is some interaction between the average principle and decision theory. On a causal decision theory, uncertainty about the existence of huge inaccessible populations elsewhere would lead to recommendations to act as if they did not exist, since on that hypothesis much larger changes to average welfare would be locally possible. On some other decision theories we should choose as though setting the output of the decision procedure used by all relevantly similar agents. Then our influence in this sense might not be negligibly small relative to the total population. For example, say we decide whether to invest heavily in making Earth's future go well, but reason "our planet is a mote in a vast universe, so the impact on average welfare will be negligible," and our reasoning is representative of that of people throughout the universe, collectively reducing its value by 10%. The dispersed agents would face a commons problem. On the other hand, if technological civilization were ludicrously rare compared to morally significant but not sapient life, then it might be that almost all the welfare in the universe is independent of decision-makers like ourselves.

Either way, if populous civilizations are known to exist elsewhere, then the average principle would seem to lead to focus on technologically advanced populous future civilization, and otherwise past terrestrial populations would lead to similar, if less extreme, conclusions within the frameworks.

Diminishing marginal value
It is more difficult to discuss views with diminishing marginal moral value of population, since the category is much more encompassing than average views. One could cash out many particular such views on which future generations would have almost no importance, e.g. the view that good is linear with net welfare up to the level of 20 QALYs, with no further gains thereafter, or a view on which each successive person contributed half the moral value. However, more commonly, proponents of such views say that returns to population are not sharply diminishing over the proportionately small variations in total population that happen regularly throughout their careers, e.g. that adding 1 or 1,000 or 1,000,000 people to a population of 7,000,000,000 would involve approximate linearity of added moral value, although extremely large population changes would result in severely diminishing returns.

But if added value is approximately linear for very small proportional changes, and past or otherwise inaccessible populations are large relative to the current generation, then we will have approximate linearity when adding 1 or 10 generations, with an analysis similar to that for average welfare, and at least moderately strong focus on populous futures (depending on what one includes in past populations).

When considering populations much larger than past or inaccessible ones, the details will depend on particular curves/preferences/exchange rates, but increases in log population might be many times greater than the increase from a single generation of modern size. The luminosity of our supercluster is about 25 orders of magnitude greater than the power Earth receives from the Sun. Increasing population by that factor would deliver over five hundred times the increase in log population of a 10% boost (which might reflect an additional human generation, considering only past humans and ruling out other civilizations). All-time log human population could more than triple, with less potential scale-up considering non-human animals. Such numbers don't dictate any particular valuation, but could easily support further gains in value for increases larger than past or otherwise inaccessible populations.
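The arithmetic in this paragraph can be checked directly (the inputs are the post's own illustrative figures; the ~10^11 all-time human count is a rough assumption):

```python
import math

# Log-population arithmetic from the text: a ~25-order-of-magnitude
# scale-up vs. a 10% population boost (the 10% boost stands in for one
# additional modern-sized generation relative to ~1e11 all-time humans).

all_time_humans = 1e11  # rough all-time human population (assumption)
scale_up = 1e25         # supercluster luminosity / Earth's solar input

full_gain = math.log10(scale_up)   # 25.0
boost_gain = math.log10(1.1)       # ~0.0414
print(full_gain / boost_gain)      # ~604 -- "over five hundred times"

# All-time log10 human population more than triples under the scale-up.
print(math.log10(all_time_humans * scale_up))  # ~36, vs. 11 today
```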

Normative uncertainty: recommendations of theories vs 'loudness'
An average or diminishing value account may imply that, given large inaccessible populations, the long-run future is relatively much more important than small short-run impacts (in intrinsic rather than instrumental terms). But such accounts may also reduce its absolute importance, e.g. one might say "because of these huge populations elsewhere, I can only make a smaller change to average welfare, or eke out less moral value from a function with diminishing returns, so perhaps I should just attend to other concerns." Other concerns could be selfish, like eating a chocolate bar, or other ethical concerns like displaying virtues, complying with deontology, or serving other goods like Temkin's "equality, maximin, perfectionism, autonomy... virtue, friendship, achievements, family, respect" and so forth.

How to account for this depends on our general approach for dealing with normative uncertainty and normative pluralism. Some approaches, such as Nick Bostrom and Toby Ord's Parliamentary Model, consider what would happen if each normative option had resources to deploy on its own (related to its plausibility or appeal), and look for Pareto-improvements. Essentially, the model is cooperation among separate autonomous individuals with different normative views. Advocates for average and diminishing value views who had updated on knowledge of inaccessible populations might then push for focus on the long-run in our actual world, as the best that could be done. However, from behind a veil of ignorance, before learning about the existence of large inaccessible populations, they might have preferred a deal in which their precepts would be followed in a case where great good could be done by their lights, e.g. an Adam and Eve scenario, in exchange for deferring to other concerns in worlds with big inaccessible populations.

Others defend an 'expected choiceworthiness' approach based on expected utility theory that attempts to convert various perspectives into utility functions and set exchange rates between them. One approach is to force every theory to be represented as a bounded utility function, with utilities defined in terms of their relative position between maximum and minimum utility. Such an approach strongly penalizes theories that place great concern on options and events which are not actual (a theory A which relatively values all outcomes identically to theory B except for assigning huge value to riding an invisible pink unicorn will have far less weight in deliberations), and so would penalize average and diminishing value views. Other ways of setting rates of exchange between theories are varied, and much would depend on the specifics of ways to assign 'loudness' to preference relations.

The scale of past populations
Precise historical and prehistoric population figures are unattainable, but we can get a rough sense. If we assess the scale of the past by the raw numbers, biomass, or neurons of non-human animals, it is clear they dwarf humans in a generation of 'business as usual.' Non-human populations vary with changes in climate and evolution, but the bulk of animal neural tissue is found in numerous small invertebrates, which have been present for hundreds of millions of years. Here's Wikipedia's evolutionary timeline:

In its 4.6 billion years circling the sun, the Earth has harbored an increasing diversity of life forms:
  • for the last 600 million years, simple animals;
  • for the last 550 million years, bilaterians, water life forms with a front and a back;
  • for the last 300 million years, reptiles;
  • for the last 200 million years, mammals;
  • for the last 150 million years, birds;
  • for the last 130 million years, flowers;
  • for the last 60 million years, the primates;
  • for the last 20 million years, the family Hominidae (great apes);
  • for the last 2.5 million years, the genus Homo (human predecessors).

If we restrict our attention to creatures with human-like intelligence, perhaps to consider moral agents as opposed to moral patients, the current generation is relatively much larger. The current human population of over 7 billion is orders of magnitude greater than human populations over most of the species' history, although that history is long enough that Earth's human dead outnumber the living. The conclusion is robust but details are sensitive to estimates of populations before good records were available, and which hominids to include.

One widely-cited estimate of 108 billion, by the Population Reference Bureau, uses a cutoff of 50,000 years ago, i.e. behavioral modernity as opposed to anatomical modernity. On closer investigation, the details seem quite problematic. The estimate fixes population benchmarks at particular dates, assumes constant growth between those points, and assigns birth rates to each period.

The later population estimates are more credible, but the pre-agricultural estimates are a joke, with a starting population of 2 as a play on the Adam and Eve myth.

Year          Population      Births per 1,000   Births between benchmarks
50,000 B.C.   2               -                  -
8000 B.C.     5,000,000       80                 1,137,789,769
1 A.D.        300,000,000     80                 46,025,332,354
1200          450,000,000     60                 26,591,343,000
1650          500,000,000     60                 12,782,002,453
1750          795,000,000     50                 3,171,931,513
1850          1,265,000,000   40                 4,046,240,009
1900          1,656,000,000   40                 2,900,237,856
1950          2,516,000,000   31-38              3,390,198,215
1995          5,760,000,000   31                 5,427,305,000
2011          6,987,000,000   23                 2,130,327,622

Number who have ever been born: 107,602,707,791
World population in mid-2011: 6,987,000,000
Percent of those ever born who are living in 2011: 6.5
Economist Brad DeLong cites Kremer's (1993) estimates of historical and prehistoric populations, giving lower population figures for the agricultural age and thereafter, but giving some preagricultural estimates:

[Chart: Kremer (1993) estimates of historical and prehistoric global population]

The current chimpanzee population, after massive habitat destruction, is estimated to be in the six-figure range, with estimates of earlier populations in the millions. Neanderthals (mostly confined to Europe) have had their population estimated in the thousands to tens of thousands. Including millions of hominids and great apes over millions of years would substantially augment past populations.

Depending on details of exclusion and estimation we might put the past human/hominid/great ape population as 1-3 orders of magnitude greater in scale than the current generation, while the mass of past non-primate animals exceeds that of 50 years of business as usual by 6-7 orders of magnitude.


7 comments:

Toby Ord said...

Great post Carl. This goes very well with some of Nick Beckstead's more recent thoughts on how surprisingly many theories of population ethics suggest that the future is very important. I'll try to look more systematically at some of this in the near future on my population ethics grant.

For example, while it is interesting that the Average view gives a large weight to the future (under some plausible auxiliary assumptions), it is not clear how relevant this is since there appear to be precisely zero philosophers who advocate the Average view (at least to the point of writing a paper defending it). However, since many conflicts in population ethics are cases where one trades off Average for Total or vice versa, it may be possible to modify this example into one that applies to a huge swath of possible theories at once.

Indeed this looks quite easy, at least in some versions. For example if we just compare extinction soon with a large and prosperous future, we get that it is a large Pareto improvement in terms of both total and average. This should capture many theories (though Critical Level and Person Affecting theories are definitely among the exceptions).

Unknown said...

Do you think that the potential impact of a single individual on population size is small enough to judge it with critical-level utilitarianism (or an analogous non-utilitarian theory)? Critical-level utilitarianism (CLU) judges new lives by comparing them with an unchanging neutral life, and other population principles that agree with it in constant-population cases can be linearly approximated by it (the "CLU approximation"). I think this approximation is very convenient when it's accurate, since it allows you to reduce all the various population principles to a single dimension of "population strictness" - how good a life has to be to be neutral.

For reasonably smooth population ethics this approximation works for short-term population change achievable by one person. For example, the CLU approximation works for average utilitarianism as long as your actions can't possibly change the population size by more than a few percent. For things like existential risk reduction, though, the CLU approximation fails: if successful, you might hugely change the human population size.

How well do you think the CLU approximation works when considering far-future populations? The answer does depend on the size of inaccessible populations: if there are more of them, your potential impact is relatively smaller, and the approximation will be better.

Pablo said...

As Toby points out, the Average View is advocated by no living professional philosopher. As a colorful anecdote, I once was dining with John Broome and as soon as I mentioned an implication of average utilitarianism, he interrupted me, saying "We can safely ignore what follows from that theory, since no one takes it seriously."

However, some respectable thinkers do endorse other population theories that violate utility or existence independence (as those conditions are defined in this paper). So Carl's findings are relevant for those who hold these theories, or those who take moral peer disagreement seriously.

Toby, why do you say that critical level theories are an exception? Unless the critical level is set unreasonably high (such that current lives turn out to be not worth living), lives in a "large and prosperous future" would be well above the critical level. The value of indefinite human survival would therefore be higher than the value of premature extinction not only on total and average theories, but also on critical level theories.

Brian Tomasik said...

Toby: "For example if we just compare extinction soon with large and prosperous future, we get that it is a large pareto improvement in terms of both total and average."

Yes, but the crucial question is how x-risk work compares with more ordinary efforts to improve human welfare in the short run. Those efforts are also Pareto improvements for both views.

Nick Beckstead said...

Brian, I think Toby's point is as follows. Carl has argued that, under certain plausible assumptions about inaccessible populations, both total and average utilitarianism give overwhelming weight to very long-term considerations in certain contexts. Many (though not all) issues in population ethics involve making trade-offs between average quality of life and number of people (see e.g. p. 403 of Parfit's Reasons and Persons, or Hurka's variable value theory). But in these cases where very long-term considerations get overwhelming weight according to both average and total perspectives, many other theories would also give them overwhelming weight. Carl's argument can thus be extended to increase the importance of very long-term considerations from a wider variety of ethical perspectives than it appears at first, despite the fact that very few philosophers take strict average utilitarianism seriously.

Brian Tomasik said...

That's a good summary, Nick. I guess the key word in Toby's sentence is not "pareto" (which is true for short- or long-term work) but "large" (which is more true for long-term work).

Toby Ord said...

Pablo: I excluded CLU from my analysis because there can be Pareto improvements according to Average and Total which are worse according to CLU (e.g. moving from some lives at 10% of the critical level to twice as many lives at 20% of the critical level). So my argument didn't cover CLU in full generality. However, you are right that once the lives to be added are above the CL, CLU will recommend things if they improve total and average (or at least it looks like it -- I haven't checked thoroughly).

Nick: Yes that's right.