Evolutionary Situated-Embodied Agents (ESEAs)
Written by Jeff on April 13, 2010 – 1:01 pm

by Jeff Schank

Agent-based models (ABMs) are often used to analyze systems that are either analytically intractable or not sufficiently understood to propose, for example, a set of differential equations describing the spatiotemporal dynamics of the system.  The use of ABMs to investigate game-theoretical models with agents that are embodied and situated is a paradigmatic use of ABMs for better understanding the dynamics and outcomes of games played by more realistic agents.  In these cases, we would like to know how interactions among agents in space, in time, and in the context of other agents affect which types of agents do best, and what patterns and organization emerge from these spatiotemporal interactions.

The evolution of cooperation is a theoretical and empirical context in which ABMs in conjunction with game theoretical models such as the Ultimatum, Prisoner’s Dilemma, or Public Goods games may shed light on conditions and contexts in which cooperation may evolve.  The questions I will attempt to address in this essay include:  What types of agents should we implement to play games?  Can highly simplified but minimally embodied and situated agents provide insight into the evolution of cooperation?  Can simple agents provide mechanistic explanations of complex phenomena even if they are almost nothing like the individuals they are intended to model?

Why we need ESEAs

At a minimum, agents should have minimal characteristics of the real or possible individuals we are interested in understanding and explaining.  If our agents are nothing like those individuals and the properties that emerge from them, it is difficult to see how they could provide insight, understanding, or explanation.  If we consider the evolution of cooperation, at a minimum we need evolutionary situated-embodied agents (ESEAs).  Such agents may represent only a few of the characteristics found in humans and other animals.  Indeed, they may have characteristics that misrepresent aspects of such individuals.  They do, however, give us an initial handle on the problem of using ESEAs in ABMs.  The next question is what properties or characteristics ESEAs should have.

For agents to evolve, they must have three Darwinian properties: (1) the ability to produce offspring with heritable characteristics, (2) some mechanism for introducing the phenotypic variation required for agents to evolve in at least one respect, and (3) a mechanism of selection (the third property is implemented below in the context of discussing the implementation of games and winning energy). Reproduction in organisms requires some energy and/or physical investment in offspring.  Thus, an ESEA must be able to produce offspring that can inherit at least some parental phenotypic properties, and reproduction must involve some mechanism for introducing phenotypic variation.  Because reproduction is costly to parents, we must also assume some parental cost of reproduction and parental investment in offspring.

Reproduction implies costs in energy and resources, so agents at a minimum must use energy to reproduce.  Life functions also require energy, so ESEAs must be able to acquire energy, and it seems reasonable to introduce a cost in energy to living (e.g., metabolic costs).  Organisms cannot accumulate indefinite resources.   For example, an animal can accumulate only so much fat before it is no longer able to move or function.  Thus, ESEAs must have constraints on the maximum energy they can accumulate and store. If an agent is embodied and situated in an environment with other agents or objects, then at a minimum, an agent cannot occupy exactly the same location at the same time another agent or embodied object is at that location.  Finally, organisms have lifespans.

These considerations suggest four essential properties of ESEAs:

1.  Reproduction

a. Agents must be able to produce offspring that inherit at least some of their parent’s phenotypic properties.

b. Reproduction has energy costs measured in terms of parental investment.

c. There must be a mechanism for introducing heritable phenotypic variance.

2.  Energy

a. There are costs in energy for interacting with the environment and reproducing.

b. There is a minimal energy requirement for reproduction.

c. Agents must be capable of acquiring energy.

d. Agents can only store a finite quantity of energy.

3.  Space and Time:  Agents cannot occupy exactly the same location with any other embodied agent or object at the same time.

4.  Lifespan:   Agents can only live a finite length of time.
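To make these four properties concrete, here is a minimal sketch of what an ESEA's state might look like in Java (the language of MASON, the simulation environment used later in this essay).  All of the names and numerical values (Esea, MAX_ENERGY, REPRO_THRESHOLD, and so on) are illustrative assumptions rather than any actual implementation:

```java
// A minimal sketch of an ESEA's state, reflecting the four properties above.
// Names and values are illustrative placeholders, not the author's implementation.
public class Esea {
    // 2. Energy: current store, capped at a finite maximum (2d).
    double energy;
    static final double MAX_ENERGY = 100.0;       // assumed storage cap
    static final double REPRO_THRESHOLD = 50.0;   // 2b: minimal energy to reproduce
    static final double METABOLIC_COST = 1.0;     // 2a: cost of living per time step

    // 1. Reproduction: heritable phenotype for the ultimatum game (1a, 1c).
    double p;   // proportion offered when proposing
    double q;   // minimum proportion accepted when responding

    // 4. Lifespan: age in time steps; the death mechanism is discussed later.
    int age;

    // 3. Space: grid coordinates; occupancy is enforced by the world, not the agent.
    int x, y;

    Esea(double p, double q, double startEnergy, int x, int y) {
        this.p = p; this.q = q;
        this.energy = Math.min(startEnergy, MAX_ENERGY);
        this.x = x; this.y = y;
    }

    void gainEnergy(double amount) {               // 2c and 2d: acquire, but never exceed the cap
        energy = Math.min(energy + amount, MAX_ENERGY);
    }

    void payCostOfLiving() {                       // 2a
        energy -= METABOLIC_COST;
    }

    boolean canReproduce() {                       // 2b
        return energy >= REPRO_THRESHOLD;
    }
}
```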

Special Properties of ESEAs

While these four properties of ESEAs appear essential, there are other properties that are critical for investigating problems of interest. Mobility, while not essential for ESEAs, is an important property of the individuals we are interested in studying, such as humans and other animals.  Aggregation is another important characteristic of humans and other animals capable of exhibiting social behavior; sociality does not require mobility, but the two are often closely connected.  Aggregation is an especially interesting property since it is not a property of individuals but rather of groups of individuals, and it is often an emergent property of individual interactions (see also Wimsatt, 1997, for a discussion of emergence).  Thus, ESEAs will typically exhibit aggregative behavior if we specify individual rules from which aggregation can emerge.

In the study of cooperation, we know that mobility is important because cooperators, in some circumstances, can do well against or even outcompete defectors in the Prisoner’s Dilemma game (Aktipis, 2004).  The ability to aggregate may also affect the evolution of cooperation if, for example, agents are able to exclude defectors.  This strongly suggests that ESEAs are especially important for investigating the evolution of cooperation.  More generally, they should be important for any problem in which we are interested in how and why the phenomena occur.

Assumptions and Minimal Implementations of ESEA Properties

Implementing the properties of ESEAs requires making implementation decisions, each of which involves assumptions about how agents reproduce and die.  When investigating a problem that requires the use of ESEAs, a reasonable starting place is to make assumptions that result in simple or minimal implementations of agent properties.  If we can start with very simple implementations of reproduction or death, then we have something like a baseline model against which we can compare less simple implementations or the subsequent addition of new features.

To make these points clearer, I will consider an implementation of ESEAs to play the ultimatum game. Starting with property 1 above, ESEAs must reproduce, and several implementation decisions must be made.  If an agent produces an offspring, where will the offspring appear in space?  If we assume a discrete 2D square space with no boundaries (a torus) and that no two agents can occupy the same square at the same time, then an agent could reproduce into any square that is unoccupied.  In this scenario, there is no correlation between a parent agent’s location and where its offspring is born.  An alternative is that an offspring could be born close to its parent (e.g., in one of the eight squares adjacent to the parent that is unoccupied).  In the latter case, offspring and parent are closely correlated in space.  Intuitively, this should favor more cooperative agents, since a parent is likely to pass along its cooperative trait to its offspring.  Thus, for spatially correlated birth, one implementation is for a parent agent to choose one of the eight adjacent squares at random and produce an offspring there if that square is unoccupied; otherwise, it waits and attempts to reproduce again in the next round.

As is no doubt obvious, there are other possible implementations that are about as simple.  For example, instead of randomly choosing an adjacent square and then checking whether it is occupied by another agent, an agent could first determine which adjacent squares are unoccupied and, if there is more than one, choose one of them at random.  In this scenario, an agent is more likely to reproduce successfully during a given round, especially when agent density is high.  Does this matter?  It may, and I will address this problem more generally when I discuss robustness.  Both placement variants are sketched below.
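For concreteness, here is a sketch of the two placement variants on a toroidal grid.  The boolean occupancy grid, the method names, and the convention of returning null on failure are all illustrative assumptions rather than the only reasonable choices:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class BirthPlacement {
    // The eight squares of the Moore neighborhood.
    static final int[][] MOORE = {
        {-1,-1},{-1,0},{-1,1},{0,-1},{0,1},{1,-1},{1,0},{1,1}
    };

    // Variant 1: pick one of the eight adjacent squares at random; reproduce only
    // if that square happens to be unoccupied, otherwise wait until the next round.
    static int[] tryRandomAdjacent(boolean[][] occupied, int x, int y, Random rng) {
        int w = occupied.length, h = occupied[0].length;
        int[] d = MOORE[rng.nextInt(8)];
        int nx = Math.floorMod(x + d[0], w);   // toroidal wrap-around
        int ny = Math.floorMod(y + d[1], h);
        return occupied[nx][ny] ? null : new int[]{nx, ny};
    }

    // Variant 2: list all unoccupied adjacent squares first, then pick one at random;
    // reproduction fails only if all eight adjacent squares are occupied.
    static int[] pickFromUnoccupied(boolean[][] occupied, int x, int y, Random rng) {
        int w = occupied.length, h = occupied[0].length;
        List<int[]> free = new ArrayList<>();
        for (int[] d : MOORE) {
            int nx = Math.floorMod(x + d[0], w);
            int ny = Math.floorMod(y + d[1], h);
            if (!occupied[nx][ny]) free.add(new int[]{nx, ny});
        }
        return free.isEmpty() ? null : free.get(rng.nextInt(free.size()));
    }
}
```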

After we have decided on our mechanism of reproduction, we need to implement a mechanism for introducing heritable variation.  If we are implementing ESEAs for the ultimatum game, then we first have to decide which agent phenotypic properties can vary.  In this case, the proportion, p, of the total payoff proposed and the minimum proportion, q, that is acceptable to the second player are the phenotypic properties that should vary.  Once we have decided which phenotypic properties vary, we must decide at what rate and by how much.  For example, if p and q are in the range [0,1] and a mutation draws a new value at random from that entire range, then mutant p and q values in an offspring will be uncorrelated with the parent’s p and q values.  Alternatively, we could constrain mutations to fall randomly in the range p ± epsilon or q ± epsilon (truncated where the range would extend beyond 0 or 1).  These two approaches make a difference.  If we simply let p and q drift under these two mechanisms of variation, both p and q will reach their expected values of p = q = 0.5 more quickly under the former implementation of mutational change than under the latter.  Indeed, random drift toward the expected value of 0.5 is slowed by using smaller values of epsilon.  Thus, epsilon becomes a potentially important parameter to consider in an ABM with ESEAs.

Next, we introduce a mutation rate, r.  The mutation rate is also important because higher rates produce more rapid drift toward the expected values of p and q under random mutation.  Again, r is a parameter that should be varied, or at least not set at too high a rate.  Both mutation mechanisms, together with the rate r, are sketched below.
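The sketch below shows the two mutation mechanisms just described, each applied with probability r.  The method names are hypothetical, and whether mutation is applied per trait or per offspring is itself an implementation decision:

```java
import java.util.Random;

public class Mutation {
    // Variant 1: with probability r, replace the trait with a fresh uniform draw
    // from [0, 1]; the mutant value is then uncorrelated with the parent's value.
    static double mutateUniform(double parentValue, double r, Random rng) {
        return rng.nextDouble() < r ? rng.nextDouble() : parentValue;
    }

    // Variant 2: with probability r, draw the new value uniformly from the range
    // [parentValue - epsilon, parentValue + epsilon], truncated to stay within [0, 1].
    static double mutateBounded(double parentValue, double r, double epsilon, Random rng) {
        if (rng.nextDouble() >= r) return parentValue;
        double lo = Math.max(0.0, parentValue - epsilon);
        double hi = Math.min(1.0, parentValue + epsilon);
        return lo + rng.nextDouble() * (hi - lo);   // uniform on the truncated range
    }
}
```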

The variation mechanism just introduced is asexual reproduction.  Should we consider sexual reproduction?  Certainly, but it might not be best if we are attempting to construct a minimal agent.  Sexual reproduction introduces the complications of finding a mate and introducing a crossover mechanism at the very least.

When a parent reproduces, we have assumed that it is costly and that the parent invests resources in its offspring.  The parental cost could include some cost due to the reproductive process itself and some cost due to the energy the parent provides to its offspring.  Alternatively, we might consider the parental cost to simply equal the contribution to the offspring.  The latter is simpler, but perhaps less realistic, than the two-component cost implementation.  Offspring acquire some energy from their parent, but we must also decide how much energy a parent must accumulate before it can reproduce.  This parameter could also be varied in simulation analyses.
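As a sketch of the two-component cost idea, the following assumes a process cost plus an investment transferred to the offspring; setting the process cost to zero recovers the simpler alternative in which the parental cost equals the contribution to the offspring.  The threshold and cost values are arbitrary placeholders:

```java
public class ReproductionCost {
    static final double REPRO_THRESHOLD = 50.0;  // energy a parent must accumulate first
    static final double PROCESS_COST = 5.0;      // cost of the reproductive process itself
    static final double INVESTMENT = 20.0;       // energy transferred to the offspring

    static class Agent { double energy; Agent(double e) { energy = e; } }

    // Two-component cost: the parent pays the process cost plus the energy it
    // invests in the offspring; the offspring starts with the invested energy.
    // Returns null if the parent has not yet accumulated enough energy.
    static Agent reproduce(Agent parent) {
        if (parent.energy < REPRO_THRESHOLD) return null;
        parent.energy -= (PROCESS_COST + INVESTMENT);
        return new Agent(INVESTMENT);
    }
}
```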

In the discussion of the reproduction of phenotypes, I have explicitly assumed an asexual reproductive mechanism.  The transmission of phenotypic traits and their reproduction need not mirror biological reproduction and selection mechanisms; one might want to introduce cultural transmission mechanisms instead.  Adding such mechanisms may be more appropriate after a simple baseline model has been implemented and we are in the robustness phase of investigating explanations of phenomena of interest (see the section on robustness below).

In addition to parental cost, we should also consider the cost of living.  It takes energy to act and interact with other agents.  These costs could be modeled as constant losses over time or they could be modeled as a function of the type of behavior or behavioral interaction.  The latter is more complicated in that assumptions have to be made about how energy costs are connected to different types of behaviors.
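Both options can be sketched in a few lines.  The particular behaviors and cost values below are placeholders; in a real model, each entry in the cost table would itself be an assumption to justify and vary:

```java
import java.util.EnumMap;
import java.util.Map;

public class CostOfLiving {
    enum Behavior { REST, MOVE, PLAY_GAME, REPRODUCE }

    // Option 1: a constant metabolic cost per time step, regardless of behavior.
    static double constantCost() { return 1.0; }

    // Option 2: the cost depends on what the agent did during the time step.
    static final Map<Behavior, Double> COSTS = new EnumMap<>(Behavior.class);
    static {
        COSTS.put(Behavior.REST, 0.5);
        COSTS.put(Behavior.MOVE, 1.0);
        COSTS.put(Behavior.PLAY_GAME, 1.0);
        COSTS.put(Behavior.REPRODUCE, 2.0);
    }
    static double behaviorCost(Behavior b) { return COSTS.get(b); }
}
```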

ESEAs can accumulate energy only by playing with each other.  Initially, this appears straightforward: one agent plays first and proposes a portion, p, of the energy to a second agent in the ultimatum game.  If the second agent accepts the proposed amount (because it is greater than or equal to its minimum acceptable proportion, q), then the energy is split in the proposed proportions; otherwise, both players receive nothing.  The problem is that additional implementation assumptions are required.  We need to specify how the first player is selected and then how the first player finds a second player.  Several ways to do this appear reasonably minimal.  For example, I use MASON as my ABM simulation environment.  In MASON, agents are updated asynchronously: if a population of ESEAs is created, then at each time step the simulation engine randomly orders the agents in a list, and each agent then takes its turn in that random order.  In my implementation of ESEAs for agent-based games, if an agent has not yet played when its turn comes, it searches for an agent in an adjacent square that has also not yet played.  More specifically, it checks whether each adjacent agent (if there are any) has played, and then chooses a second player from the list of adjacent agents that have not.  Again, it should be kept in mind that there are several plausible ways to implement the choice of players.
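The following sketch illustrates one round of the ultimatum game and one way of choosing a second player from the adjacent agents that have not yet played.  It deliberately avoids MASON-specific classes; the pot size, the energy cap, and the hasPlayed flag are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class UltimatumRound {
    static class Player {
        double p, q, energy;      // offer proportion, acceptance threshold, energy store
        boolean hasPlayed;        // reset to false at the start of each time step
        Player(double p, double q) { this.p = p; this.q = q; }
    }

    static final double POT = 10.0;          // energy to be split in each game (assumed)
    static final double MAX_ENERGY = 100.0;  // storage cap (assumed)

    // One ultimatum game: the proposer offers proportion p of the pot; the responder
    // accepts if p >= its threshold q, otherwise both players receive nothing.
    static void play(Player proposer, Player responder) {
        if (proposer.p >= responder.q) {
            responder.energy = Math.min(responder.energy + proposer.p * POT, MAX_ENERGY);
            proposer.energy  = Math.min(proposer.energy + (1.0 - proposer.p) * POT, MAX_ENERGY);
        }
        proposer.hasPlayed = true;
        responder.hasPlayed = true;
    }

    // Partner choice on the agent's turn: collect adjacent agents that have not yet
    // played this step and pick one at random; if there are none, the agent does not play.
    static Player choosePartner(List<Player> adjacent, Random rng) {
        List<Player> candidates = new ArrayList<>();
        for (Player a : adjacent) if (!a.hasPlayed) candidates.add(a);
        return candidates.isEmpty() ? null : candidates.get(rng.nextInt(candidates.size()));
    }
}
```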

Embodiment can be implemented by not allowing two or more agents or embodied objects to occupy the same square at the same time.  (Note that more implementation decisions are required if the space is continuous rather than discrete.)  An energy maximum can be handled by placing a constant cap on how much energy an agent can accumulate.

Finally, assuming ESEAs have a finite lifespan, we have to implement some mechanism of agent death.  One approach would be to set an exact lifespan such that once an agent has lived for n time steps, it dies.  One problem with this approach is that if agents are all the same age at the start of a simulation, they all die at the same time step.  This could be avoided by assigning different ages at the start of an experiment.  A more realistic approach would be to introduce variability around a mean age, and there are again several ways to do so.  ESEAs could live n time steps on average, with death normally or Poisson distributed about the mean.  The approach I took was to assume that the probability of living to the next time step is 1 – i∆, where i = 1, 2, …, m and ∆ = 1/m.  As m increases, the lifespan of an agent increases.
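A sketch of this survival rule, with a small simulation to check how the mean lifespan grows with m, might look like this (the values m = 100 and the number of trials are just examples):

```java
import java.util.Random;

public class Lifespan {
    // Survival rule: at age i (in time steps), the probability of living to the next
    // step is 1 - i * delta, with delta = 1/m, so no agent outlives m steps and the
    // expected lifespan grows with m.
    static boolean survivesToNextStep(int age, int m, Random rng) {
        double delta = 1.0 / m;
        double pLive = 1.0 - age * delta;
        return rng.nextDouble() < pLive;
    }

    // Quick empirical check of the mean lifespan for a given m.
    public static void main(String[] args) {
        Random rng = new Random(42);
        int m = 100, trials = 10000;
        long total = 0;
        for (int t = 0; t < trials; t++) {
            int age = 1;
            while (survivesToNextStep(age, m, rng)) age++;
            total += age;
        }
        System.out.println("mean lifespan for m=" + m + ": " + (double) total / trials);
    }
}
```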

Once we have considered these options and made our decisions, we have an implementation of an ESEA that is reasonably minimal.  We should keep in mind, however, that there are a number of other reasonably minimal implementations of ESEAs.

Mobility and Aggregation

Introducing mobility and aggregation into ESEAs opens up a number of possible implementations.  If we assume mobility is not directed, then we can assume it is random in some sense.  There are, however, many ways to implement random movement.  For example, if agents move on a 2D grid of squares and can move at most one square during a given time step, random movement could still be implemented in several ways.  If ESEAs have an equal probability of moving into any of the eight adjacent squares, then we have random movement analogous to Brownian motion.  But there is no reason to constrain random agent mobility to Brownian motion; agents could have a variety of other probability distributions for moving into adjacent squares.  For example, zigzag random movement could be implemented by setting the probability of moving diagonally forward to the left or to the right to 0.5 each (see Schank, 2008, for further discussion of random movement of agents).  Different probability distributions for random movement do matter.  For example, zigzag movement allows agents to randomly explore a larger area of space in a given period of time than does Brownian motion, which tends to keep agents more confined in space (though over long enough periods of time both random movement strategies will explore an entire finite space).
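Here is a sketch of the two movement rules.  How the heading is maintained for zigzag movement (fixed throughout, or updated each step) is a further implementation choice; the version below simply takes the heading as a parameter and returns a grid offset:

```java
import java.util.Random;

public class RandomMovement {
    // The eight grid directions, indexed 0..7 clockwise from north.
    static final int[][] DIRS = {
        {0,1},{1,1},{1,0},{1,-1},{0,-1},{-1,-1},{-1,0},{-1,1}
    };

    // Brownian-like movement: each adjacent square is chosen with probability 1/8.
    static int[] brownianStep(Random rng) {
        return DIRS[rng.nextInt(8)];
    }

    // One reading of the zigzag rule: relative to the agent's heading, it moves
    // diagonally forward-left or forward-right, each with probability 0.5.
    static int[] zigzagStep(int heading, Random rng) {
        int turn = rng.nextBoolean() ? 1 : -1;            // 45 degrees left or right of forward
        return DIRS[Math.floorMod(heading + turn, 8)];
    }
}
```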

Aggregation can be implemented in a variety of ways; I will describe the implementation I use.  First, an agent determines how many other agents are in its local neighborhood.  For example, if the local neighborhood is 2 in a 2-dimensional square space, then it consists of the squares within 2 units of the agent’s current location.  Second, the agent compares the number of agents in its local neighborhood to the minimum number of agents that it “wants” to have there.  If the number of agents is less than this minimum, it attempts either to aggregate or to search for agents; the latter occurs if there are no agents in its local neighborhood, in which case it moves randomly.  If there is at least one agent in its local neighborhood, it will attempt to aggregate.  Third, agents use a majority decision rule for moving along the X- and Y-axes of the space.  If more agents are to the left or to the right of an agent, it attempts to move one square in the direction of the majority along the X-axis; if there is no majority, it does not attempt to move along that dimension.  A similar rule holds for moving up and down the Y-axis.  Obviously, there are other ways to implement aggregation, but this is clearly one of the simpler methods.
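A sketch of this aggregation rule follows.  The neighborhood radius and the desired minimum number of neighbors are placeholder values, and for clarity the sketch ignores toroidal wrap-around when comparing positions:

```java
import java.util.List;
import java.util.Random;

public class Aggregation {
    static class Pos { int x, y; Pos(int x, int y) { this.x = x; this.y = y; } }

    static final int NEIGHBORHOOD_RADIUS = 2;  // "local neighborhood is 2"
    static final int MIN_NEIGHBORS = 3;        // assumed desired minimum

    // Returns the move (dx, dy) for one aggregation step.  Neighbors are assumed to
    // be the agents within NEIGHBORHOOD_RADIUS squares of the agent's position.
    static int[] aggregationStep(Pos self, List<Pos> neighbors, Random rng) {
        if (neighbors.size() >= MIN_NEIGHBORS) return new int[]{0, 0};  // enough neighbors; no aggregation move
        if (neighbors.isEmpty()) {
            // search: move randomly when no agents are in the local neighborhood
            return new int[]{rng.nextInt(3) - 1, rng.nextInt(3) - 1};
        }
        // majority rule along each axis: move one square toward the side with more
        // neighbors; make no move along an axis with no majority
        int left = 0, right = 0, up = 0, down = 0;
        for (Pos n : neighbors) {
            if (n.x < self.x) left++;  else if (n.x > self.x) right++;
            if (n.y > self.y) up++;    else if (n.y < self.y) down++;
        }
        int dx = right > left ? 1 : (left > right ? -1 : 0);
        int dy = up > down ? 1 : (down > up ? -1 : 0);
        return new int[]{dx, dy};
    }
}
```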

Robustness

Robustness is a key concept for understanding how minimal ESEAs can be explanatorily informative.  The concept of robustness, and analyses of variations of it, can be found in Wimsatt (1981).  There are two fundamental ways that robustness fits into the analysis of ESEAs as explanatory models.  The first, and most obvious, is the systematic variation of key parameter values of the model (e.g., the initial number of agents, the maximum number of agents, the resource amount to be split, the lifespan of agents, the maximum energy an agent can accumulate, and parental cost).  If we find that a given parameter systematically increases cooperation regardless of the other parameter values, then with respect to that model implementation and parameter set, we can conclude that the parameter robustly increases cooperation.  This suggests that it is plausible that the parameter in question may explain cooperation or an important component of cooperation.  Such results, however, are only relative to a particular implementation of ESEAs, and all models are false in many respects (Wimsatt, 1987).  Since we know that our models of ESEAs are false in their implementation of specific details of humans or other animals, it could be that some peculiar aspect of our implementation produced the apparently robust results.

This skeptical conclusion points to the importance of a research community exploring the problem with different models. If a variety of relatively independent models (i.e., models that make independent assumptions about one or more ESEA properties) yield the same results, then to the degree these models agree, we have discovered robust characteristics of the kinds of individuals we are interested in understanding and explaining.

Explanation

What role do ESEAs have in explaining phenomena of interest?  If we return to the evolution of cooperation and look at the various models that have been explored and analyzed since Axelrod and Hamilton (1981), one of their striking features is that they do not even purport to represent detailed human behavior and cognition.  In conversation, Douglas Allchin asked how models that do not represent the properties of humans could have any explanatory relevance to data on human behavior.  This is a good question, because the models we build of human and animal behavior are not only extremely simple compared with humans and other animals, but they also have features that humans and other animals do not have.  For example, in the ESEA I described earlier, reproduction was asexual and the implicit genetic relationship between p and q was a one-to-one gene-phenotype mapping.  Humans do not reproduce asexually, and genes are not related to human cognitive function in anything like a one-to-one mapping.  It would therefore appear that models such as these could have no explanatory relationship to data on human or animal cooperation and its evolution.

Explanations are typically viewed as providing an account of how or why phenomena of interest occur.  Explanations of cooperation would presumably provide an account of why cooperation evolved in humans and other animals and how it evolved.   In science, we are often interested in the mechanisms for the how and the why of phenomena (e.g., see Stuart Glennan’s work on causal mechanisms and explanation).  In what sense then can we have a mechanistic explanation for cooperation when at least some of the mechanistic components are not found in what we are trying to explain?

The short answer is that such models are not explanations in this sense.  They are instead plausibility explanations that function in the context of discovery.  Such models demonstrate that the observed behavior could have evolved under scenarios that are similar to, but much more complex than, the scenario explored with ESEAs.  Over time, better and more representative models may be found that generate similar behavior.  In a piecewise manner, we gradually approach explanatory models.

References

Aktipis, C. A. (2004). Know when to walk away: Contingent movement and the evolution of cooperation. Journal of Theoretical Biology, 231, 249–260.

Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211, 1390–1396.

Schank, J. C. (2008). The development of locomotor kinematics in neonatal rats: An agent-based modeling analysis in group and individual contexts. Journal of Theoretical Biology, 254, 826–842.

Wimsatt, W. C. (1987). False models as means to truer theories. In M. H. Nitecki & A. Hoffman (Eds.), Neutral Models in Biology (pp. 23–55). London: Oxford University Press.

Wimsatt, W. C. (1981). Robustness, reliability, and overdetermination. In M. B. Brewer & B. E. Collins (Eds.), Scientific Inquiry and the Social Sciences: A Volume in Honor of Donald T. Campbell (pp. 124–163). San Francisco: Jossey-Bass.

Wimsatt, W. C. (1997). Aggregativity: Reductive heuristics for finding emergence. Philosophy of Science, 64 (Supplement: Proceedings of the 1996 Biennial Meetings of the Philosophy of Science Association, Part II: Symposia Papers), S372–S384.
