by Jeff Schank
In its ordinary sense, a scaffold is a temporary structure used to support people and material in the construction or repair of a building or other physical structure. This ordinary sense has been applied metaphorically to educational settings, in which instructional scaffolding is a temporary educational support structure that provides resources, tasks, guidelines, and guidance in learning. As students learn, the scaffolding supporting learning is gradually removed as they develop their own learning strategies. In this article, I explore some of the ways in which the notion of scaffolding explicates how agent-based modeling builds insight and understanding into complex systems.
All Models Are False
When we build models in science, we may have one or more aims in mind. We may build a model to help explain a phenomenon. For example, if we believe that a phenomenon (e.g., flocking in birds) is produced by a causal mechanism involving the interactions of individual birds, then a model of that mechanism may reveal whether the proposed mechanism could cause the phenomenon in question and thus explain it. Flocking in birds is a good example because a few simple rules characterizing local causal interactions among birds can explain flocking (see the boids simulation). Such models, however, while possibly explanatory, are also false in many respects (i.e., they make assumptions that are false and unrealistic). So, if a model is false in different respects, how can it explain anything? That is, how can unrealistic models explain real phenomena? How can unrealistic models build insight and understanding into complex systems?
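A few local rules of the kind used in boids-style models can be sketched in a handful of lines. This is a minimal illustration, not the original boids implementation: the neighbor radius, time step, and weighting coefficients below are all hypothetical choices.

```python
import random

NEIGHBOR_RADIUS = 5.0  # assumed interaction range
STEP = 0.1             # assumed time step

def update_bird(bird, flock):
    """Return a bird's new (x, y, vx, vy) from three local rules."""
    px, py, vx, vy = bird
    neighbors = [b for b in flock if b != bird and
                 (b[0] - px) ** 2 + (b[1] - py) ** 2 < NEIGHBOR_RADIUS ** 2]
    if neighbors:
        n = len(neighbors)
        cx = sum(b[0] for b in neighbors) / n   # cohesion: steer toward local center
        cy = sum(b[1] for b in neighbors) / n
        ax = sum(b[2] for b in neighbors) / n   # alignment: match mean heading
        ay = sum(b[3] for b in neighbors) / n
        sx = sum(px - b[0] for b in neighbors)  # separation: avoid crowding
        sy = sum(py - b[1] for b in neighbors)
        vx += 0.01 * (cx - px) + 0.05 * (ax - vx) + 0.05 * sx
        vy += 0.01 * (cy - py) + 0.05 * (ay - vy) + 0.05 * sy
    return (px + vx * STEP, py + vy * STEP, vx, vy)

random.seed(1)
flock = [(random.uniform(0, 10), random.uniform(0, 10),
          random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(100):
    # Synchronous update: every bird reacts to the same snapshot of the flock.
    flock = [update_bird(b, flock) for b in flock]
```

Each rule is false as a description of real bird cognition, yet together they can generate flock-like motion, which is exactly the explanatory puzzle raised above.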
It could be argued that in the context of prediction, the truth or falsity of a model does not matter. The only thing that matters is whether a model generates good predictions. This is an instrumentalist view of models: models are good insofar as they generate true predictions about the system in question. It does not matter whether the model is realistic or makes true assumptions about the causal mechanism operating in the system. All that matters are the predictions that can be cranked out of it. The problem is that most scientific modelers are not instrumentalists. They attempt to represent at least some important aspects of the systems they model (e.g., see Grimm et al., 2005). This is especially true for agent-based modeling, with its focus on modeling individuals (at one or more levels) and their interactions (e.g., see Schank, 2000; Grimm et al., 2005). It is hard, if not impossible, to use agent-based models and not be a scientific realist in some sense. If agent-based modelers are realists in the sense of Grimm et al. (2005), then we do have to be concerned about all the false assumptions we make.
A starting point for resolving the problem of inescapably false models is the recognition that we use models, especially ABMs, to gain insight and understanding into phenomena. It is in this context of discovery that the idea of scaffolding can help explicate how agent-based modeling provides insight and understanding into phenomena, and ultimately explanation and prediction, even though the models we build are false in many respects.
False Models as Means to Truer Theories
This heading was the title of a paper written by William Wimsatt (1986), which tackled the problem of how false models could lead to truer models, theories, and explanations of phenomena. His examples illustrate that models have to be viewed in a broader context of a model building process in which models are analyzed, tested, compared, discussed, revised, discarded, and rebuilt. By truer Wimsatt meant that the model building process discovers (1) models better supported by data, (2) properties of causal mechanisms operating in the system, (3) which false assumptions have small effects, and (4) which models are robust to different false assumptions.
Wimsatt’s view of template matching is especially important for agent-based modeling and the idea that ABMs are scaffolds to insight and understanding. We are often in situations where we lack data on a system (especially at multiple levels) and have at best an incomplete understanding of the causal mechanisms operating in it. In such cases, a good strategy is to build a simple model with some of the mechanisms and properties of the system, analyze the model, and compare it to what data we have or to our expectations about how the system should behave. The model will likely fail to match the behavior of the system in at least some important respects; nevertheless, it can serve as a template for either building new models or revising the model itself.
Building or revising models can happen in at least two ways. First, by examining the original model, its behavior may help us localize the assumptions that should be changed. Second, the original model provides a template against which to compare new or revised models. A new or revised model may not match the data well, but by comparing it to its predecessor, we may see whether it behaves better in one or more respects, indicating that we are making progress in our model building and therefore gaining insight and understanding of the system.
Model-Building as Scaffolds to Understanding and Insight
Understanding is an interesting term in ordinary language. To have an understanding of a phenomenon can mean having an explanation for how and/or why it occurs. It can also mean that understanding may lead to better prediction or manipulation of phenomena. Understanding also comes in degrees. We can have a partial understanding of a phenomenon. That is, we may have evidence for some factors playing a causal role in the generation of the phenomenon in question, but we do not know how those factors dynamically interact with others to produce it. For example, there is growing evidence that the hormone oxytocin plays a role in pair bonding in some species of mammals (Carter et al., 1995) and in generosity in humans (Zak, Stanton, and Ahmadi, 2007). We even have growing knowledge of oxytocin receptors and their distribution in brains, but we are far from having a dynamic and detailed understanding of how the system works to facilitate pair bonding or generosity. Indeed, if we look at the history of oxytocin research in these areas, we first find research reporting a relationship between oxytocin and pair bonding in prairie voles (Carter et al., 1995), which provided the initial scaffolding for further studies examining other species and the neurophysiology and neuroanatomy of the oxytocin system (Bales and Carter, 2007). This is analogous to the way that modeling, and especially agent-based modeling, provides a scaffolding for understanding complex phenomena.
Consider cooperative or altruistic behavior. Such behavior is especially interesting when cooperation or altruism has direct costs to the cooperator or altruist. Social insects are paradigms of cooperative and altruistic behavior: workers perform specific tasks that benefit the colony but at the cost of forgoing individual reproduction. How could altruistic behavior evolve if individuals that are altruistic forgo reproduction? Surely, individual worker bees that cheat and produce their own offspring would do well against cooperators, since they directly produce at least a few offspring while cooperative workers produce none. This is an example of the problem of understanding how and why cooperation evolves, and why cheaters do not always succeed.
There have been several models proposed for understanding the evolution of cooperation. For example, Hamilton (1966) proposed kin selection as an explanation of cooperation in social insects and other animal social systems in which animals forgo some or all of their reproductive fitness. His key insight was that forgoing reproduction does not mean a total loss of fitness. Individuals share genes and phenotypic characters in common with their relatives. Thus, by helping relatives, the indirect fitness effects of promoting the survival and reproduction of relatives can outweigh the direct effects of reproduction. The problem is that kin selection may not be a good explanation for the evolution of cooperation in all social systems where individuals forgo some or all reproduction. For example, there are a number of social insect species in which colonies consist of a number of unrelated queens and workers (Keller, 1995). Unrelated workers feed the offspring of unrelated queens and forgo reproduction. How does cooperation evolve when kin selection does not appear to apply?
The evolution of cooperative behavior is a class of phenomena for which we lack anything close to complete understanding. Nevertheless, modeling approaches are proving invaluable in providing insight and understanding into the evolution of cooperation. Kin selection models have provided insight and understanding, though not without problems (Keller, 1995), and game theory is providing another source of insight. Game theory has provided a small set of initially simple models for investigating how and why cooperation occurs in many social contexts (e.g., Maynard Smith and Price, 1973; Axelrod, 1984). One paradigm for investigating cooperative behavior is the prisoner’s dilemma (also see SEP). In this game, there are two players, and each player has two strategies: cooperate or defect. Thus, there are four possible outcomes to the game. If both decide to cooperate, then each receives the cooperative reward payoff, R. If, however, the first player cooperates and the second player defects, then the first player receives the sucker’s payoff, S, and the second player receives the temptation-to-defect payoff, T. The payoffs are reversed if the first player defects and the second player cooperates. If both players defect, then they both receive the mutual punishment payoff, P. It is assumed that T > R > P > S.
In its normal form, the prisoner’s dilemma is a 2 × 2 payoff matrix. If we assume that players seek to maximize their expected payoff and play only once, then for any payoff values that conform to the assumed constraints on the payoffs, there is only one equilibrium solution: both players defect. This is, however, a Pareto-suboptimal solution. That is, both players could do better if they both cooperated.
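The equilibrium claim can be checked by brute-force enumeration. A minimal sketch, using the illustrative payoff values 5, 3, 1, 0 (any values satisfying T > R > P > S would do):

```python
# Illustrative payoffs satisfying T > R > P > S.
T, R, P, S = 5, 3, 1, 0

# payoff[(move_1, move_2)] -> (payoff_1, payoff_2); 'C' = cooperate, 'D' = defect.
payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def is_equilibrium(a, b):
    """Neither player gains by unilaterally switching strategy."""
    ua, ub = payoff[(a, b)]
    return (all(payoff[(x, b)][0] <= ua for x in 'CD') and
            all(payoff[(a, y)][1] <= ub for y in 'CD'))

equilibria = [(a, b) for a in 'CD' for b in 'CD' if is_equilibrium(a, b)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the only equilibrium
print(payoff[('C', 'C')])  # (3, 3): both would earn more than the (1, 1) of (D, D)
```

Mutual cooperation Pareto-dominates the equilibrium, which is precisely what makes the game a dilemma.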
The prisoner’s dilemma game as just described does not explain cooperation. In this sense, it is clearly a false model of cooperation. It does, however, provide insight into cooperation, and into why understanding cooperation is not going to be easy. It also formalizes and makes precise a context in which individuals can engage in cooperative and non-cooperative strategies. Perhaps most importantly, it provides a template against which to evaluate new and revised models of cooperative and non-cooperative behavior.
One way to extend the prisoner’s dilemma game and make it more realistic is to allow players to play each other repeatedly. Robert Axelrod (1984) introduced simple agents into the prisoner’s dilemma game. In his simulated tournament, all agents played n rounds, and agents could remember previous encounters and use that memory to choose whether to cooperate or defect. For the first tournament, Axelrod invited colleagues to submit strategies. He found that strategies did best when they were nice (i.e., they do not defect until an opponent defects), retaliatory (i.e., at some point they defect in response to an opponent’s defections), and forgiving (i.e., they will at some point return to cooperation after an opponent defects). In particular, the strategy tit for tat (TFT) did best. TFT always cooperates on the first round with an opponent and then copies its opponent’s previous move on all subsequent rounds.
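A repeated game of this kind is easy to simulate. The sketch below is in the spirit of Axelrod’s tournament, not a reconstruction of it: the payoff values (5, 3, 1, 0), the 200-round length, and the always-defect opponent are illustrative assumptions.

```python
# Illustrative payoffs satisfying T > R > P > S.
T, R, P, S = 5, 3, 1, 0
PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first round, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Play the iterated game; return each player's total payoff."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): TFT loses only the first round
```

Against itself, TFT locks into cooperation; against a pure defector, it is exploited once and then retaliates, illustrating the nice/retaliatory/forgiving profile described above.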
The introduction of agents with minimal properties of memory and the ability to play repeatedly reveals that prisoner’s dilemma situations may provide insight and understanding into the evolution of cooperation. In this sense, performing simulations of agents with memory, using different strategies based on that memory, is the erection of new scaffolding for building insight and understanding into the evolution of cooperation. Of course, the scaffolding may be faulty and our apparent understanding and insight may collapse, or the scaffolding may allow us to build insight and understanding only in a limited respect. Nevertheless, it is a starting point against which other work can be compared.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Bales, K. L. & Carter, C. S. (2007). Neuropeptides and the development of social behaviors: implications for adolescent psychopathology. In Adolescent Psychopathology and the Developing Brain: Integrating Brain and Prevention Science. Oxford University Press, pp. 173-195.
Carter, C. S. et al. (1995). Physiological substrates of mammalian monogamy: the prairie vole model. Neurosci. Biobehav. Rev. 19: 303–314.
Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H., Weiner, J., Wiegand, T. & DeAngelis, D. L. (2005). Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology. Science, 310: 987-991.
Keller, L. (1995). Social life: the paradox of multiple-queen colonies. Trends in Ecology & Evolution, 10: 355-360.
Maynard Smith, J. & Price, G. R. (1973). The logic of animal conflict. Nature, 246: 15-18.
Nowak, M. A. & May, R. M. (1992). Evolutionary games and spatial chaos. Nature, 359: 826-829.
Zak P. J., Stanton, A. A. & Ahmadi, S. (2007). Oxytocin Increases Generosity in Humans. PLoS ONE, 2: e1128.
A chapter on this topic will be appearing in the Vienna Series in Theoretical Biology, MIT Press.