Information

Is there a random walk theory that can account for situations with more than two choices?

In the article "Two-stage Dynamic Signal Detection: A Theory of Choice, Decision Time, and Confidence" from 2010 by Pleskac and Busemeyer, a random walk model is presented for situations where a discrete choice is made (that is, signal present/not present, yes/no, 1/0 et cetera). Here, if there is no time pressure involved, a threshold value is set which has to be reached before an answer is given. Contrary, if there is a time pressure involved, the value at the last time point is used to decide which answer is given, even if the threshold point hasn't been reached at that time.

I wonder whether there are any extensions of this model which can incorporate situations with more than two choices (from 3 alternatives up to a fully continuous choice space). I guess you could just incorporate more dimensions (the model referred to above is two-dimensional), but if the answers are non-independent (for example, when using a rating scale), I guess the random walk function would have to be fairly complicated. That is, if strong evidence is accumulated for number 3, how much evidence should be added to 4 and 2? Should evidence be subtracted from 1 and 5?
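
For concreteness, here is a minimal sketch (in Python) of the kind of two-boundary random walk described above, including the time-pressure rule where the sign of the accumulated evidence at the deadline determines the answer. This is not Pleskac and Busemeyer's actual implementation; the drift, noise, and threshold values are illustrative assumptions.

```python
import random

def random_walk_choice(drift=0.1, noise=1.0, threshold=3.0, deadline=None):
    """Accumulate noisy evidence until a threshold is crossed.

    If a deadline (number of steps) is given and no threshold has been
    reached by then, the sign of the current evidence decides the answer,
    mirroring the time-pressure rule described above.
    Returns (choice, steps), with choice = "yes" or "no".
    """
    evidence = 0.0
    step = 0
    while True:
        step += 1
        evidence += drift + random.gauss(0, noise)   # one noisy evidence sample
        if evidence >= threshold:
            return "yes", step
        if evidence <= -threshold:
            return "no", step
        if deadline is not None and step >= deadline:
            return ("yes" if evidence >= 0 else "no"), step

# Free-response vs. time-pressured decisions
print(random_walk_choice())              # wait for a boundary
print(random_walk_choice(deadline=10))   # answer forced at step 10
```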


Instead of having a single integrator with two bounds for two choices (symmetric random walk model), you can have many competing integrators, each with its own bound (race model). For example, see Fig. 2 of Gold and Shadlen (2007) and references therein.
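
A minimal sketch of the race idea, with one accumulator per alternative racing to its own bound; the first to reach the bound determines the choice. The drift rates, noise level, and bound are illustrative assumptions, not parameters from Gold and Shadlen (2007).

```python
import random

def race_model(drifts, noise=1.0, bound=5.0, max_steps=10_000):
    """Independent accumulators race to a common bound.

    drifts: one mean evidence rate per alternative.
    Returns (index of the winning alternative, number of steps taken).
    """
    totals = [0.0] * len(drifts)
    for step in range(1, max_steps + 1):
        for i, d in enumerate(drifts):
            totals[i] += d + random.gauss(0, noise)
        winners = [i for i, t in enumerate(totals) if t >= bound]
        if winners:
            # If several accumulators cross on the same step, take the largest total.
            return max(winners, key=lambda i: totals[i]), step
    return max(range(len(totals)), key=lambda i: totals[i]), max_steps

# Three alternatives, the second having the strongest evidence.
print(race_model([0.05, 0.20, 0.05]))
```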

As for the continuous case, it is important to understand that the limit of many discrete choices can be very different from a truly continuous choice. For such a limit to make sense, there should be a notion of similarity between the choices, and I don't think those race models would generalize easily. High-dimensional continuous attractor dynamics is more likely to support such a theory.


Diederich & Busemeyer (2003) presented a diffusion model for three choice alternatives (p. 314). The paper is a tutorial for calculating diffusion models with (discrete) matrix methods. The extension to three choice alternatives is reached by defining a two-dimensional diffusion process on a triangular plane (state space).

Recently, Wollschläger & Diederich (2012) presented the 2N-ary choice tree model which models multi-alternative choices by random walks on decision trees.

References
Diederich, A., & Busemeyer, J. R. (2003). Simple matrix methods for analyzing diffusion models of choice probability, choice response time, and simple response time. Journal of Mathematical Psychology, 47(3), 304-322. doi:10.1016/S0022-2496(03)00003-8

Wollschläger, L. M., & Diederich, A. (2012). The 2N-ary choice tree model for N-alternative preferential choice. Frontiers in Psychology, 3:189. doi:10.3389/fpsyg.2012.00189


Bogacz et al. (2006) provide the most comprehensive overview of models in this domain. This includes comparisons of the Drift Diffusion Model (Ratcliff, 1978), Ornstein-Uhlenbeck (O-U) Model (e.g. Busemeyer & Townsend, 1993; "Decision Field Theory"), race models without inhibition (e.g. Vickers, 1970), and race models with inhibition (e.g. Usher & McClelland, 2001; "leaky competing accumulator model") and others.

While diffusion models are limited to 2 alternatives (e.g. DDM, DFT), race models are not. (Note that diffusion models are essentially the same as random walk models, except that random walks involve discrete steps, while diffusion models are continuous). The race models with inhibition reviewed by the authors are computationally reducible to DDM, while race models without inhibition are different, and generally not preferable.
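
As a rough illustration of the "race with inhibition" idea, here is a sketch in the spirit of a leaky competing accumulator: each accumulator is driven by its input, decays over time (leak), and is suppressed by the others (lateral inhibition), with activations clipped at zero. The parameter values are illustrative assumptions, not those of Usher and McClelland (2001).

```python
import random

def leaky_competing_accumulators(inputs, leak=0.1, inhibition=0.2,
                                 noise=0.3, bound=1.0, dt=0.1, max_steps=5000):
    """Accumulators with leak and mutual inhibition race to a bound."""
    x = [0.0] * len(inputs)
    for step in range(1, max_steps + 1):
        total = sum(x)
        new_x = []
        for xi, inp in zip(x, inputs):
            others = total - xi                       # summed activity of competitors
            dx = (inp - leak * xi - inhibition * others) * dt \
                 + random.gauss(0, noise) * dt ** 0.5
            new_x.append(max(0.0, xi + dx))           # activations cannot go negative
        x = new_x
        for i, xi in enumerate(x):
            if xi >= bound:
                return i, step
    return max(range(len(x)), key=lambda i: x[i]), max_steps

# Three alternatives, the middle one receiving the strongest input.
print(leaky_competing_accumulators([0.8, 1.0, 0.8]))
```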

Beck et al. (2008) also provide a model that accommodates continuous choice, rather than a limited number of alternatives. Its relationship to other models is unclear.

So yes, there are many similar models that account for more than 2 alternatives. They are not technically random walk models, because a random walk only allows for 2 bounds. But they are computationally equivalent.

Beck, J. M., Ma, W. J., Kiani, R., Hanks, T., Churchland, A. K., Roitman, J.,… & Pouget, A. (2008). Probabilistic population codes for Bayesian decision making. Neuron, 60(6), 1142-1152.

Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychological review, 113(4), 700.

Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychological review, 100(3), 432.

Ratcliff, R. (1978). A theory of memory retrieval. Psychological review, 85(2), 59.

Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: the leaky, competing accumulator model. Psychological review, 108(3), 550.

Vickers, D. (1970). Evidence for an accumulator model of psychophysical discrimination. Ergonomics, 13(1), 37-58.


Summary

Random forest is a great algorithm to train early in the model development process, to see how it performs. Its simplicity makes building a “bad” random forest a tough proposition.

The algorithm is also a great choice for anyone who needs to develop a model quickly. On top of that, it provides a pretty good indicator of the importance it assigns to your features.

Random forests are also very hard to beat performance-wise. Of course, you can probably always find a model that performs better, such as a neural network, but these usually take more time to develop. Random forests, by contrast, can handle a lot of different feature types, like binary, categorical and numerical.

Overall, random forest is a (mostly) fast, simple and flexible tool, but not without some limitations.



Experiments, sample space, events, and equally likely probabilities

The fundamental ingredient of probability theory is an experiment that can be repeated, at least hypothetically, under essentially identical conditions and that may lead to different outcomes on different trials. The set of all possible outcomes of an experiment is called a “sample space.” The experiment of tossing a coin once results in a sample space with two possible outcomes, “heads” and “tails.” Tossing two dice has a sample space with 36 possible outcomes, each of which can be identified with an ordered pair (i, j), where i and j assume one of the values 1, 2, 3, 4, 5, 6 and denote the faces showing on the individual dice. It is important to think of the dice as identifiable (say by a difference in colour), so that the outcome (1, 2) is different from (2, 1). An “event” is a well-defined subset of the sample space. For example, the event “the sum of the faces showing on the two dice equals six” consists of the five outcomes (1, 5), (2, 4), (3, 3), (4, 2), and (5, 1).

A third example is to draw n balls from an urn containing balls of various colours. A generic outcome to this experiment is an n-tuple, where the ith entry specifies the colour of the ball obtained on the ith draw (i = 1, 2,…, n). In spite of the simplicity of this experiment, a thorough understanding gives the theoretical basis for opinion polls and sample surveys. For example, individuals in a population favouring a particular candidate in an election may be identified with balls of a particular colour, those favouring a different candidate may be identified with a different colour, and so on. Probability theory provides the basis for learning about the contents of the urn from the sample of balls drawn from the urn; one application is to learn about the electoral preferences of a population on the basis of a sample drawn from that population.

Another application of simple urn models is to clinical trials designed to determine whether a new treatment for a disease, a new drug, or a new surgical procedure is better than a standard treatment. In the simple case in which a treatment can be regarded as either a success or a failure, the goal of the clinical trial is to discover whether the new treatment more frequently leads to success than does the standard treatment. Patients with the disease can be identified with balls in an urn. The red balls are those patients who are cured by the new treatment, and the black balls are those not cured. Usually there is a control group, who receive the standard treatment. They are represented by a second urn with a possibly different fraction of red balls. The goal of the experiment of drawing some number of balls from each urn is to discover on the basis of the sample which urn has the larger fraction of red balls. A variation of this idea can be used to test the efficacy of a new vaccine. Perhaps the largest and most famous example was the test of the Salk vaccine for poliomyelitis conducted in 1954. It was organized by the U.S. Public Health Service and involved almost two million children. Its success has led to the almost complete elimination of polio as a health problem in the industrialized parts of the world. Strictly speaking, these applications are problems of statistics, for which the foundations are provided by probability theory.
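
As a small illustration of the two-urn comparison described above, here is a sketch that simulates drawing a sample from each urn and comparing the observed fractions of red balls. The cure rates and sample size are made-up values for illustration only.

```python
import random

def draw_sample(fraction_red, n):
    """Count how many of n balls drawn from an urn are red."""
    return sum(random.random() < fraction_red for _ in range(n))

# Hypothetical fractions of red balls (cured patients) in each urn.
new_treatment_urn, control_urn = 0.60, 0.45
n = 200  # balls (patients) drawn from each urn

reds_new = draw_sample(new_treatment_urn, n)
reds_control = draw_sample(control_urn, n)
print(f"New treatment: {reds_new}/{n} cured; control: {reds_control}/{n} cured")
```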

In contrast to the experiments described above, many experiments have infinitely many possible outcomes. For example, one can toss a coin until “heads” appears for the first time. The number of possible tosses is n = 1, 2,…. Another example is to twirl a spinner. For an idealized spinner made from a straight line segment having no width and pivoted at its centre, the set of possible outcomes is the set of all angles that the final position of the spinner makes with some fixed direction, equivalently all real numbers in [0, 2π). Many measurements in the natural and social sciences, such as volume, voltage, temperature, reaction time, marginal income, and so on, are made on continuous scales and at least in theory involve infinitely many possible values. If the repeated measurements on different subjects or at different times on the same subject can lead to different outcomes, probability theory is a possible tool to study this variability.

Because of their comparative simplicity, experiments with finite sample spaces are discussed first. In the early development of probability theory, mathematicians considered only those experiments for which it seemed reasonable, based on considerations of symmetry, to suppose that all outcomes of the experiment were “equally likely.” Then in a large number of trials all outcomes should occur with approximately the same frequency. The probability of an event is defined to be the ratio of the number of cases favourable to the event—i.e., the number of outcomes in the subset of the sample space defining the event—to the total number of cases. Thus, the 36 possible outcomes in the throw of two dice are assumed equally likely, and the probability of obtaining “six” is the number of favourable cases, 5, divided by 36, or 5/36.
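
The 5/36 figure can be checked directly by enumerating the 36 equally likely ordered outcomes; a short sketch:

```python
from fractions import Fraction

# All 36 ordered outcomes of throwing two distinguishable dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# Favourable cases: the faces sum to six.
favourable = [(i, j) for (i, j) in outcomes if i + j == 6]
print(favourable)                                # [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
print(Fraction(len(favourable), len(outcomes)))  # 5/36
```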

Now suppose that a coin is tossed n times, and consider the probability of the event “heads does not occur” in the n tosses. An outcome of the experiment is an n-tuple, the kth entry of which identifies the result of the kth toss. Since there are two possible outcomes for each toss, the number of elements in the sample space is 2ⁿ. Of these, only one outcome corresponds to having no heads, so the required probability is 1/2ⁿ.

It is only slightly more difficult to determine the probability of “at most one head.” In addition to the single case in which no head occurs, there are n cases in which exactly one head occurs, because it can occur on the first, second,…, or nth toss. Hence, there are n + 1 cases favourable to obtaining at most one head, and the desired probability is (n + 1)/2ⁿ.
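
The counting argument for "at most one head" can likewise be verified by brute-force enumeration of all 2ⁿ toss sequences; a short sketch:

```python
from itertools import product
from fractions import Fraction

def prob_at_most_one_head(n):
    """Enumerate all 2**n toss sequences and count those with 0 or 1 heads."""
    outcomes = list(product("HT", repeat=n))
    favourable = [seq for seq in outcomes if seq.count("H") <= 1]
    return Fraction(len(favourable), len(outcomes))

for n in range(1, 6):
    assert prob_at_most_one_head(n) == Fraction(n + 1, 2 ** n)
    print(n, prob_at_most_one_head(n))
```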


Rational Choice Theory: Cultural Concerns

2.4 Some Common Misunderstandings

Rational choice theory is often criticized, sometimes with good arguments, and sometimes with bad. Although some of the bad arguments may apply to bad versions of the theory, critics ought to address the best versions. The most common misunderstanding is that the theory assumes agents to have selfish motivations. Rationality is consistent with selfishness, but also with altruism, as when someone is trying to select the charity where a donation can do the most good.

Rational choice theory is also sometimes confused with the principle of methodological individualism. True, the theory presupposes that principle. Talk about beliefs and desires of supra-individual entities, such as a class or a nation, is in general meaningless. The converse does not hold, however. Some of the alternatives to rational choice theory also presuppose methodological individualism.

Furthermore, it is sometimes asserted that the theory is atomistic and that it ignores social interactions. Almost the exact opposite is true. Game theory is superbly equipped to handle three important interdependencies: (a) the welfare of each depends on the decisions of all; (b) the welfare of each depends on the welfare of all; and (c) the decision of each depends on the decisions of all. It is also sometimes asserted that the theory assumes human beings to be like powerful computers, which can instantly work out the most complex ramifications of all conceivable options. In principle, rational choice theory can incorporate cognitive constraints on a par with physical or financial constraints. In practice, the critics often have a valid point.

Finally, it is sometimes asserted that the theory is culturally biased, reflecting (and perhaps describing) modern, Western societies or their subcultures. However, the minimal model set out above is transhistorically and transculturally valid. This statement does not imply that people always and everywhere act rationally, nor that they have the same desires and beliefs. It means that the normative ideal of rationality embodied in the model is one that is explicitly or implicitly shared by all human beings. People are instrumentally rational because they adopt the principle of least effort. ‘Do not cross the river to fetch water,’ says a Norwegian proverb. Also, as people know that acting on false beliefs undermines the pursuit of their ends, they want to use cognitive procedures that reduce the risk of getting it wrong when getting it right matters. That they often fail to adopt the right procedures does not undermine the normative ideal.


Conclusion

Random forests are a personal favorite of mine. Coming from the world of finance and investments, the holy grail was always to build a bunch of uncorrelated models, each with a positive expected return, and then put them together in a portfolio to earn massive alpha (alpha = market beating returns). Much easier said than done!

Random forest is the data science equivalent of that. Let’s review one last time. What’s a random forest classifier?

The random forest is a classification algorithm consisting of many decision trees. It uses bagging and feature randomness when building each individual tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree.

What do we need in order for our random forest to make accurate class predictions?

  1. We need features that have at least some predictive power. After all, if we put garbage in then we will get garbage out.
  2. The trees of the forest and more importantly their predictions need to be uncorrelated (or at least have low correlations with each other). While the algorithm itself via feature randomness tries to engineer these low correlations for us, the features we select and the hyper-parameters we choose will impact the ultimate correlations as well.
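
As a concrete illustration of these two points, here is a minimal scikit-learn sketch (the dataset and hyper-parameter values are arbitrary): max_features controls the feature randomness that helps de-correlate the trees, and feature_importances_ reports how much predictive power the model assigns to each feature.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A synthetic dataset with a few informative features and some noise features.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,     # number of trees voting by committee
    max_features="sqrt",  # feature randomness: each split considers a random subset
    random_state=0,
)
forest.fit(X_train, y_train)

print("Test accuracy:", forest.score(X_test, y_test))
print("Feature importances:", forest.feature_importances_.round(3))
```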

Thanks for reading. I hope you learned as much from reading this as I did from writing it. Cheers!



Discrete Distribution

A discrete distribution is a distribution of data in statistics that has discrete values. Discrete values are countable values, such as the non-negative integers 1, 10, 15, etc.

Understanding Discrete Distributions

The two types of distributions are discrete distributions and continuous distributions.

A discrete distribution, as mentioned earlier, is a distribution of values that are countable whole numbers. On the other hand, a continuous distribution includes values with infinite decimal places. An example of a value on a continuous distribution would be “pi.” Pi is a number with infinite decimal places (3.14159…).

Both distributions relate to probability distributions, which are the foundation of statistical analysis and probability theory.

A probability distribution is a statistical function that is used to show all the possible values and likelihoods of a random variable in a specific range. The range would be bound by maximum and minimum values, but the actual value would depend on numerous factors. There are descriptive statistics used to explain where the expected value may end up. Some of these are:

  • Mean (average)
  • Median
  • Mode
  • Standard deviation
  • Skewness
  • Kurtosis

Discrete distributions also arise in Monte Carlo simulations. A Monte Carlo simulation is a statistical modeling method that identifies the probabilities of different outcomes by running a very large number of simulations. From Monte Carlo simulations, outcomes with discrete values will produce a discrete distribution for analysis.
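
A tiny Monte Carlo sketch of this idea, with made-up arrival behaviour: simulating many hours of customer arrivals and tallying the counts produces a discrete distribution of outcomes.

```python
import random
from collections import Counter

def simulate_hour(p_arrival=0.2, minutes=60):
    """Count arrivals in one simulated hour (at most one arrival per minute)."""
    return sum(random.random() < p_arrival for _ in range(minutes))

# Run many simulated hours and tally how often each count occurred.
n_hours = 10_000
counts = Counter(simulate_hour() for _ in range(n_hours))
for people in sorted(counts):
    print(f"{people:2d} people: {counts[people] / n_hours:.3f}")
```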

Discrete Distribution Example

Types of discrete probability distributions include the Bernoulli, binomial, multinomial, and Poisson distributions.

Consider an example where you are counting the number of people walking into a store in any given hour. The values would need to be countable, finite, non-negative integers. It would not be possible to have 0.5 people walk into a store, and it would not be possible to have a negative number of people walk into a store. Therefore, the distribution of the values, when represented on a distribution plot, would be discrete.

Observing the above discrete distribution of collected data points, we can see that there were five hours where between one and five people walked into the store. In addition, there were ten hours where between five and nine people walked into the store and so on.

The probability distribution above gives a visual representation of the probability that a certain number of people would walk into the store at any given hour. Without doing any quantitative analysis, we can observe that there is a high likelihood that between 9 and 17 people will walk into the store at any given hour.

Continuous Distribution Example

Continuous probability distributions are characterized by having an infinite and uncountable range of possible values. The probabilities of continuous random variables are defined by the area underneath the curve of the probability density function.

The probability density function (PDF) describes the relative likelihood of a continuous random variable taking a particular value. Although the probability of the variable taking any single exact value is 0 (since there are infinitely many possible values), the area under the PDF between two values gives the probability that the variable falls between them.

Consider an example where you wish to calculate the distribution of the height of a certain population. You can gather a sample and measure their heights. However, you will not reach an exact height for any of the measured individuals.

For calculating the distribution of heights, you can recognize that the probability of an individual being exactly 180cm is zero. That is, the probability of measuring an individual having a height of exactly 180cm with infinite precision is zero. However, the probability that an individual has a height that is greater than 180cm can be measured.

In addition, you can calculate the probability that an individual has a height that is lower than 180cm. Therefore, you can use the inferred probabilities to calculate a value for a range, say between 179.9cm and 180.1cm.

Observing the continuous distribution, it is clear that the mean is 170 cm; however, the range of values that can be taken is infinite. Therefore, calculating the probability for any given value requires working with a range of values, as shown above.
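
The range calculation described above can be sketched in a few lines, assuming (purely for illustration) that heights are normally distributed with a mean of 170 cm and a standard deviation of 10 cm; the probability of a range is the difference of the cumulative distribution function at its endpoints.

```python
from statistics import NormalDist

heights = NormalDist(mu=170, sigma=10)   # illustrative mean and spread

# Probability of exceeding 180 cm, and of falling in a narrow range around 180 cm.
p_above_180 = 1 - heights.cdf(180)
p_near_180 = heights.cdf(180.1) - heights.cdf(179.9)

print(f"P(height > 180 cm)              = {p_above_180:.4f}")
print(f"P(179.9 cm < height < 180.1 cm) = {p_near_180:.5f}")
```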

More Resources

To keep learning and developing your knowledge base, please explore the additional relevant resources below:

  • Central Limit Theorem
  • Poisson Distribution
  • Cumulative Frequency Distribution
  • Weighted Mean



Simple Random Sampling: Definition and Examples

Definition: Simple random sampling is defined as a sampling technique where every item in the population has an equal chance of being selected in the sample. Here the selection of items depends entirely on luck or probability, and therefore this sampling technique is also sometimes known as a method of chances.

Simple random sampling is a fundamental sampling method and can easily be a component of a more complex sampling method. The main attribute of this sampling method is that every sample has the same probability of being chosen.

The sample size in this sampling method should ideally be more than a few hundred so that simple random sampling can be applied appropriately. This method is theoretically simple to understand but can be difficult to implement in practice. Working with a large sample size isn’t an easy task, and it can sometimes be a challenge to find a realistic sampling frame.

Simple random sampling methods

Researchers follow these methods to select a simple random sample:

  1. They prepare a list of all the population members initially, and then each member is marked with a specific number (for example, if there are N members, they are numbered from 1 to N).
  2. From this population, researchers choose random samples in one of two ways: random number tables or random number generator software. Researchers prefer random number generator software, as no human interference is necessary to generate samples.

Two approaches aim to minimize any biases in the process of simple random sampling:

Using the lottery method is one of the oldest ways and is a mechanical example of random sampling. In this method, the researcher gives each member of the population a number. Researchers draw numbers from the box randomly to choose samples.

The use of random numbers is an alternative method that also involves numbering the population. The use of a number table similar to the one below can help with this sampling technique.

Simple random sampling formula

Consider a hospital that has 1000 staff members and needs to allocate a night shift to 100 of them. All the names are put in a bucket to be randomly selected. Since each person has an equal chance of being selected, and since we know the population size (N = 1000) and sample size (n = 100), the probability that any one member is selected is n/N = 100/1000 = 0.1, or 10%.

Example of simple random sampling

Follow these steps to extract a simple random sample of 100 employees out of 500.

  1. Make a list of all the employees working in the organization. (As mentioned above, there are 500 employees in the organization, so the list must contain 500 names.)
  2. Assign a sequential number to each employee (1,2,3…n). This is your sampling frame (the list from which you draw your simple random sample).
  3. Figure out what your sample size is going to be. (In this case, the sample size is 100.)
  4. Use a random number generator to select the sample, using your sampling frame (population size) from Step 2 and your sample size from Step 3. For example, if your sample size is 100 and your population is 500, generate 100 random numbers between 1 and 500.
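
Here is a minimal sketch of steps 1 through 4, using a standard random number generator in place of a random number table (the employee names are placeholders):

```python
import random

# Steps 1-2: the sampling frame, with a sequential number for each employee.
employees = [f"Employee {i}" for i in range(1, 501)]   # placeholder names, numbered 1..500

# Step 3: the sample size.
sample_size = 100

# Step 4: draw 100 distinct random numbers between 1 and 500 and take those employees.
random.seed(42)                                        # optional: makes the draw reproducible
selected_numbers = random.sample(range(1, 501), k=sample_size)
sample = [employees[i - 1] for i in selected_numbers]
print(sample[:5])
```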

Simple random sampling in research

Today’s market research projects are much larger and involve an indefinite number of items. It is practically impossible to study the thought process of every member of the population and derive inferences from the study.

If, as a researcher, you want to save time and money, simple random sampling is one of the best probability sampling methods you can use. Getting data from a sample is more advisable and practical.

Whether to use a census or a sample depends on several factors, such as the type of census, the degree of homogeneity/heterogeneity, costs, time, feasibility to study, the degree of accuracy needed, etc.


Phenomenological Approach

From its beginnings there has been a series of critical objections against the phenomenology of religion.

The phenomenological approach of Eliade in particular is the target of Wagner's (1986) criticism; Wagner objects that Eliade's concept of the homo religiosus is guided by a notion of ‘natural religion.’ He argues that it presupposes some unwarranted knowledge about the religious situation in what Eliade considers to be ‘archaic cultures.’

With respect to religion, phenomenology of religion takes a decidedly substantialist position (Luckmann 1983). Moreover, critics charge that this substantialism is based on nonempirical, extraphenomenological, and theological assumptions, and that such assumptions and intentions enter into the analyses of classic representatives of the phenomenology of religion such as Kristensen, Eliade, or Van der Leeuw.

To many critics, this is due to the fact that methods are rarely made explicit. Despite the reference to phenomenological methods, the basic presuppositions often seem arbitrary, and theoretical reflections are criticized for falling short of the diligent collection and classification of data.

Since phenomenology provides the basic methodology for phenomenology of religion, it is particularly consequential that phenomenology of religion lost contact with the developments in philosophical phenomenology from about the 1950s. As a result, phenomenology of religion is rarely considered of importance in the analysis of religion within the tradition of phenomenological philosophy (Guerrière 1990).

The phenomenology of religion has been criticized for ignoring the social and cultural contexts of religious phenomena. Moreover, just as phenomenology in general has been criticized for its naive attitude towards language and cultural perspectivism, the phenomenology of religion too is subject to criticism as to the linguistic and cultural bias implicit in the analysis of ‘phenomena’ and ‘symbols.’ Thus, the results of free variations depend on the phenomenologists' cultural background (Allen 1987).


The Lifestyle Theory

The next theory is the lifestyle theory. This theory purports that individuals are targeted based on their lifestyle choices, and that these lifestyle choices expose them to criminal offenders and to situations in which crimes may be committed. Examples of lifestyle choices indicated by this theory include going out at night alone, living in "bad" parts of town, associating with known felons, being promiscuous, using alcohol excessively, and doing drugs.

In addition to theorizing that victimization is not random, but rather a part of the lifestyle the victim pursues, the lifestyle theory cites research that victims "share personality traits also commonly found in law violators, namely impulsivity and low self control" (Siegel, 2006). This statement was discussed in a psychology journal article by Jared Dempsey, Gary Fireman, and Eugene Wang, in which they note the correlation between victims and the perpetrators of crimes, both exhibiting impulsive and antisocial-like behaviors (2006). These behaviors may contribute to victimization because they lead individuals to put themselves at higher risk for victimization than their counterparts with more conservative lifestyles.


Strategic dominance

Example of strategic dominance

Consider a situation where two companies, called Startupo and Megacorp, are competing in a new market.

This market has one product that is sold in two different versions: the consumer version and the professional version. Both versions are equally profitable for the company selling them, and the companies’ only concern is to make more money by selling more units. However, due to practical constraints, a company can only manufacture one type of product.

Most people in the market (80%) are interested in the consumer version, and only a few (20%) are interested in the professional version. Each company can decide whether it wants to sell the consumer version or the professional version of the product. If both companies decide to sell the same type of product, then the companies will have to split the market for that product. Otherwise, each company will have the full consumer or professional market for itself.

As such, each company has two possible strategies to choose from, and there are four possible outcomes to the scenario:

  • Both companies enter the consumer market. This means that the companies split the consumer market (which accounts for 80% of the total market share), and that each company therefore gets 40% of the total market share.
  • Both companies enter the professional market. This means that the companies split the professional market (which accounts for 20% of the total market share), and that each company therefore gets 10% of the total market share.
  • Startupo enters the consumer market, and Megacorp enters the professional market. This means that Startupo gets the full consumer market (80% of the total market share), and Megacorp gets the full professional market (20% of the total market share).
  • Megacorp enters the consumer market, and Startupo enters the professional market. This means that Megacorp gets the full consumer market (80% of the total market share), and Startupo gets the full professional market (20% of the total market share).

Based on this, if a company chooses to enter the consumer market, then it will get either 40% or 80% of the total market share. Conversely, if a company chooses to enter the professional market, then it will get either 10% or 20% of the total market share.

Accordingly, for both companies, the (strictly) dominant strategy is to enter the consumer market, since they will end up with a bigger market share this way, regardless of which move the other company makes.

Conversely, for both companies, the (strictly) dominated strategy is to enter the professional market, since they will end up with a smaller market share this way, regardless of which move the other company makes.
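
The payoffs above can also be written out and checked mechanically. Here is a small sketch, using the market shares from the example as payoffs; the company names and strategy labels follow the scenario described above.

```python
# Payoff table: (Startupo's share, Megacorp's share) of the total market,
# indexed by the pair of strategies they choose.
payoffs = {
    ("consumer", "consumer"): (40, 40),
    ("consumer", "professional"): (80, 20),
    ("professional", "consumer"): (20, 80),
    ("professional", "professional"): (10, 10),
}
strategies = ["consumer", "professional"]

def strictly_dominates(mine, other, player):
    """True if `mine` beats `other` for `player` against every opponent strategy."""
    def payoff(s, opp):
        pair = (s, opp) if player == 0 else (opp, s)
        return payoffs[pair][player]
    return all(payoff(mine, opp) > payoff(other, opp) for opp in strategies)

for player, name in enumerate(["Startupo", "Megacorp"]):
    for s in strategies:
        others = [t for t in strategies if t != s]
        if all(strictly_dominates(s, t, player) for t in others):
            print(f"{name}: entering the {s} market is strictly dominant")
```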

Note that this scenario can become more complex by adding factors that often appear in real life, such as additional players, additional products, and different profitability margins for different products. However, although these additional factors make it more difficult to analyze the strategic dominance in a situation, the basic idea behind dominant and dominated strategies remains the same regardless of this added complexity.

Strategic dominance in single-player games

In scenarios where there is only one player, there can still be dominant and dominated strategies.

For example, consider a situation where you are walking along a street, and you need to eventually cross the road. Just as you reach the first of two identical crosswalks that you can use, the crosswalk light turns red. You now have two strategies to choose from:

  • Wait for the light at this crosswalk to turn green.
  • Keep walking until you reach the next crosswalk, and then cross there.

Given that your goal is to minimize the time spent waiting at the crosswalk, the dominant strategy in this case is to keep going until you reach the next crosswalk. This is because, if you decide to cross at the current crosswalk, you’re going to have to wait for the full length of time that it takes the light to turn green. Conversely, if you keep going until you reach the next crosswalk, then once you get there, one of three things will happen:

  • You will reach the second crosswalk while the light is green, in which case you won’t have to wait at all, which represents an outcome that is better than the outcome that you would have gotten if you chose to wait at the first crosswalk.
  • You will reach the second crosswalk while the light is already red, in which case you will have to wait for less time than you would have had to wait at the first crosswalk, which represents an outcome that is better than the outcome that you would have gotten if you chose to wait at the first crosswalk.
  • You will reach the second crosswalk just as the light turns red again, in which case you will have to wait the same length of time that you would have had to wait at the first crosswalk, which represents an outcome that is equal to the outcome that you would have gotten if you chose to wait at the first crosswalk.

Since the strategy of going for the next crosswalk leads to an outcome that is equal to or better than the outcome of waiting at the current crosswalk, it’s the (weakly) dominant strategy in this case.

Note that, in this game, though there is only one player, the concept of “luck”, in the form of whether the next light will be green or red, can be viewed as representing a second player when it comes to assessing the dominance of your strategies.

However, the concept of strategic dominance can occur in even simpler situations, where there is no element of luck. For example, consider a situation where you need to choose between buying one of two identical products, with the only difference between them being that one costs $5 and the other costs $10. Here, if your goal is to minimize the amount of money you spend, then buying the cheaper product is the dominant strategy.

Games with no strategic dominance

There are situations where there is no strategic dominance, meaning that none of the available strategies are dominant or dominated.

For example, in the game Rock, Paper, Scissors, each player can choose one of three possible moves, which lead to a win, a loss, or a draw with equal probability, depending on which move the other player makes:

  • Rock wins against scissors, loses against paper, and draws against rock.
  • Paper wins against rock, loses against scissors, and draws against paper.
  • Scissors win against paper, lose against rock, and draw against scissors.

Accordingly, none of the available strategies dominates the others, because none of the strategies is guaranteed to lead to an outcome that is as good as or better than the other strategies. Rather, there is a cycle-based (non-transitive) relation between the strategies, since choosing rock is better if the other player chooses scissors, and choosing scissors is better if the other player chooses paper, but choosing paper is better if the other person chooses rock.
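
The same kind of mechanical check shows that no move in Rock, Paper, Scissors even weakly dominates another. A short sketch, scoring a win as 1, a draw as 0, and a loss as -1:

```python
# Payoff to the row player: 1 = win, 0 = draw, -1 = loss.
payoff = {
    ("rock", "rock"): 0,      ("rock", "paper"): -1,     ("rock", "scissors"): 1,
    ("paper", "rock"): 1,     ("paper", "paper"): 0,     ("paper", "scissors"): -1,
    ("scissors", "rock"): -1, ("scissors", "paper"): 1,  ("scissors", "scissors"): 0,
}
moves = ["rock", "paper", "scissors"]

def weakly_dominates(a, b):
    """a is at least as good as b against every move, and strictly better against some."""
    at_least_as_good = all(payoff[(a, m)] >= payoff[(b, m)] for m in moves)
    better_somewhere = any(payoff[(a, m)] > payoff[(b, m)] for m in moves)
    return at_least_as_good and better_somewhere

dominances = [(a, b) for a in moves for b in moves if a != b and weakly_dominates(a, b)]
print(dominances)   # [] -- no move dominates any other
```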

In addition, note that if multiple strategies always lead to the same outcomes, then they are said to be equivalent. For example, if a company needs to choose between online and offline advertising based on the profit that each option leads to, and both options lead to a profit of $20,000, then these two options are equivalent to one another, at least as long as no other related outcomes are taken into account.

Using strategic dominance to guide your moves

To use strategic dominance to guide your moves, you should first assess the situation that you’re in, by identifying all the possible moves that you and other players can make, as well as the outcomes of those moves, and the favorability of each outcome. Once you have mapped the full game tree, you can determine the dominance of the strategies available to you, and use this information in order to choose the optimal strategy available to you, by preferring dominant strategies or ruling out dominated ones.

For example, let’s say you have three possible strategies, called A, B, and C:

  • If strategy A leads to better outcomes than strategies B and C, then strategy A is dominant, and you should use it.
  • If strategy A leads to an equal outcome as strategy B, but both lead to better outcomes than strategy C, then strategy C is dominated, and you should avoid it.

In addition, it can sometimes be beneficial to rule out your and your opponents’ strictly dominated strategies, which are always inferior to alternative strategies, before re-assessing the available moves, in a process called iterated elimination of strictly dominated strategies (or iterative deletion of strictly dominated strategies). It’s possible to also eliminate weakly dominated strategies in a similar manner, but the elimination process can be more complex in that case.
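
Here is a sketch of iterated elimination of strictly dominated strategies for a generic two-player game, with payoff matrices supplied as nested lists. The example matrices are made up; in this particular game the process reduces the game to a single strategy pair.

```python
def iterated_elimination(payoffs_a, payoffs_b):
    """Repeatedly remove strictly dominated rows (player A) and columns (player B).

    payoffs_a[i][j] and payoffs_b[i][j] are the players' payoffs when A plays
    row i and B plays column j. Returns the indices of the surviving rows and columns.
    """
    rows = list(range(len(payoffs_a)))
    cols = list(range(len(payoffs_a[0])))

    def row_is_dominated(r):
        return any(all(payoffs_a[o][c] > payoffs_a[r][c] for c in cols)
                   for o in rows if o != r)

    def col_is_dominated(c):
        return any(all(payoffs_b[r][o] > payoffs_b[r][c] for r in rows)
                   for o in cols if o != c)

    changed = True
    while changed:
        changed = False
        for r in list(rows):
            if len(rows) > 1 and row_is_dominated(r):
                rows.remove(r)
                changed = True
        for c in list(cols):
            if len(cols) > 1 and col_is_dominated(c):
                cols.remove(c)
                changed = True
    return rows, cols

# Made-up example: column 2 is dominated for B, then row 1 for A, then column 0 for B.
A = [[1, 1, 0],
     [0, 0, 2]]
B = [[0, 2, 1],
     [3, 1, 0]]
print(iterated_elimination(A, B))   # ([0], [1]) -- only (row 0, column 1) survives
```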

Finally, in situations where multiple available strategies lead to equal outcomes, you can pick one of them at random. In the case of games with multiple players, doing this has the added advantage of helping you make moves that are difficult for other players to predict, which can be beneficial in some situations.

Note: an associated concept is mixed strategies, which involves choosing among several strategies at random (based on some probability distribution), in order to avoid being predictable.

Using strategic dominance to predict people’s behavior

Understanding the concept of strategic dominance can help you predict other people’s behavior, which can allow you to better prepare for the moves that they will make.

Specifically, there are two key assumptions that you should keep in mind:

  • People will generally prefer to use their dominant strategies.
  • People will generally prefer to avoid their dominated strategies.

However, it’s also important to keep in mind that in certain situations, people might not pick a strategy that you think is dominant, or might pick a strategy that you think is dominated, for reasons such as:

  • They know something that you don’t.
  • They value outcomes in a different way than you expect them to.
  • They are irrational, so they won’t do what’s best for them from a strategic perspective.
  • They lack certain knowledge about the structure of the game (i.e., who are the players, and which strategies and payoffs are available), or about the other players (i.e., about the other players’ rationality and knowledge of the game and the other players).

You can account for such possibilities in various ways. For example, you can incorporate these possibilities into your predictions, in order to make the predictions more accurate, such as by noting that someone tends to choose their strategy based on ego rather than logic. Similarly, you can consider these possibilities when estimating the certainty associated with your predictions, even if you don’t modify the predictions themselves, in order to understand how uncertain your predictions are.

Overall, you can use the concept of strategic dominance to predict people’s behavior, by expecting them to generally use dominant strategies and avoid dominated ones. However, when doing this, it’s important to account for various factors that could interfere with your predictions, such as people’s irrationality, or your incomplete knowledge regarding people’s available moves.

Note: the term “mutual knowledge” refers to something that every player in a game knows. The term “common knowledge” refers to something that every player in a game knows, and that every player knows that every player knows, and so on.

