How to prevent falling for Jingle-Jangle fallacies?

What are some ways to avoid the Jingle-Jangle fallacies when it comes to choosing a form of measurement in psychology research?


The Jingle–Jangle Fallacy in Adolescent Autonomy in the Family: In Search of an Underlying Structure

The construct of autonomy has a rich, though quite controversial, history in adolescent psychology. The present investigation aimed to clarify the meaning and measurement of adolescent autonomy in the family. Based on theory and previous research, we examined whether two dimensions would underlie a wide range of autonomy-related measures, using data from two adolescent samples (N = 707, 51% girls, and N = 783, 59% girls, age range = 14–21 years). Clear evidence was found for a two-dimensional structure, with the first dimension reflecting “volition versus pressure”, that is, the degree to which adolescents experience a sense of volition and choice as opposed to feelings of pressure and coercion in the parent–adolescent relationship. The second dimension reflected “distance versus proximity”, which involves the degree of interpersonal distance in the parent–adolescent relationship. Whereas volition related to higher well-being, less problem behavior and a secure attachment style, distance was associated mainly with more problem behavior and an avoidant attachment style. These associations were not moderated by age. The discussion focuses on the meaning of adolescent autonomy and on the broader implications of the current findings.


What is “asking for it”?

On the face of it, it may seem logical to some people that a woman who’s drunk or wearing a short skirt is somehow “asking for it.” In reality, though, it’s absurd. To use an utterly ridiculous example, let’s say that my sexual kink is whacking men’s bums with a rubber chicken. Let’s also say that men wearing sunglasses are my biggest turn-on. Furthermore, let’s say I were to accost a man wearing sunglasses and get busy on his bum with my rubber chicken. Is there any chance that anyone would say that the rubber chickening is the man’s fault for wearing sunglasses, or that wearing sunglasses constituted implied consent? Is there anything at all that he could possibly do to make people conclude the rubber chickening was his fault? I highly doubt it.

So, we don’t blame the victim in this albeit ridiculous scenario, but people blame a woman for being sexually assaulted because she’s wearing a miniskirt? Even though one scenario is ridiculous and the other happens far too often, they are fundamentally the same. They’re both situations where the perpetrator is fully responsible for their actions, and the victim is 0% responsible.

The cognitive biases that tend to underlie victim-blaming may be easy to fall into, but that’s no excuse for anyone not to check in with themselves and reflect on who it is that’s actually done something wrong.



4. Current issues in fallacy theory

4.1 The nature of fallacies

A question that continues to dog fallacy theory is how we are to conceive of fallacies. There would be advantages to having a unified theory of fallacies. It would give us a systematic way of demarcating fallacies and other kinds of mistakes, it would give us a framework for justifying fallacy judgments, and it would give us a sense of the place of fallacies in our larger conceptual schemes. Some general definition of ‘fallacy’ is wanted, but the desire is frustrated because there is disagreement about the identity of fallacies. Are they inferential, logical, epistemic or dialectical mistakes? Some authors insist that they are all of one kind: Biro and Siegel, for example, hold that they are epistemic, and Pragma-dialectics that they are dialectical. There are reasons to think that not all fallacies fit easily into one category.

Together the Sophistical Refutations and Locke’s Essay are the dual sources of our inheritance of fallacies. However, for four reasons they make for uneasy bedfellows. First, the ad fallacies seem to have a built-in dialectical character which, it can be argued, Aristotle’s fallacies do not have (they are not sophistical refutations but are in sophistical refutations). Second, Aristotle’s fallacies are logical mistakes: they have no appropriate employment outside eristic argumentation, whereas the ad-fallacies are instances of ad-arguments, often appropriately used in dialogues. Third, the appearance condition is part of the Aristotelian inheritance, but it is not intimately connected with the ad-fallacies tradition. A fourth reason that contributes to the tension between the Aristotelian and Lockean traditions in fallacies is that the former grew out of philosophical problems, largely logical and metaphysical puzzles (consider the many examples in Sophistical Refutations), whereas the ad-fallacies are more geared to social and political topics of popular concern, the subject matter that most intrigues modern researchers on fallacy theory.

As we look back over our survey we cannot help but observe that fallacies have been identified in relation to some ideal or model of good arguments, good argumentation, or rationality. Aristotle’s fallacies are shortcomings of his ideal of deduction and proof, extended to contexts of refutation. The fallacies listed by Mill are errors of reasoning in a comprehensive model that includes both deduction and induction. Those who have defended SDF as the correct definition of ‘fallacy’ [10] take logic simpliciter or deductive validity as the ideal of rationality. Informal logicians view fallacies as failures to satisfy the criteria of what they consider to be a cogent argument. Defenders of the epistemic approach to fallacies see them as shortfalls of the standards of knowledge-generating arguments. Finally, those who are concerned with how we are to overcome our disagreements in a reasonable way will see fallacies as failures in relation to ideals of debate or critical discussions.

The standard treatment of the core fallacies has not emerged from a single conception of good argument or reasonableness but rather, like much of our unsystematic knowledge, has grown as a hodgepodge collection of items, proposed at various times and from different perspectives, that continues to draw our attention, even as the standards that originally brought a given fallacy to light are abandoned or absorbed into newer models of rationality. Hence, there is no single conception of good argument or argumentation to be discovered behind the core fallacies, and any attempt to force them all into a single framework must take care to avoid distorting the character originally attributed to each of them.

4.2 The appearance condition

From Aristotle to Mill the appearance condition was an essential part of the conception of fallacies. However, some of the new, post-Hamblin scholars have either ignored it (Finocchiaro, Biro and Siegel) or rejected it because appearances can vary from person to person, thus making the same argument a fallacy for the one who is taken in by the appearance, and not a fallacy for the one who sees past the appearances. This is unsatisfactory for those who think that arguments are either fallacies or not. Appearances, it is also argued, have no place in logical or scientific theories because they belong to psychology (van Eemeren and Grootendorst, 2004). But Walton (e.g., 2010) continues to consider appearances an essential part of fallacies, as does Powers (1995, 300), who insists that fallacies must “have an appearance, however quickly seen through, of being valid.” If the mistake in an argument is not masked by an ambiguity that makes it appear to be a better argument than it really is, Powers denies it is a fallacy.

The appearance condition of fallacies serves at least two purposes. First, it can be part of explanations of why reasonable people make mistakes in arguments or argumentation: it may be due in part to an argument’s appearing to be better than it really is. Second, it serves to divide mistakes into two groups: those which are trivial or the result of carelessness (for which there is no cure other than paying better attention), and those which we need to learn to detect through an increased awareness of their seductive nature. Without the appearance condition, it can be argued, no division can be made between these two kinds of errors: either there are no fallacies, or all mistakes in argument and/or argumentation are fallacies, a conclusion that some are willing to accept but which runs contrary to tradition. One can also respond that there is an alternative to using the appearance condition as the demarcation property between fallacies and casual mistakes, namely, frequency. Fallacies are those mistakes we must learn to guard against because they occur with noticeable frequency. To this it may be answered that ‘noticeable frequency’ is vague, and is perhaps best explained by the appearance condition.

4.3 Teaching

On the more practical level, there continues to be discussion about the value of teaching the fallacies to students. Is it an effective way for them to learn to reason well and to avoid bad arguments? One reason to think that it is not effective is that the list of fallacies is not complete, and that even if the group of core fallacies were extended to incorporate other fallacies we thought worth including, we could still not be sure that we had a complete prophylactic against bad arguments. Hence, we are better off teaching the positive criteria for good arguments/argumentation, which give us a fuller set of guidelines for good reasoning. But some (Pragma-dialectics and Johnson and Blair) do think that their stock of fallacies is a complete guard against errors, because they have specified a full set of necessary conditions for good arguments/argumentation and they hold that fallacies are just failures to meet one of these conditions.

Another consideration about the value of the fallacies approach to teaching good reasoning is that it tends to make students overly critical and leads them to see fallacies where there are not any; hence, it is maintained, we could better advance the instilling of critical thinking skills by teaching the positive criteria of good reasoning and arguments (Hitchcock, 1995). In response to this view, it is argued that, if the fallacies are taught in a non-perfunctory way which includes explanations of why they are fallacies (which normative standards they transgress), then a course taught around the core fallacies can be effective in instilling good reasoning skills (Blair 1995).

4.4 Biases

Recently there has been renewed interest in how biases are related to fallacies. Correia (2011) has taken Mill’s insight that biases are predisposing causes of fallacies a step further by connecting identifiable biases with particular fallacies. Biases can influence the unintentional committing of fallacies even where there is no intent to be deceptive, he observes. Taking biases to be “systematic errors that invariably distort the subject’s reasoning and judgment,” the picture drawn is that particular biases are activated by desires and emotions (motivated reasoning) and, once they are in play, they negatively affect the fair evaluation of evidence. Thus, for example, the “focusing illusion” bias inclines a person to focus on just a part of the evidence available, ignoring or denying evidence that might lead in another direction. Correia (2011, 118) links this bias to the fallacies of hasty generalization and straw man, suggesting that it is our desire to be right that activates the bias to focus more on positive or negative evidence, as the case may be. Other biases he links to other fallacies.

Thagard (2011) is more concerned to stress the differences between fallacies and biases than to find connections between them. He claims that the model of reasoning articulated by informal logic is not a good fit with the way that people actually reason and that only a few of the fallacies are relevant to the kinds of mistakes people actually make. Thagard’s argument depends on his distinction between argument and inference. Arguments, and fallacies, he takes to be serial and linguistic, but inferences are brain activities and are characterized as parallel and multi-modal. By “parallel” is meant that the brain carries out different processes simultaneously, and by “multi-modal” that the brain uses non-linguistic and emotional, as well as linguistic, representations in inferring. Biases (inferential error tendencies) can unconsciously affect inferring. “Motivated inference,” for example, “involves selective recruitment and assessment of evidence based on unconscious processes that are driven by emotional considerations of goals rather than purely cognitive reasoning” (2011, 156). Thagard volunteers a list of more than fifty of these inferential error tendencies. Because motivated inferences result from unconscious mental processes rather than explicit reasoning, the errors in inferences cannot be exposed simply by identifying a fallacy in a reconstructed argument. Dealing with biases requires identification of both conscious and unconscious goals of arguers, goals that can figure in explanations of why they incline to particular biases. “Overcoming people’s motivated inferences,” Thagard concludes, “is therefore more akin to psychotherapy than informal logic” (157), and the importance of fallacies is accordingly marginalized.

In response to these findings, one can admit their relevance to the pedagogy of critical thinking but still recall the distinction between what causes mistakes and what the mistakes are. The analysis of fallacies belongs to the normative study of arguments and argumentation, and to give an account of what the fallacy in a given argument is will involve making reference to some norm of argumentation. It will be an explanation of what the mistake in the argument is. Biases are relevant to understanding why people commit fallacies, and how we are to help them get past them, but they do not help us understand what the fallacy-mistakes are in the first place; this is not a question of psychology. Continued research at this intersection of interests will hopefully shed more light on both biases and fallacies.


Psychology graduate trying to read academic articles.

I work at an addiction rehab facility and am planning to go for an LPC in the fall. I’m trying to get ahead and read articles related to trauma and addiction, but I find articles really difficult to read and understand. Is there any advice on how to read them and grasp what they are saying more easily?

Understanding that will help you understand the design of a study and essentially boil it down to whether it was a good design or not.

Use Google Scholar to put in key terms for what you want to research, but keep in mind the year it was published too! In lots of fields, there are often key researchers who write prolifically.

You might want to start with review articles and then try empirical articles. Review articles should be labeled "review." Empirical articles will have a methods section. Skip meta-analyses for now. Also, have you read the book "The Body Keeps the Score"?

One of the classes you are likely to take to get your license will help you read those studies. I took Applying Psychological Science this past semester, and the entirety of the course was reading studies and evaluating them/ breaking them down. That should help a lot!

If you want the gist of an article, it goes abstract -> methods -> discussion. If you don't fully get the methods, that's OK; it's just to give you a general idea of what they did. The discussion section usually summarizes the findings and gives you the takeaways of the study.

If you want more of the theoretical background and lit review that's where you read the intro.

It takes practice to know what everything means and to follow along, but if you just want the gist of the findings, the abstract and discussion sections are usually quite helpful. I remember I used to not understand the results sections at all. The discussion is where those results are interpreted and put in a broader context, so sometimes I would read the discussion first and then go back and read more.


How to prevent falling for Jingle-Jangle fallacies? - Psychology

The thing that stuck out most to me in this chapter is the fallacies of emotion. A fallacy is an irrational thought that leads to illogical conclusions and, in turn, debilitative emotions. These fallacies are caused by our own self-talk. Self-talk is the nonvocal process of thinking, and on some level, self-talk occurs as a person interprets another’s behavior (152). There are seven fallacies of emotion that people believe to be true: the fallacy of perfection, the fallacy of approval, the fallacy of shoulds, the fallacy of overgeneralization, the fallacy of causation, the fallacy of helplessness and the fallacy of catastrophic expectations.

There is no story of the fallacies in this chapter, so I have included a journal about the three fallacies that have affected my life the most. In that paper, there are definitions for the fallacies of approval, shoulds and causation, but not the rest. The fallacy of perfection is when a person believes that any worthwhile communicator should be able to handle every situation with great confidence and perfect skill (153). The fallacy of overgeneralization is one of two things (155). The first is a belief based on a limited amount of evidence, such as when we say, “I’m so stupid! I can’t do anything right!” The second is when we exaggerate, like when we tell someone, “You never do anything around here!” They probably do something, but maybe not very often, so we should say, “I feel like you don’t help me very often.” The next fallacy is that of helplessness. The fallacy of helplessness is when a person believes that they have no control over their own satisfaction (157). They feel helpless about what they can do to be happy in life and believe that the world is in total control. The final fallacy is the fallacy of catastrophic expectations (157). People who believe that anything bad that can happen, will happen are the believers in the fallacy of catastrophic expectations.

YouTube Video

This is a trailer for She’s Out of my League. In this movie, a nerdy and awkward boy is noticed by a girl that he thinks is perfect. She loses her phone in the airport where he works and he is the one to find it and give it back to her. She then begins to have a relationship with him. His friends are constantly telling him that she is way too perfect for him. He starts trying to live up to her ex-boyfriend, Tim. He starts to think that if he isn’t perfect, then she is not going to like him anymore. This is an example of the fallacy of perfection.

Learning about these fallacies of emotion has helped me understand more about how we work. It has taught me how easily our emotions can cloud our judgment and how much our own self-talk contributes to our emotions and the way we feel.


Types of slippery slopes

  • Causal slopes, which revolve around the idea that a relatively minor initial action will lead to a relatively major final event.
  • Precedential slopes, which revolve around the idea that treating a relatively minor issue a certain way now will lead to us treating a relatively major issue the same way later on.
  • Conceptual slopes, which revolve around the idea that there is no meaningful difference between two things if it’s possible to get from one to the other through a series of small, nearly indistinguishable steps.

There is significant variation in terms of how different philosophers treat the different types of slippery slopes. However, in general, there are several characteristics that are shared between the different types and the different descriptions of slippery slope arguments:

  • A start-point that is relatively mild.
  • An end-point that is relatively extreme.
  • A process of transitioning from the start-point to the end-point, usually without the ability to stop in the middle.

In the sections below, we will explore each of these types of slippery slopes in more detail.

Causal slippery slopes

A causal slippery slope is an argument that suggests that undertaking an initial action will lead to a chain of events that will culminate in a dramatic outcome. For example, a causal slippery slope could involve arguing that if we help students who struggle by providing them with extra tutoring, then eventually we will simply give perfect grades to all students, regardless of whether they put in any effort.

As such, the basic structure of a causal slippery slope is the following:

“If we do [relatively minor thing] now, then it will cause [relatively major thing] to happen later.”

At least two events are necessary for a causal slippery slope, though any number of events can appear in between them, with each event in the chain occurring directly as a result of the previous one. Accordingly, a causal slippery slope will usually have the following structure in practice:

“If we allow [minor event] to happen now, then [another minor event] might happen later, leading to [a medium event], and finally to the possibility that [major event] will occur.”

These slopes often involve a positive-feedback mechanism, where the initial action in question will set off a chain reaction that reinforces itself. This potential feedback mechanism can be mentioned explicitly by the person proposing the slope, or it can be an implicit part of their argument.
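One way to make the chain structure above concrete is to note that, even if each link in a causal chain is individually likely, the probability of reaching the final "major event" can only shrink with each added link. A minimal sketch (the probabilities are invented for illustration, and the links are assumed to be independent):

```python
# Illustrative only: a four-link causal chain where each step has an
# (invented) 0.8 chance of actually leading to the next event.
link_probabilities = [0.8, 0.8, 0.8, 0.8]

# Assuming independent links, the chance that the final "major event"
# follows from the initial action is the product of the link probabilities.
p_final = 1.0
for p in link_probabilities:
    p_final *= p

print(round(p_final, 4))  # 0.4096
```

This is one reason causal slippery slope arguments are often weaker than they sound: the arguer presents each link as plausible, but the compound probability of the endpoint is much smaller than that of any single link.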

Note: the causal slippery slope is sometimes also referred to as a predictive slippery slope or an empirical slippery slope.

Precedential slippery slopes

A precedential slippery slope is an argument that suggests that if we set the precedent of treating something relatively minor a certain way now, then we will have to treat something relatively major the same way later on. For example, a precedential slippery slope could involve arguing that if we legalize a relatively harmless drug now, then we will also have to legalize a much more harmful drug later.

The basic structure of a precedential slippery slope is:

“If we treat [relatively minor thing now] a certain way now, then we will set a precedent which will force us to treat [relatively major thing] the same way later.”

As such, precedential slippery slopes are based on the need to treat similar cases in a consistent manner.

Note: the precedential slippery slope is sometimes also referred to as the fallacy of slippery precedents, in cases where its use is fallacious.

Conceptual slippery slopes

A conceptual slippery slope is an argument that suggests that if it’s possible to transition between two things using a series of small, nearly indistinguishable steps, then there is no meaningful difference between those two things, and they must be treated the exact same way. For example, a conceptual slippery slope could involve arguing that if we allow euthanasia for animals, then there is no reason why we shouldn’t also allow it for people.

As such, the basic structure of a conceptual slippery slope is:

“Since it’s possible to get from [first thing] to [second thing] through a series of small steps, there is no valid way to draw a distinction between them.”

This argument is based on the concept of vagueness and on the sorites paradox (also known as the paradox of the heap). This paradox revolves around the fact that removing a single grain of sand from a heap of sand doesn’t turn it into a non-heap, but that, at the same time, a single remaining grain of sand won’t be considered a heap, which means that at some point, the act of removing sand turned the heap into a non-heap, despite the fact that there is no clear line of demarcation between the two. Accordingly, this type of slippery slope argument often uses language such as “where do you draw the line?”.
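The sorites reasoning described above can be simulated in a few lines: start from something everyone would call a heap, then mechanically apply the premise "removing one grain from a heap still leaves a heap". This toy sketch (the starting count is arbitrary) shows how the premises force an absurd verdict:

```python
# Toy sorites: begin with a number of grains that clearly forms a heap.
grains = 10_000
is_heap = True  # our starting judgment

while grains > 1:
    grains -= 1      # remove a single grain
    # Inductive premise: one grain fewer never changes heap status,
    # so the judgment is carried along unchanged at every step.
    is_heap = is_heap

# The premises commit us to calling a single grain a heap,
# even though intuitively it is not -- that is the paradox.
print(grains, is_heap)  # 1 True
```

Nothing in the loop ever flips `is_heap` to `False`, which mirrors the point that there is no clear line of demarcation where the heap becomes a non-heap.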

Furthermore, this type of slippery slope often involves gradualism or incrementalism, where people’s commitment to a certain concept or course of action is tied to a series of small, closely related steps. Specifically, this occurs when the slippery slope argument suggests that if you take an initial step, then there is no reason for you not to accept the next step, and the one after that, until you reach the final step, which is usually highly negative. Such arguments pressure you either to give up on your initial commitment or to demonstrate that there is an inconsistency in your commitments.

Note: because of its association with the sorites paradox and the concept of assimilation, the conceptual slippery slope is sometimes referred to as a sorites slippery slope or as the slippery assimilation fallacy.


The seven deadly sins of statistical misinterpretation, and how to avoid them

Winnifred Louis receives funding from the Australian Research Council and the Social Sciences and Humanities Research Council of Canada. She teaches statistics at the University of Queensland and is a social psychologist, a peace psychologist, and a longstanding activist for causes such as environmental sustainability and anti-racism.

Cassandra Chapman receives funding in the form of a PhD scholarship from the Department of Education and Training of the Australian government. She previously worked in marketing and fundraising for various not-for-profits and still collaborates with organisations in that sector.

Partners

University of Queensland provides funding as a member of The Conversation AU.

Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.


The Intention-Behavior Gap and Implementation Intentions

An article by David Scholz and Leonie Kott (translated by Anne Ridder)

Do you know those situations? You are trying to live in a more environmentally friendly way by reducing your waste. But just as you step into your local bakery and receive your order in three different bags, with pieces of cake separated by plastic layers and your coffee to go served with a plastic lid, you fail to live up to your new intentions. Why does that happen so often, even though you were determined to follow your goal? One explanation is the “intention-behavior gap”. In the following article, we will go into detail, explain the phenomenon, and give some tips and tricks for handling the obstacles on your way to achieving your intentions.

What is the Intention-Behavior-Gap?

The intention-behavior gap describes the fact that people sometimes fail to behave in the way they would like to, despite having strong intentions (Sheeran & Webb, 2016). Research has found that most people’s intentions (e.g. “I want to produce less waste”) are only weakly associated with their actions (e.g. telling the baker that I do not need a bag) (Webb & Sheeran, 2006).

How Does the Gap Develop?

Sheeran and Webb (2016) provide an overview of the current state of research, which shows that there are two main reasons why intentions often do not result in behavior. First, the quality of the intention needs to be assessed. The quality is determined by three factors:

a) The concreteness and difficulty of the goal
Does the goal reflect a concrete behavior? Is it easy to reach?

b) The background of the intention
Is the intention for change internal or external? Is it in accordance with your values?

c) The stability (time) of the intention
Has the intention already existed for a long time?

Even intentions of good quality do not always lead to the intended behavior. Self-regulatory difficulties are the second criterion for successfully implementing your intentions. This means there can be problems with implementing your intentions because they are not acted on, not continued, or not finished.

a) Not acting:
A common problem is that you simply forget what you wanted to do because of all the distractions of daily life. Even if you do not forget your intentions, it is easy to miss opportunities where you could have acted on them. This happens especially when those opportunities come up irregularly or pass too quickly to seize (e.g., the vendor packs your goods too fast). Finally, an intention can also fail because you were not properly prepared. If you have, for example, bought a bread box but no reusable bag, you still need a bag at the bakery and cannot refuse to take one.

b) Not continuing:
Even if you once succeeded in acting on your intention, it can still be difficult to keep behaving that way. One reason is that you do not keep track of your progress and thus quickly forget your intention. Other reasons may be that negative thoughts or emotions arise towards the behavior (e.g. “I do not always want to receive special treatment”) or that old habits come up again (e.g. “I just said yes to a bag automatically”). Lastly, you might simply lack self-control, since you need a lot of willpower throughout the day (so-called ego depletion).

c) Not finishing:
Two problems might come up: either you feel that you have almost accomplished your goal and stop putting in effort, or you have reached the goal but do not stop putting in effort. The former can mean that the goal is never accomplished. The latter can mean that you do not have the time and energy to pursue new goals.

Furthermore, there are behaviors that do not bring you closer to your goal but that you still engage in, because a lot of time, work and resources have already been invested (the so-called “sunk-cost fallacy”). For example, someone keeps an extra freezer in the basement even though there is no real use for it. When asked why, thoughts like “But I spent so much money on it” come up.

Since so much can go wrong, it might seem surprising that intentions ever result in behavior. Sheeran and Webb (2016) stress that those obstacles are neither unavoidable nor unresolvable. It quite often happens that you can implement new intentions on the first try. If you do not succeed right away, you can try to find the reason for your failure and make a plan for doing better the next time.

How to Overcome the Gap?

To overcome the obstacles named above, the authors propose so-called “if-then plans” or “implementation intentions” as a useful method. For promoting environmentally friendly behavior in particular, implementation intentions are said to be an especially effective and easy method (Grimmer & Miles, 2016). One explanation is that they help the preferred behavior occur automatically in critical situations.

The method works like this:

  1. You start by imagining what you want to achieve as vividly as possible. You can close your eyes and try to envisage exactly what that feels and looks like.
  2. What are the exact steps to achieve that goal? Where do you have to take those steps? (e.g. bakery, fast-food restaurant, etc.)
  3. Sometimes things do not work out as planned, and certain things prevent us from doing what we planned to do. What are the main obstacles to achieving your goal? (e.g. it feels inconvenient for me to ask, the product is packed too fast, etc.)
  4. How do you want to act in this situation the next time you encounter it? (e.g. I mention right away that I do not need a bag when I order, so I do not miss the opportunity.)
  5. Formulate your if-then plan in advance. (e.g. If I order at the bakery, (then) I mention right away that I do not need a bag. If the situation feels uncomfortable, (then) I remain as friendly as possible.)

Another effective method, which can be combined with the if-then method, is closely observing your own progress toward the goal. To make this even more effective, you can write down your progress (e.g., in a diary) or tell your social contacts about it (e.g., tell a friend how proud you are that you finally achieved your goal).

If-then plans mostly help you to get started with the preferred behavior and to act accordingly. Keeping track of your progress helps you to continue to act and to realize when that goal is achieved.

Drawing on the example mentioned earlier, the wish to reduce waste, we would like to illustrate how these techniques can be used successfully.

Let’s suppose you decide to lower your waste production. Now, take 5-10 minutes to give it some thought.

First of all, it makes sense to think about the quality of your intention:

  1. Goal setting: “Producing less waste” might be easy, but it is highly unspecific. The decision to produce no waste at all (zero waste) would be very precise but highly unrealistic. It is up to you to find your best personal compromise. How about starting with the goal: I do not want to buy fruit and vegetables that are packed in plastic!
  2. Origin: Rethink your goal. Is it personally relevant, or did someone else ask you to pursue it? If the goal is not yet personally relevant to you, it would make sense to learn more about the topic or to select another goal. For example, you might consider your car use, which has a higher impact on the environment than your plastic waste. Why not start there? For our example, let’s assume you still decide to produce less plastic waste.
  3. Stability of your intention: Try to determine whether you are just acting in the heat of the moment (e.g., you just saw a documentary about the consequences of plastic in our oceans) or whether the topic has been occupying you for a longer period already. If you catch yourself basing your intention on the heat of the moment, it makes sense to give the topic some more attention first.

Well done, we now have a solid intention. Next, it is important to think about the obstacles you might encounter along the way so you can act against them directly. Think about situations in which you will be able to act on your new intention, but also about what might hinder you from acting, and make some preparations. A good opportunity would be a supermarket, since supermarkets sell lots of fruits and vegetables. Which obstacles could you possibly encounter? Are you missing an alternative to plastic bags? You could, for example, buy a fabric bag instead.

It is best to formulate an if-then plan to ensure that you will act as planned and continue doing so. An example could be: If I go to the supermarket, (then) I take a fabric bag with me. Or: If I can choose between cheap fruit packed in plastic and more expensive fruit without plastic, (then) I will buy the fruit without plastic. Or: If the only option is vegetables packed in plastic, (then) I will not buy them. Or: If I need fruits or vegetables, (then) I will go to the farmers market (and will decline the offer of a plastic bag). Or, if you really do need the vegetables: If there are only vegetables packed in plastic, (then) I will complain to the manager.

Closely observe your progress in whatever form suits you. If you are rather structured and barely need the help of friends or family, you could mark in a calendar the days on which your plastic waste bin is full. Do you find it more helpful to be supported by others? Tell your friends about your achievements. Don’t forget to keep track and to recognize when a goal is truly achieved. Under ideal circumstances, your if-then plan makes the behavior automatic, and you can move on to work on other behavioral patterns.

Grimmer, M., & Miles, M. P. (2017). With the best of intentions: A large sample test of the intention-behaviour gap in pro-environmental consumer behaviour. International Journal of Consumer Studies, 41(1), 2-10.

Sheeran, P., & Webb, T. L. (2016). The intention–behavior gap. Social and Personality Psychology Compass, 10(9), 503-518.

Webb, T. L., & Sheeran, P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132(2), 249-268.


Fallacies & Pitfalls in Psychology


I've gathered 10 of the most common fallacies and pitfalls that plague psychological testing and assessment, and provided a brief definition and discussion of each.

The list, of course, is far from exhaustive, but these 10 often show up in clinical, forensic, and other psychological assessments and I'm guessing that many if not all of us have gotten tangled up in them at one time or another.

They are:

  • mismatched validity
  • confirmation bias
  • confusing retrospective and predictive accuracy (switching conditional probabilities)
  • unstandardizing standardized tests
  • ignoring the effects of low base rates
  • misinterpreting dual high base rates
  • the perfect conditions fallacy
  • financial bias
  • ignoring the effects of audio-recording, video-recording, or the presence of third-party observers
  • uncertain gatekeeping

These assessment fallacies and pitfalls are discussed in more detail in the articles and other materials on this site, but it seemed worthwhile to draw them together.

Mismatched Validity

Some tests are useful in diverse situations, but no test works well for all tasks with all people in all situations. In his classic 1967 article, Gordon Paul helped move us away from the oversimplified search for effective therapies toward a more difficult but meaningful question: "What treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances?"

Selecting assessment instruments involves similarly complex questions, such as: "Has research established sufficient reliability and validity (as well as sensitivity, specificity, and other relevant features) for this test, with an individual from this population, for this task (i.e., the purpose of the assessment), in this set of circumstances?" It is important to note that as the population, task, or circumstances change, the measures of validity, reliability, sensitivity, etc., will also tend to change.

To determine whether tests are well-matched to the task, individual, and situation at hand, it is crucial that the psychologist ask a basic question at the outset: Why--exactly--am I conducting this assessment?

Confirmation Bias

Often we tend to seek, recognize, and value information that is consistent with our attitudes, beliefs, and expectations. If we form an initial impression, we may favor findings that support that impression, and discount, ignore, or misconstrue data that don't fit.

This premature cognitive commitment to an initial impression--which can form a strong cognitive set through which we sift all subsequent findings--is similar to the logical fallacy of hasty generalization.

To help protect ourselves against confirmation bias (in which we give preference to information that confirms our expectations), it is useful to search actively for data that disconfirm our expectations, and to try out alternate interpretations of the available data.

Confusing Retrospective & Predictive Accuracy (Switching Conditional Probabilities)

Predictive accuracy begins with the individual's test results and asks: What is the likelihood, expressed as a conditional probability, that a person with these results has condition (or ability, aptitude, quality, etc.) X? Retrospective accuracy begins with the condition (or ability, aptitude, quality) X and asks: What is the likelihood, expressed as a conditional probability, that a person who has X will show these test results? Confusing the "directionality'' of the inference (e.g., the likelihood that those who score positive on a hypothetical predictor variable will fall into a specific group versus the likelihood that those in a specific group will score positive on the predictor variable) causes many errors.

The fallacious inference runs like this:

People with condition X are overwhelmingly likely to have these specific test results.
Person Y has these specific test results.
Therefore: Person Y is overwhelmingly likely to have condition X.
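The gap between the two conditional probabilities can be made concrete with Bayes' theorem. The numbers below are hypothetical (they do not come from the text); they simply illustrate how a high retrospective accuracy, P(results | X), can coexist with a low predictive accuracy, P(X | results), when X is rare:

```python
# Hypothetical numbers, chosen only for illustration.
p_x = 0.01                    # base rate of condition X in the population
p_results_given_x = 0.95      # retrospective accuracy: P(results | X)
p_results_given_not_x = 0.10  # rate of these results among people without X

# Total probability of seeing these test results
p_results = (p_results_given_x * p_x
             + p_results_given_not_x * (1 - p_x))

# Bayes' theorem gives the predictive accuracy: P(X | results)
p_x_given_results = p_results_given_x * p_x / p_results

print(round(p_results_given_x, 2))  # 0.95 -- "overwhelmingly likely"
print(round(p_x_given_results, 2))  # 0.09 -- Person Y is probably NOT an X
```

Switching the direction of the conditional probability here turns a 95% figure into roughly 9%, which is exactly the error the syllogism above commits.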

Unstandardizing Standardized Tests

Standardized tests gain their power from their standardization. Norms, validity, reliability, specificity, sensitivity, and similar measures emerge from an actuarial base: a well-selected sample of people providing data (through answering questions, performing tasks, etc.) in response to a uniform procedure in (reasonably) uniform conditions. When we change the instructions, or the test items themselves, or the way items are administered or scored, we depart from that standardization and our attempts to draw on the actuarial base become questionable.

There are other ways in which standardization can be defeated. People may show up for an assessment session without adequate reading glasses, or having taken cold medication that affects their alertness, or having experienced a family emergency or loss that leaves them unable to concentrate, or having stayed up all night with a loved one and now can barely keep their eyes open. The professional conducting the assessment must be alert to these situational factors, how they can threaten the assessment's validity, and how to address them effectively.

Any of us who conduct assessments can fall prey to these same situational factors and, on a given day, be unable to function adequately. We can also fall short through lack of competence. It is important to administer only those tests for which there has been adequate education, training, and supervised experience. We may function well in one area -- e.g., counseling psychology, clinical psychology, sport psychology, organizational psychology, school psychology, or forensic psychology -- and falsely assume that our competence transfers easily to the other areas. It is our responsibility to recognize the limits of competence and to make sure that any assessment is based on adequate competence in the relevant areas of practice, the relevant issues, and the relevant instruments.

In the same way that searching for disconfirming data and alternative explanations can help avoid confirmation bias, it can be helpful to search for conditions, incidents, or factors that may be undermining the validity of the assessment so that they can be taken into account and explicitly addressed in the assessment report.

Ignoring the Effects of Low Base Rates

Ignoring base rates can play a role in many testing problems but very low base rates seem particularly troublesome. Imagine you've been commissioned to develop an assessment procedure that will identify crooked judges so that candidates for judicial appointment can be screened. It's a difficult challenge, in part because only 1 out of 500 judges is (hypothetically speaking) dishonest.

You pull together all the actuarial data you can locate and find that you are able to develop a screening test for crookedness based on a variety of characteristics, personal history, and test results. Your method is 90% accurate.

When your method is used to screen the next 5,000 judicial candidates, there might be 10 candidates who are crooked (because about 1 out of 500 is crooked). A 90% accurate screening method will identify 9 of these 10 crooked candidates as crooked and one as honest.

So far, so good. The problem is the 4,990 honest candidates. Because the screening is wrong 10% of the time, and the only way for the screening to be wrong about honest candidates is to identify them as crooked, it will falsely classify 10% of the honest candidates as crooked. Therefore, this screening method will incorrectly classify 499 of these 4,990 honest candidates as crooked.

So out of the 5,000 candidates who were screened, the 90% accurate test has classified 508 of them as crooked (i.e., 9 who actually were crooked and 499 who were honest). Every 508 times the screening method indicates crookedness, it tends to be right only 9 times. And it has falsely branded 499 honest people as crooked.
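The arithmetic in the judicial-screening example can be reproduced in a few lines. This sketch uses exactly the numbers from the text (a 1-in-500 base rate and a 90%-accurate screen):

```python
# Reproducing the judicial-screening arithmetic from the text.
candidates = 5000
base_rate = 1 / 500   # 1 in 500 judges is (hypothetically) crooked
accuracy = 0.90       # the screen classifies correctly 90% of the time

crooked = round(candidates * base_rate)   # 10 crooked candidates
honest = candidates - crooked             # 4990 honest candidates

true_positives = round(crooked * accuracy)          # 9 crooked flagged
false_positives = round(honest * (1 - accuracy))    # 499 honest flagged

flagged = true_positives + false_positives  # 508 flagged as crooked
precision = true_positives / flagged        # share of flags that are right

print(flagged)              # 508
print(round(precision, 3))  # 0.018 -- under 2% of "crooked" flags are correct
```

Despite the headline "90% accurate" figure, a positive result from this screen is wrong more than 98% of the time, which is the cost of ignoring the low base rate.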

Misinterpreting Dual High Base Rates

As part of a disaster response team, you are flown in to work at a community mental health center in a city devastated by a severe earthquake. Taking a quick look at the records the center has compiled, you note that of the 200 people who have come for services since the earthquake, there are 162 who are of a particular religious faith and are diagnosed with PTSD related to the earthquake, and 18 of that faith who came for services unrelated to the earthquake. Of those who are not of that faith, 18 have been diagnosed with PTSD related to the earthquake, and 2 have come for services unrelated to the earthquake.

It seems almost self-evident that there is a strong association between that particular religious faith and developing PTSD related to the earthquake: 81% of the people who came for services were of that religious faith and had developed PTSD. Perhaps this faith makes people vulnerable to PTSD. Or perhaps it is a more subtle association: this faith might make it easier for people with PTSD to seek mental health services.

But the inference of an association is a fallacy: religious faith and the development of PTSD in this community are independent factors. Ninety percent of all people who seek services at this center happen to be of that specific religious faith (i.e., 90% of those who had developed PTSD and 90% of those who had come for other reasons), and 90% of all people who seek services after the earthquake (i.e., 90% of those with that particular religious faith and 90% of those who are not of that faith) have developed PTSD. The two factors appear to be associated because both have high base rates, but they are statistically unrelated.
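The independence claim can be checked directly from the counts given in the example. A minimal sketch (using exact integer cross-multiplication to avoid floating-point comparisons):

```python
# Contingency table from the earthquake example (counts of clients):
#                       PTSD    other reasons
# particular faith       162        18
# other faiths            18         2
faith_ptsd, faith_other = 162, 18
other_ptsd, other_other = 18, 2

total = faith_ptsd + faith_other + other_ptsd + other_other  # 200
faith_total = faith_ptsd + faith_other   # 180  -> P(faith) = 0.9
ptsd_total = faith_ptsd + other_ptsd     # 180  -> P(PTSD)  = 0.9

# Independence holds iff P(faith and PTSD) == P(faith) * P(PTSD),
# i.e. 162/200 == (180/200) * (180/200). Cross-multiplied, exactly:
independent = faith_ptsd * total == faith_total * ptsd_total

print(independent)  # True -- faith and PTSD are statistically unrelated here
```

The 162 in the faith-and-PTSD cell is exactly what independence predicts (0.9 × 0.9 × 200), so the apparent 81% "association" is entirely an artifact of the two high base rates.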

Perfect Conditions Fallacy

Especially when we're hurried, we like to assume that "all is well," that in fact "conditions are perfect." If we don't check, we may not discover that the person we're assessing for a job, a custody hearing, a disability claim, a criminal case, asylum status, or a competency hearing took standardized psychological tests and completed other phases of formal assessment under conditions that significantly distorted the results. For example, the person may have forgotten the glasses they need for reading, be suffering from a severe headache or illness, be using a hearing aid that is not functioning well, be taking medication that impairs cognition or perception, have forgotten to take needed psychotropic medication, have experienced a crisis that makes it difficult to concentrate, be in physical pain, or have trouble understanding the language in which the assessment is conducted.

Financial Bias

It is a very human error to assume that we are immune to the effects of financial bias. But a financial conflict of interest can subtly -- and sometimes not so subtly -- affect the ways in which we gather, interpret, and present even the most routine data. This principle is reflected in well-established forensic texts and formal guidelines prohibiting liens and any other form of fee that is contingent on the outcome of a case. The Specialty Guidelines for Forensic Psychologists, for example, state: "Forensic psychologists do not provide professional services to parties to a legal proceeding on the basis of 'contingent fees,' when those services involve the offering of expert testimony to a court or administrative body, or when they call upon the psychologist to make affirmations or representations intended to be relied upon by third parties."

Ignoring Effects of Audio-recording, Video-recording or the Presence of Third-party Observers

Empirical research has identified ways in which audio-recording, video-recording, or the presence of third parties can affect the responses (e.g., various aspects of cognitive performance) of people during psychological and neuropsychological assessment. Ignoring these potential effects can create an extremely misleading assessment. Part of adequate preparation for an assessment that will involve recording or the presence of third parties is reviewing the relevant research and professional guidelines.

Uncertain Gatekeeping

Psychologists who conduct assessments are gatekeepers of sensitive information that may have profound and lasting effects on the life of the person who was assessed. The gatekeeping responsibilities exist within a complex framework of federal (e.g., HIPAA) and state legislation and case law as well as other relevant regulations, codes, and contexts.

The following scenario illustrates some gatekeeping decisions psychologists may be called upon to make. This passage is taken verbatim from Ethics in Psychotherapy & Counseling, 4th Edition:

A 17-year-old boy comes to your office and asks for a comprehensive psychological evaluation. He has been experiencing some headaches, anxiety, and depression. A high-school dropout, he has been married for a year and has a one-year-old baby, but has left his wife and child and returned to live with his parents. He works full time as an auto mechanic and has insurance that covers the testing procedures. You complete the testing.

During the following year you receive requests for information about the testing from:

  • The boy's physician, an internist
  • The boy's parents, who are concerned about his depression
  • The boy's employer, in connection with a worker's compensation claim filed by the boy
  • The attorney for the insurance company that is contesting the worker's compensation claim
  • The attorney for the boy's wife, who is suing for divorce and for custody of the baby
  • The boy's attorney, who is considering suing you for malpractice because he does not like the results of the tests

Each of the requests asks for the full formal report, the original test data, and copies of each of the tests you administered (for example, instructions and all items for the MMPI-2).

To which of these people are you ethically or legally obligated to supply all information requested, partial information, a summary of the report, or no information at all? Which requests require having the boy's written informed consent before information can be released?

It is unfortunately all too easy, in the crush of a busy schedule or a hurried lapse of attention, to release data to those who are not legally or ethically entitled to it, sometimes with disastrous results. Clarifying these issues while planning an assessment is important because if the psychologist does not clearly understand them, it is impossible to communicate the information effectively as part of the process of informed consent and informed refusal. Information about who will or won't have access to an assessment report may be the key to an individual's decision to give or withhold informed consent for an assessment. It is the psychologist's responsibility to remain aware of the evolving legal, ethical, and practical frameworks that inform gatekeeping decisions.


4. Current issues in fallacy theory

4.1 The nature of fallacies

A question that continues to dog fallacy theory is how we are to conceive of fallacies. There would be advantages to having a unified theory of fallacies: it would give us a systematic way of demarcating fallacies from other kinds of mistakes, it would give us a framework for justifying fallacy judgments, and it would give us a sense of the place of fallacies in our larger conceptual schemes. Some general definition of 'fallacy' is wanted, but the desire is frustrated because there is disagreement about the identity of fallacies. Are they inferential, logical, epistemic, or dialectical mistakes? Some authors insist that they are all of one kind: Biro and Siegel, for example, that they are epistemic, and Pragma-dialectics that they are dialectical. There are reasons to think that not all fallacies fit easily into one category.

Together the Sophistical Refutations and Locke's Essay are the dual sources of our inheritance of fallacies. However, for four reasons they make for uneasy bedfellows. First, the ad-fallacies seem to have a built-in dialectical character, which, it can be argued, Aristotle's fallacies do not have (they are not sophistical refutations but are in sophistical refutations). Second, Aristotle's fallacies are logical mistakes: they have no appropriate employment outside eristic argumentation, whereas the ad-fallacies are instances of ad-arguments, often appropriately used in dialogues. Third, the appearance condition is part of the Aristotelian inheritance, but it is not intimately connected with the ad-fallacies tradition. A fourth reason that contributes to the tension between the Aristotelian and Lockean traditions in fallacies is that the former grew out of philosophical problems, largely logical and metaphysical puzzles (consider the many examples in Sophistical Refutations), whereas the ad-fallacies are more geared to social and political topics of popular concern, the subject matter that most intrigues modern researchers on fallacy theory.

As we look back over our survey, we cannot help but observe that fallacies have been identified in relation to some ideal or model of good arguments, good argumentation, or rationality. Aristotle's fallacies are shortcomings of his ideal of deduction and proof, extended to contexts of refutation. The fallacies listed by Mill are errors of reasoning in a comprehensive model that includes both deduction and induction. Those who have defended SDF as the correct definition of 'fallacy' [10] take logic simpliciter or deductive validity as the ideal of rationality. Informal logicians view fallacies as failures to satisfy the criteria of what they consider to be a cogent argument. Defenders of the epistemic approach to fallacies see them as shortfalls of the standards of knowledge-generating arguments. Finally, those who are concerned with how we are to overcome our disagreements in a reasonable way will see fallacies as failures in relation to ideals of debate or critical discussions.

The standard treatment of the core fallacies has not emerged from a single conception of good argument or reasonableness but rather, like much of our unsystematic knowledge, has grown as a hodgepodge collection of items, proposed at various times and from different perspectives, that continues to draw our attention even as the standards that originally brought a given fallacy to light are abandoned or absorbed into newer models of rationality. Hence, there is no single conception of good argument or argumentation to be discovered behind the core fallacies, and any attempt to force them all into a single framework must take care to avoid distorting the character originally attributed to each of them.

4.2 The appearance condition

From Aristotle to Mill the appearance condition was an essential part of the conception of fallacies. However, some of the new, post-Hamblin scholars have either ignored it (Finocchiaro, Biro and Siegel) or rejected it because appearances can vary from person to person, thus making the same argument a fallacy for the one who is taken in by the appearance and not a fallacy for the one who sees past the appearances. This is unsatisfactory for those who think that arguments either are fallacies or are not. Appearances, it is also argued, have no place in logical or scientific theories because they belong to psychology (van Eemeren and Grootendorst, 2004). But Walton (e.g., 2010) continues to consider appearances an essential part of fallacies, as does Powers (1995, 300), who insists that fallacies must "have an appearance, however quickly seen through, of being valid." If the mistake in an argument is not masked by an ambiguity that makes it appear to be a better argument than it really is, Powers denies that it is a fallacy.

The appearance condition of fallacies serves at least two purposes. First, it can be part of explanations of why reasonable people make mistakes in arguments or argumentation: it may be due in part to an argument's appearing to be better than it really is. Second, it serves to divide mistakes into two groups: those which are trivial or the result of carelessness (for which there is no cure other than paying better attention), and those which we need to learn to detect through an increased awareness of their seductive nature. Without the appearance condition, it can be argued, no division can be made between these two kinds of errors: either there are no fallacies, or all mistakes in argument and/or argumentation are fallacies, a conclusion that some are willing to accept but which runs contrary to tradition. One can also respond that there is an alternative to using the appearance condition as the demarcation property between fallacies and casual mistakes, namely frequency: fallacies are those mistakes we must learn to guard against because they occur with noticeable frequency. To this it may be answered that 'noticeable frequency' is vague, and is perhaps best explained by the appearance condition.

4.3 Teaching

On the more practical level, there continues to be discussion about the value of teaching the fallacies to students. Is it an effective way for them to learn to reason well and to avoid bad arguments? One reason to think that it is not effective is that the list of fallacies is not complete, and that even if the group of core fallacies were extended to incorporate other fallacies we thought worth including, we could still not be sure that we had a complete prophylactic against bad arguments. Hence, we are better off teaching the positive criteria for good arguments/argumentation, which give us a fuller set of guidelines for good reasoning. But some (Pragma-dialectics, and Johnson and Blair) do think that their stock of fallacies is a complete guard against errors, because they have specified a full set of necessary conditions for good arguments/argumentation and they hold that fallacies are just failures to meet one of these conditions.

Another consideration about the value of the fallacies approach to teaching good reasoning is that it tends to make students overly critical and to lead them to see fallacies where there are not any; hence, it is maintained, we could better advance the instilling of critical-thinking skills by teaching the positive criteria of good reasoning and arguments (Hitchcock, 1995). In response to this view, it is argued that, if the fallacies are taught in a non-perfunctory way which includes explanations of why they are fallacies (which normative standards they transgress), then a course taught around the core fallacies can be effective in instilling good reasoning skills (Blair 1995).

4.4 Biases

Recently there has been renewed interest in how biases are related to fallacies. Correia (2011) has taken Mill's insight that biases are predisposing causes of fallacies a step further by connecting identifiable biases with particular fallacies. Biases can influence the unintentional committing of fallacies even where there is no intent to be deceptive, he observes. Taking biases to be "systematic errors that invariably distort the subject's reasoning and judgment," the picture drawn is that particular biases are activated by desires and emotions (motivated reasoning) and, once they are in play, they negatively affect the fair evaluation of evidence. Thus, for example, the "focussing illusion" bias inclines a person to focus on just a part of the evidence available, ignoring or denying evidence that might lead in another direction. Correia (2011, 118) links this bias to the fallacies of hasty generalization and straw man, suggesting that it is our desire to be right that activates the bias to focus more on positive or negative evidence, as the case may be. Other biases he links to other fallacies.

Thagard (2011) is more concerned to stress the differences between fallacies and biases than to find connections between them. He claims that the model of reasoning articulated by informal logic is not a good fit with the way that people actually reason, and that only a few of the fallacies are relevant to the kinds of mistakes people actually make. Thagard's argument depends on his distinction between argument and inference. Arguments, and fallacies, he takes to be serial and linguistic, but inferences are brain activities and are characterized as parallel and multi-modal. By "parallel" is meant that the brain carries out different processes simultaneously, and by "multi-modal" that the brain uses non-linguistic and emotional, as well as linguistic, representations in inferring. Biases (inferential error tendencies) can unconsciously affect inferring. "Motivated inference," for example, "involves selective recruitment and assessment of evidence based on unconscious processes that are driven by emotional considerations of goals rather than purely cognitive reasoning" (2011, 156). Thagard volunteers a list of more than fifty of these inferential error tendencies. Because motivated inferences result from unconscious mental processes rather than explicit reasoning, the errors in inferences cannot be exposed simply by identifying a fallacy in a reconstructed argument. Dealing with biases requires identification of both conscious and unconscious goals of arguers, goals that can figure in explanations of why they incline to particular biases. "Overcoming people's motivated inferences," Thagard concludes, "is therefore more akin to psychotherapy than informal logic" (157), and the importance of fallacies is accordingly marginalized.

In response to these findings, one can admit their relevance to the pedagogy of critical thinking but still recall the distinction between what causes mistakes and what the mistakes are. The analysis of fallacies belongs to the normative study of arguments and argumentation, and giving an account of what the fallacy in a given argument is will involve making reference to some norm of argumentation. It will be an explanation of what the mistake in the argument is. Biases are relevant to understanding why people commit fallacies, and how we are to help them get past them, but they do not help us understand what the fallacy-mistakes are in the first place; that is not a question of psychology. Continued research at this intersection of interests will hopefully shed more light on both biases and fallacies.


Graduated psychology student trying to read academic articles.

I work at an addiction rehab facility and am planning on going for an LPC in the fall. I’m trying to get ahead and read articles related to trauma and addiction, but I find articles really difficult to read and understand. Is there any advice on how to read them and grasp what they are saying more easily?

Understanding that will help you understand the design of a study and essentially boil it down to whether it was a good design or not.

Use google scholar to put in key terms for what you want to research but keep in mind the year it was published too! In lots of fields, there are often key researchers that write prolifically.

You might want to start with review articles and then try empirical articles. Review articles should be labeled "review." Empirical articles will have a methods section. Skip meta-analyses for now. Also, have you read the book "The Body Keeps the Score"?

One of the classes you are likely to take to get your license will help you read those studies. I took Applying Psychological Science this past semester, and the entire course was spent reading studies and evaluating and breaking them down. That should help a lot!

If you want the gist of an article, read it in this order: abstract -> methods -> discussion. If you don't fully get the methods, that's OK; they're just there to give you a general idea of what the researchers did. The discussion section usually summarizes the findings and gives you the takeaways of the study.

If you want more of the theoretical background and lit review that's where you read the intro.

It takes practice to know what everything means and to follow along, but if you just want the gist of the findings, the abstract and discussion sections are usually quite helpful. I remember that at first I could not make sense of the results sections at all. The discussion is where those results are interpreted and put in a broader context, so sometimes I would read the discussion first and then go back and read more.


Types of slippery slopes

  • Causal slopes, which revolve around the idea that a relatively minor initial action will lead to a relatively major final event.
  • Precedential slopes, which revolve around the idea that treating a relatively minor issue a certain way now will lead to us treating a relatively major issue the same way later on.
  • Conceptual slopes, which revolve around the idea that there is no meaningful difference between two things if it’s possible to get from one to the other through a series of small, nearly indistinguishable steps.

There is significant variation in terms of how different philosophers treat the different types of slippery slopes. However, in general, there are several characteristics that are shared between the different types and the different descriptions of slippery slope arguments:

  • A start-point that is relatively mild.
  • An end-point that is relatively extreme.
  • A process of transitioning from the start-point to the end-point, usually without the ability to stop in the middle.

In the sections below, we will explore each of these types of slippery slopes in more detail.

Causal slippery slopes

A causal slippery slope is an argument that suggests that undertaking an initial action will lead to a chain of events that will culminate in a dramatic outcome. For example, a causal slippery slope could involve arguing that if we help students who struggle by providing them with extra tutoring, then eventually we will simply give perfect grades to all students, regardless of whether they put in any effort.

As such, the basic structure of a causal slippery slope is the following:

“If we do [relatively minor thing] now, then it will cause [relatively major thing] to happen later.”

At least two events are necessary for a causal slippery slope, though any number of events can appear in between them, with each event in the chain occurring directly as a result of the previous one. Accordingly, a causal slippery slope will usually have the following structure in practice:

“If we allow [minor event] to happen now, then [another minor event] might happen later, leading to [a medium event], and finally to the possibility that [major event] will occur.”

These slopes often involve a positive-feedback mechanism, where the initial action in question will set off a chain reaction that reinforces itself. This potential feedback mechanism can be mentioned explicitly by the person proposing the slope, or it can be an implicit part of their argument.

Note: the causal slippery slope is sometimes also referred to as a predictive slippery slope or an empirical slippery slope.

Precedential slippery slopes

A precedential slippery slope is an argument that suggests that if we set the precedent of treating something relatively minor a certain way now, then we will have to treat something relatively major the same way later on. For example, a precedential slippery slope could involve arguing that if we legalize a relatively harmless drug now, then we will also have to legalize a much more harmful drug later.

The basic structure of a precedential slippery slope is:

“If we treat [relatively minor thing] a certain way now, then we will set a precedent that will force us to treat [relatively major thing] the same way later.”

As such, precedential slippery slopes are based on the need to treat similar cases in a consistent manner.

Note: the precedential slippery slope is sometimes also referred to as the fallacy of slippery precedents, in cases where its use is fallacious.

Conceptual slippery slopes

A conceptual slippery slope is an argument that suggests that if it’s possible to transition between two things using a series of small, nearly indistinguishable steps, then there is no meaningful difference between those two things, and they must be treated the exact same way. For example, a conceptual slippery slope could involve arguing that if we allow euthanasia for animals, then there is no reason why we shouldn’t also allow it for people.

As such, the basic structure of a conceptual slippery slope is:

“Since it’s possible to get from [first thing] to [second thing] through a series of small steps, there is no valid way to draw a distinction between them.”

This argument is based on the concept of vagueness and on the sorites paradox (also known as the paradox of the heap). This paradox revolves around the fact that removing a single grain of sand from a heap of sand doesn’t turn it into a non-heap, but that, at the same time, a single remaining grain of sand won’t be considered a heap, which means that at some point, the act of removing sand turned the heap into a non-heap, despite the fact that there is no clear line of demarcation between the two. Accordingly, this type of slippery slope argument often uses language such as “where do you draw the line?”.

Furthermore, this type of slippery slope often involves gradualism or incrementalism, where people’s commitment to a certain concept or course of action is tied to a series of small, closely related steps. Specifically, this occurs when the slippery slope argument suggests that if you take an initial step, then there is no reason for you not to accept the next step, and the one after that, until you reach the final step, which is usually highly negative. These arguments pressure you either to give up on your initial commitment or to demonstrate that there is an inconsistency in your commitments.

Note: because of its association with the sorites paradox and the concept of assimilation, the conceptual slippery slope is sometimes referred to as a sorites slippery slope or as the slippery assimilation fallacy.




How to prevent falling for Jingle-Jangle fallacies? - Psychology

The thing that stuck out most to me in this chapter is the fallacies of emotions. A fallacy is an irrational thought that leads to illogical conclusions and, in turn, to debilitative emotions. These fallacies are caused by our own self-talk. Self-talk is the nonvocal process of thinking; on some level, self-talk occurs as a person interprets another’s behavior (152). There are seven fallacies of emotions that people believe to be true: the fallacy of perfection, the fallacy of approval, the fallacy of shoulds, the fallacy of overgeneralization, the fallacy of causation, the fallacy of helplessness, and the fallacy of catastrophic expectations.

There is no story about the fallacies in this chapter, so I have included a journal entry about the three fallacies that have affected my life the most. That paper defines the fallacies of approval, shoulds, and causation, but not the rest. The fallacy of perfection is the belief that any worthwhile communicator should be able to handle every situation with great confidence and perfect skill (153). The fallacy of overgeneralization takes one of two forms (155). The first is a belief based on a limited amount of evidence, such as when we say, “I’m so stupid! I can’t do anything right!” The second is exaggeration, such as when we tell someone, “You never do anything around here!” They probably do something, just not very often, so we should instead say, “I feel like you don’t help me very often.” The next fallacy is that of helplessness. The fallacy of helplessness is the belief that we have no control over our own satisfaction (157). People who hold it feel helpless about what they can do to be happy in life and believe that outside forces are in total control. The final fallacy is the fallacy of catastrophic expectations (157): the belief that anything bad that can happen will happen.

YouTube Video

This is a trailer for She’s Out of My League. In this movie, a nerdy, awkward young man is noticed by a girl he thinks is perfect. She loses her phone in the airport where he works, and he is the one who finds it and gives it back to her. She then begins a relationship with him. His friends constantly tell him that she is way too perfect for him. He starts trying to live up to her ex-boyfriend, Tim, and begins to think that if he isn’t perfect, she won’t like him anymore. This is an example of the fallacy of perfection.

These fallacies of emotions have helped me understand more about how we work. They have taught me how easily our emotions can cloud our judgment and how much our own self-talk contributes to our emotions and the way we feel.


Fallacies & Pitfalls in Psychology


I've gathered 10 of the most common fallacies and pitfalls that plague psychological testing and assessment, and provided a brief definition and discussion of each.

The list, of course, is far from exhaustive, but these 10 often show up in clinical, forensic, and other psychological assessments and I'm guessing that many if not all of us have gotten tangled up in them at one time or another.

They are:

  • mismatched validity
  • confirmation bias
  • confusing retrospective and predictive accuracy (switching conditional probabilities)
  • unstandardizing standardized tests
  • ignoring the effects of low base rates
  • misinterpreting dual high base rates
  • the perfect conditions fallacy
  • financial bias
  • ignoring the effects of audio-recording, video-recording, or the presence of third-party observers
  • uncertain gatekeeping

These assessment fallacies and pitfalls are discussed in more detail in the articles and other materials on this site, but it seemed worthwhile to draw them together.

Mismatched Validity

Some tests are useful in diverse situations, but no test works well for all tasks with all people in all situations. In his classic 1967 article, Gordon Paul helped move us away from the oversimplified search for effective therapies toward a more difficult but meaningful question: "What treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances?"

Selecting assessment instruments involves similarly complex questions, such as: "Has research established sufficient reliability and validity (as well as sensitivity, specificity, and other relevant features) for this test, with an individual from this population, for this task (i.e., the purpose of the assessment), in this set of circumstances?" It is important to note that as the population, task, or circumstances change, the measures of validity, reliability, sensitivity, etc., will also tend to change.

To determine whether tests are well-matched to the task, individual, and situation at hand, it is crucial that the psychologist ask a basic question at the outset: Why--exactly--am I conducting this assessment?

Confirmation Bias

Often we tend to seek, recognize, and value information that is consistent with our attitudes, beliefs, and expectations. If we form an initial impression, we may favor findings that support that impression, and discount, ignore, or misconstrue data that don't fit.

This premature cognitive commitment to an initial impression--which can form a strong cognitive set through which we sift all subsequent findings--is similar to the logical fallacy of hasty generalization.

To help protect ourselves against confirmation bias (in which we give preference to information that confirms our expectations), it is useful to search actively for data that disconfirm our expectations, and to try out alternate interpretations of the available data.

Confusing Retrospective & Predictive Accuracy (Switching Conditional Probabilities)

Predictive accuracy begins with the individual's test results and asks: What is the likelihood, expressed as a conditional probability, that a person with these results has condition (or ability, aptitude, quality, etc.) X? Retrospective accuracy begins with the condition (or ability, aptitude, quality) X and asks: What is the likelihood, expressed as a conditional probability, that a person who has X will show these test results? Confusing the "directionality" of the inference (e.g., the likelihood that those who score positive on a hypothetical predictor variable will fall into a specific group versus the likelihood that those in a specific group will score positive on the predictor variable) causes many errors.

This fallacious switch typically takes the following form:

People with condition X are overwhelmingly likely to have these specific test results.
Person Y has these specific test results.
Therefore: Person Y is overwhelmingly likely to have condition X.
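The gap between the two directions of inference can be made concrete with Bayes' theorem. The numbers below are illustrative assumptions, not figures from any actual instrument: even when nearly everyone with condition X shows these results, the predictive probability can stay low if X is rare.

```python
# Illustrative (assumed) numbers showing that high retrospective accuracy,
# P(results | X), does not imply high predictive accuracy, P(X | results).
base_rate = 0.01               # assume 1% of this population has condition X
p_results_given_x = 0.95       # retrospective: P(these results | X)
p_results_given_not_x = 0.10   # rate of these results among people without X

# Bayes' theorem: P(X | results) = P(results | X) * P(X) / P(results)
p_results = (p_results_given_x * base_rate
             + p_results_given_not_x * (1 - base_rate))
p_x_given_results = p_results_given_x * base_rate / p_results

print(round(p_x_given_results, 3))  # about 0.088: far from "overwhelmingly likely"
```

Under these assumed numbers, the syllogism's conclusion fails: fewer than one in ten people with these results actually has condition X.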

Unstandardizing Standardized Tests

Standardized tests gain their power from their standardization. Norms, validity, reliability, specificity, sensitivity, and similar measures emerge from an actuarial base: a well-selected sample of people providing data (through answering questions, performing tasks, etc.) in response to a uniform procedure in (reasonably) uniform conditions. When we change the instructions, or the test items themselves, or the way items are administered or scored, we depart from that standardization and our attempts to draw on the actuarial base become questionable.

There are other ways in which standardization can be defeated. People may show up for an assessment session without adequate reading glasses, or having taken cold medication that affects their alertness, or having experienced a family emergency or loss that leaves them unable to concentrate, or having stayed up all night with a loved one and now can barely keep their eyes open. The professional conducting the assessment must be alert to these situational factors, how they can threaten the assessment's validity, and how to address them effectively.

Any of us who conduct assessments can fall prey to these same situational factors and, on a given day, be unable to function adequately. We can also fall short through lack of competence. It is important to administer only those tests for which there has been adequate education, training, and supervised experience. We may function well in one area -- e.g., counseling psychology, clinical psychology, sport psychology, organizational psychology, school psychology, or forensic psychology -- and falsely assume that our competence transfers easily to the other areas. It is our responsibility to recognize the limits of competence and to make sure that any assessment is based on adequate competence in the relevant areas of practice, the relevant issues, and the relevant instruments.

In the same way that searching for disconfirming data and alternative explanations can help avoid confirmation bias, it can be helpful to search for conditions, incidents, or factors that may be undermining the validity of the assessment so that they can be taken into account and explicitly addressed in the assessment report.

Ignoring the Effects of Low Base Rates

Ignoring base rates can play a role in many testing problems but very low base rates seem particularly troublesome. Imagine you've been commissioned to develop an assessment procedure that will identify crooked judges so that candidates for judicial appointment can be screened. It's a difficult challenge, in part because only 1 out of 500 judges is (hypothetically speaking) dishonest.

You pull together all the actuarial data you can locate and find that you are able to develop a screening test for crookedness based on a variety of characteristics, personal history, and test results. Your method is 90% accurate.

When your method is used to screen the next 5,000 judicial candidates, there might be 10 candidates who are crooked (because about 1 out of 500 is crooked). A 90% accurate screening method will identify 9 of these 10 crooked candidates as crooked and one as honest.

So far, so good. The problem is the 4,990 honest candidates. Because the screening is wrong 10% of the time, and the only way for the screening to be wrong about honest candidates is to identify them as crooked, it will falsely classify 10% of the honest candidates as crooked. Therefore, this screening method will incorrectly classify 499 of these 4,990 honest candidates as crooked.

So out of the 5,000 candidates who were screened, the 90% accurate test has classified 508 of them as crooked (i.e., 9 who actually were crooked and 499 who were honest). Every 508 times the screening method indicates crookedness, it tends to be right only 9 times. And it has falsely branded 499 honest people as crooked.
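The arithmetic of the example can be reproduced directly; all figures (the 1-in-500 base rate, 90% accuracy, 5,000 candidates) are the hypothetical numbers given above.

```python
# Reproducing the judicial-screening arithmetic from the example above.
candidates = 5000
crooked_rate = 1 / 500   # hypothetical: 1 in 500 judges is crooked
accuracy = 0.90          # the screening method is right 90% of the time

crooked = round(candidates * crooked_rate)        # 10 crooked candidates
honest = candidates - crooked                     # 4990 honest candidates

true_positives = round(accuracy * crooked)        # 9 correctly flagged
false_positives = round((1 - accuracy) * honest)  # 499 honest people flagged

flagged = true_positives + false_positives
print(flagged)                             # 508 candidates flagged in total
print(round(true_positives / flagged, 3))  # 0.018: hit rate among those flagged
```

The last line is the point of the example: although the method is "90% accurate," fewer than 2% of the candidates it flags are actually crooked, because the honest group is so much larger.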

Misinterpreting Dual High Base Rates

As part of a disaster response team, you are flown in to work at a community mental health center in a city devastated by a severe earthquake. Taking a quick look at the records the center has compiled, you note that of the 200 people who have come for services since the earthquake, there are 162 who are of a particular religious faith and are diagnosed with PTSD related to the earthquake, and 18 of that faith who came for services unrelated to the earthquake. Of those who are not of that faith, 18 have been diagnosed with PTSD related to the earthquake, and 2 have come for services unrelated to the earthquake.

It seems almost self-evident that there is a strong association between that particular religious faith and developing PTSD related to the earthquake: 81% of the people who came for services were of that religious faith and had developed PTSD. Perhaps this faith makes people vulnerable to PTSD. Or perhaps the association is subtler: the faith might make it easier for people with PTSD to seek mental health services.

But the inference of an association is a fallacy: religious faith and the development of PTSD in this community are independent factors. Ninety percent of all people who seek services at this center happen to be of that specific religious faith (i.e., 90% of those who developed PTSD and 90% of those who came for other reasons), and 90% of all people who sought services after the earthquake (i.e., 90% of those of that particular faith and 90% of those who are not) developed PTSD. The two factors appear to be associated because both have high base rates, but they are statistically unrelated.
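The independence claim can be checked directly from the 2x2 table in the example: the joint proportion of "faith and PTSD" equals the product of the two marginal proportions, which is exactly what statistical independence means.

```python
import math

# The 2x2 table from the earthquake-clinic example.
faith_ptsd, faith_other = 162, 18
nonfaith_ptsd, nonfaith_other = 18, 2
total = faith_ptsd + faith_other + nonfaith_ptsd + nonfaith_other  # 200 clients

p_faith = (faith_ptsd + faith_other) / total   # 0.9 of clients share the faith
p_ptsd = (faith_ptsd + nonfaith_ptsd) / total  # 0.9 of clients have PTSD
p_both = faith_ptsd / total                    # 0.81 of clients have both

# Independence: P(faith and PTSD) == P(faith) * P(PTSD)
print(math.isclose(p_both, p_faith * p_ptsd))  # True: the factors are unrelated
```

The apparent 81% "association" is just 0.9 × 0.9; each factor's high base rate explains the joint figure entirely.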

Perfect Conditions Fallacy

Especially when we're hurried, we like to assume that "all is well," that in fact "conditions are perfect." If we don't check, we may not discover that the person we're assessing for a job, a custody hearing, a disability claim, a criminal case, asylum status, or a competency hearing took standardized psychological tests and completed other phases of formal assessment under conditions that significantly distorted the results. For example, the person may have forgotten the glasses they need for reading, be suffering from a severe headache or illness, be using a hearing aid that is not functioning well, be taking medication that impairs cognition or perception, have forgotten to take needed psychotropic medication, have experienced a crisis that makes it difficult to concentrate, be in physical pain, or have trouble understanding the language in which the assessment is conducted.

Financial Bias

It is a very human error to assume that we are immune to the effects of financial bias. But a financial conflict of interest can subtly -- and sometimes not so subtly -- affect the ways in which we gather, interpret, and present even the most routine data. This principle is reflected in well-established forensic texts and formal guidelines prohibiting liens and any other form of fee that is contingent on the outcome of a case. The Specialty Guidelines for Forensic Psychologists, for example, state: "Forensic psychologists do not provide professional services to parties to a legal proceeding on the basis of 'contingent fees,' when those services involve the offering of expert testimony to a court or administrative body, or when they call upon the psychologist to make affirmations or representations intended to be relied upon by third parties."

Ignoring Effects of Audio-recording, Video-recording or the Presence of Third-party Observers

Empirical research has identified ways in which audio-recording, video-recording, or the presence of third parties can affect the responses (e.g., various aspects of cognitive performance) of people during psychological and neuropsychological assessment. Ignoring these potential effects can create an extremely misleading assessment. Part of adequate preparation for an assessment that will involve recording or the presence of third parties is reviewing the relevant research and professional guidelines.

Uncertain Gatekeeping

Psychologists who conduct assessments are gatekeepers of sensitive information that may have profound and lasting effects on the life of the person who was assessed. The gatekeeping responsibilities exist within a complex framework of federal (e.g., HIPAA) and state legislation and case law as well as other relevant regulations, codes, and contexts.

The following scenario illustrates some gatekeeping decisions psychologists may be called upon to make. This passage is taken verbatim from Ethics in Psychotherapy & Counseling, 4th Edition:

A 17-year-old boy comes to your office and asks for a comprehensive psychological evaluation. He has been experiencing some headaches, anxiety, and depression. A high-school dropout, he has been married for a year and has a one-year-old baby, but has left his wife and child and returned to live with his parents. He works full time as an auto mechanic and has insurance that covers the testing procedures. You complete the testing.

During the following year you receive requests for information about the testing from:

  • The boy's physician, an internist
  • The boy's parents, who are concerned about his depression
  • The boy's employer, in connection with a worker's compensation claim filed by the boy
  • The attorney for the insurance company that is contesting the worker's compensation claim
  • The attorney for the boy's wife, who is suing for divorce and for custody of the baby
  • The boy's attorney, who is considering suing you for malpractice because he does not like the results of the tests

Each of the requests asks for the full formal report, the original test data, and copies of each of the tests you administered (for example, instructions and all items for the MMPI-2).

To which of these people are you ethically or legally obligated to supply all information requested, partial information, a summary of the report, or no information at all? Which requests require having the boy's written informed consent before information can be released?

It is unfortunately all too easy, in the crush of a busy schedule or a hurried lapse of attention, to release data to those who are not legally or ethically entitled to it, sometimes with disastrous results. Clarifying these issues while planning an assessment is important because if the psychologist does not clearly understand them, it is impossible to communicate the information effectively as part of the process of informed consent and informed refusal. Information about who will or won't have access to an assessment report may be the key to an individual's decision to give or withhold informed consent for an assessment. It is the psychologist's responsibility to remain aware of the evolving legal, ethical, and practical frameworks that inform gatekeeping decisions.


Intention-Behavior-Gap and Intention Implementations

An article by David Scholz and Leonie Kott (translated by Anne Ridder)

Do you know situations like this? You are trying to live in a more environmentally friendly way by reducing the waste you produce. But then you step into your local bakery and receive your order in three different bags, with pieces of cake separated by plastic layers and your coffee to go served with a plastic lid, and you have failed your new intention. Why does that happen so often, even though you were determined to follow your goal? One explanation is the “intention-behavior gap”. In the following article, we will explain the phenomenon in detail and give some tips and tricks for handling the obstacles on the way to achieving your intentions.

What is the Intention-Behavior-Gap?

The intention-behavior gap describes why people sometimes fail to behave in the way they would like to, even in the presence of strong existing intentions (Sheeran & Webb, 2016). Research has found that most people’s intentions (e.g. I want to produce less waste) are only weakly associated with their actions (e.g. telling the baker that I do not need a bag) (Webb & Sheeran, 2006).

How Does the Gap Develop?

Sheeran and Webb (2016) provide an overview of the current state of research, which shows that there are two main reasons why intentions often do not result in behavior. First, the quality of the intention needs to be assessed. The quality is determined by three factors:

a) The concreteness and difficulty of the goal
Does the goal reflect a concrete behavior? Is it easy to reach?

b) The background of the intention
Is the intention for change internal or external? Is it in accordance with your values?

c) The stability (time) of the intention
Has the intention already existed for a long time?

Even intentions of good quality do not always lead to the intended behavior. Self-regulatory difficulties make up the second criterion for successfully implementing your intentions. This means there can be problems with implementation because intentions are not acted on, not continued, or not finished.

a) Not acting:
A common problem is that you simply forget what you wanted to do amid all the distractions of daily life. Even if you do not forget your intentions, it is easy to miss opportunities where you could have acted on them. This happens especially when those opportunities come up irregularly or are too short to notice (e.g. the vendor packs your goods too fast). Finally, an intention can also fail because you were not adequately prepared: if you bought a bread box, for example, but did not bring a bag to the store, you cannot refuse the one offered.

b) Not continuing:
Even if you once succeeded in acting on your intention, it can still be difficult to keep behaving that way. One reason is that you do not keep track of your progress and thus quickly forget your intention. Other reasons may be that negative thoughts or emotions arise towards the behavior (e.g. I do not always want to receive special treatment) or that old habits come up again (e.g. I just said yes to a bag automatically). Lastly, you might simply lack self-control, since you need a lot of willpower throughout the day (so-called ego depletion).

c) Not finishing:
Two problems might come up: either you have the feeling that you have almost accomplished your goal and stop putting in effort, or you did reach the goal but do not stop putting in effort. The former can mean that the goal is never accomplished; the latter can mean that you lack the time and energy to pursue new goals.

Furthermore, there are behaviors that do not bring you closer to your goal but that you still engage in because a lot of time, work, and resources have already been invested (the so-called “sunk-cost fallacy”). For example, someone keeps an extra freezer in the basement even though there is no real use for it; asked why, they think, “But I spent so much money on it.”

Since so much can go wrong, it might seem surprising that intentions ever result in behavior. Sheeran and Webb (2016) stress that these obstacles are neither unavoidable nor irresolvable. Quite often you can implement a new intention on the first try, and if you do not succeed directly, you can look for the reason for your failure and make a plan to succeed the next time.

How to Overcome the Gap?

To overcome the obstacles named above, the authors propose so-called “if-then plans” or “implementation intentions” as a useful method. Especially for promoting environmentally friendly behavior, implementation intentions are said to be the most effective and easiest method (Grimmer & Miles, 2016). One explanation is that they help the preferred behavior occur automatically in critical situations.

The method works like this:

  1. You start by imagining what you want to achieve as vividly as possible. You can close your eyes and try to envisage exactly how it feels and looks.
  2. What are the exact steps to achieve that goal? Where do you have to take those steps? (e.g. bakery store, fast food, etc.)
  3. Sometimes things do not work out as planned, and certain things hinder us from doing what we planned to do. What are the main obstacles to achieving your goal? (e.g. It feels inconvenient for me to ask, the product is packed too fast, etc.)
  4. How do you want to act in this situation the next time you encounter it? (e.g. I directly mention that I do not need a bag when I order, thus I do not miss the opportunity.)
  5. Formulate your if-then plan in advance. (e.g. If I order at the bakery, (then) I directly mention that I do not need a bag. If the situation feels uncomfortable, (then) I remain as friendly as possible.)

Another effective method, which can be combined with if-then plans, is close observation of your own progress toward the goal. To make it even more effective, you can write down your progress (e.g., in a diary) or inform your social contacts about it (e.g., tell a friend how proud you are that you finally achieved your goal).

If-then plans mostly help you get started with the preferred behavior and act accordingly. Keeping track of your progress helps you keep acting and recognize when the goal is achieved.

Drawing on the example mentioned earlier, the wish to reduce waste, we would like to show how these techniques can be used successfully.

Let's suppose you decide to lower your waste production. Now take 5-10 minutes to give it some thought.

First of all, it makes sense to think about the quality of your intention:

  1. Goal setting: "Producing less waste" might sound easy but is highly unspecific. The decision not to produce any waste at all (zero waste) would be very precise but hardly attainable. It is up to you to find your best personal compromise. How about starting with the goal: I do not want to buy fruit and vegetables that are packed in plastic!
  2. Origin: Rethink your goal. Is it personally relevant, or did someone else ask you to pursue it? If the goal is not yet personally relevant to you, it would make sense to learn more about the topic or to select another goal. For example, you might consider your car use, which has a higher impact on the environment than your plastic waste. Why not start there? For our example, let's assume you still decide to produce less plastic waste.
  3. Stability of your intention: Try to figure out whether you are just acting in the heat of the moment (e.g., you just saw a documentary about the consequences of plastic in our oceans) or whether the topic has already occupied you for a longer time. If you catch yourself basing your intention on the heat of the moment, it makes sense to give the topic some more attention first.

Well done, we now have a solid intention. Next, it is important to think about the obstacles you might encounter along the way so you can act against them directly. Think about situations in which you will be able to act on your new intention, but also about what might hinder you from acting, and make some preparations. A good opportunity would be a supermarket, since supermarkets sell lots of fruit and vegetables. Which obstacles could you encounter? Are you missing an alternative to plastic bags? You could, for example, buy a fabric bag instead.

It is best to formulate an if-then plan to ensure that you will act as planned and continue doing so. Examples: If I go to the supermarket, (then) I take a fabric bag with me. Or: If I can choose between cheap fruit packed in plastic and more expensive fruit not packed in plastic, (then) I will buy the ones without plastic. Or: If the only option is vegetables packed in plastic, (then) I will not buy them. Or: If I need fruit or vegetables, (then) I will go to the farmers market (and will decline the offer of a plastic bag). Or, if you really do need the vegetables: If there are only vegetables packed in plastic, (then) I will complain to the manager.

Closely observe your progress in whatever form suits you. If you are rather structured and barely need the help of friends or family, you could mark the days your plastic waste bin is full in a calendar. Do you feel it is more helpful to be supported by others? Tell your friends about your achievements. Don't forget to keep track and to recognize when a goal is truly achieved. Under ideal circumstances, your if-then plan makes the behavior automatic, and you can move on to other behavioral patterns.
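The calendar idea above can also be sketched in a few lines of code: a daily yes/no record plus a summary of how often the plan was followed. Everything here (the dates, the function names, the record format) is a made-up illustration, not a prescribed tool.

```python
# Illustrative progress tracker: one yes/no entry per day.
progress = {}  # maps a date string to whether the plan was followed that day

def log_day(date, followed_plan):
    """Record whether the if-then plan was followed on a given day."""
    progress[date] = followed_plan

def success_rate():
    """Fraction of logged days on which the plan was followed."""
    if not progress:
        return 0.0
    return sum(progress.values()) / len(progress)

# Three hypothetical days of tracking:
log_day("2022-01-01", True)
log_day("2022-01-02", False)
log_day("2022-01-03", True)

print(f"{success_rate():.0%}")
# prints: 67%
```

A running percentage like this makes the two functions of tracking visible: it shows whether you are continuing to act, and it gives a concrete signal for when the goal can be considered achieved.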

Grimmer, M., & Miles, M. P. (2017). With the best of intentions: A large sample test of the intention-behaviour gap in pro-environmental consumer behaviour. International Journal of Consumer Studies, 41(1), 2-10.

Sheeran, P., & Webb, T. L. (2016). The intention–behavior gap. Social and Personality Psychology Compass, 10(9), 503-518.

Webb, T. L., & Sheeran, P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132(2), 249-268.


The seven deadly sins of statistical misinterpretation, and how to avoid them

Winnifred Louis receives funding from the Australian Research Council and the Social Sciences and Humanities Research Council of Canada. She teaches statistics at the University of Queensland, and is also a social psychologist, a peace psychologist, and a longstanding activist for causes such as environmental sustainability and anti-racism.

Cassandra Chapman receives funding in the form of a PhD scholarship from the Department of Education and Training of the Australian government. She previously worked in marketing and fundraising for various not-for-profits and still collaborates with organisations in that sector.


Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.


What is “asking for it”?

On the face of it, it may seem logical to some people that a woman who's drunk or wearing a short skirt is somehow "asking for it." In reality, though, it's absurd. To use an utterly ridiculous example, let's say that my sexual kink is whacking men's bums with a rubber chicken. Let's also say that men wearing sunglasses are my biggest turn-on. Furthermore, let's say I were to accost a man wearing sunglasses and get busy on his bum with my rubber chicken. Is there any chance that anyone would say that the rubber chickening is the man's fault for wearing sunglasses, or that wearing sunglasses constituted implied consent? Is there anything at all that he could possibly do to make people conclude the rubber chickening was his fault? I highly doubt it.

So, we don’t blame the victim in this albeit ridiculous scenario, but people blame a woman for being sexually assaulted because she’s wearing a miniskirt? Even though one scenario is ridiculous and the other happens far too often, they are fundamentally the same. They’re both situations where the perpetrator is fully responsible for their actions, and the victim is 0% responsible.

The cognitive biases that tend to underlie victim-blaming may be easy to fall into, but that’s no excuse for anyone not to check in with themselves and reflect on who it is that’s actually done something wrong.

The Psychology Corner has an overview of terms covered in the What Is… series, along with a collection of scientifically validated psychological tests.

Visit the MH@H Resource Pages hub to see other themed pages from Mental Health @ Home.
