Information

Are there any studies (fMRI scans, etc) showing why some people (supposedly) are more open to hypnotism?


In this article published in Harper's magazine back in 1996, journalist David Foster Wallace described his experience with hypnotist Nigel Ellery (see the last section of the article, titled "THE HEADLINE ENTERTAINMENT"). Mr Ellery was providing part of the entertainment on the cruise ship that David was on.

As to hypnotism, David writes that:

First off, we learn that not everyone is susceptible to serious hypnosis: Nigel Ellery puts the Celebrity Show Lounge's whole 300-plus crowd through some simple in-your-seat tests to determine who is suggestibly "gifted" enough to "participate" in the "fun" to come.

In the footnote to that paragraph just quoted, David writes that:

I, who know from hard experience that I am hypnotizable, think about sports statistics and deliberately flunk a couple of the tests to avoid getting up there.

The question I wanted to ask is, are there any studies looking into why some people would be more susceptible to hypnotism?

What I am particularly interested in is whether this has something to do with a habitual activation or inactivation of parts of the brain, or perhaps the levels of some neuromodulators?


Overall, while cognitive neuroscience theories of how hypnotic states are produced are developing, a reasonable search of Google Scholar, Web of Science, and Scopus turns up no established cognitive neuroscience basis for individual differences in hypnotic susceptibility.

There is at least some evidence to suggest that highly hypnotizable participants exhibit different EEG activity patterns (Sabourin et al., 1990). A later review of the hypnosis literature proposed an attentional explanation for the difference in hypnotic behavior between highs and lows (Crawford, 1994). The review specifically proposes that:

highly hypnotizable persons (highs) possess stronger attentional filtering abilities than do low hypnotizable persons, and that these differences are reflected in underlying brain dynamics.

Finally, a still later study more specifically implicated the anterior cingulate cortex, the thalamus, and the ponto-mesencephalic brainstem in the production of hypnotic states (Rainville et al., 2002), but the authors make no mention of differences between highly and less hypnotically susceptible individuals. This seems to be a trend in the literature I reviewed, suggesting the concept may have been abandoned.

References

  • Crawford, H. J. (1994). Brain dynamics and hypnosis: Attentional and disattentional processes. International Journal of Clinical and Experimental Hypnosis, 42(3), 204-232.
  • Rainville, P., Hofbauer, R. K., Bushnell, M. C., Duncan, G. H., & Price, D. D. (2002). Hypnosis modulates activity in brain structures involved in the regulation of consciousness. Journal of Cognitive Neuroscience, 14(6), 887-901.
  • Sabourin, M. E., Cutcomb, S. D., Crawford, H. J., & Pribram, K. (1990). EEG correlates of hypnotic susceptibility and hypnotic trance: Spectral analysis and coherence. International Journal of Psychophysiology, 10(2), 125-142.

McCabe & Castel Experiment

McCabe and Castel wrote three brief (fake) scientific articles that appeared to be typical reports like those you might find in a textbook or news source, all with brain activity as part of the story. In addition to the one you read (“Watching TV is related to math ability”), the others had these titles: “Meditation enhances creative thought” and “Playing video games benefits attention.”

All of the articles had flawed scientific reasoning. In the “Watching TV is Related to Math Ability” article that you read, the only “result” that is reported is that a particular brain area (a part of the parietal lobe) is active when a person is watching TV and when he or she is working on math. The second half of the next sentence (from “suggesting” onward) is where the article goes too far: “This area of the brain has been implicated in other research as being important for abstract thought, suggesting that both tv watching and arithmetic processing may have beneficial effects on cognition.”

The fact that the same area of the brain is active for two different activities does not “suggest” that either one is beneficial or that there is any interesting similarity in mental or brain activity between the processes. The final part of the article goes on and on about how this supposedly surprising finding is intriguing and deserves extensive exploration.

Try It

The researchers asked 156 college students to read the three articles and rate them for how much they made sense scientifically, as well as rating the quality of the writing and the accuracy of the title.

Everybody read exactly the same articles, but the picture that accompanied each article differed, creating three experimental conditions. For the article in the brain image condition, subjects saw one of the following brain images to the side of the article:

Figure 1. Subjects in the experimental condition were shown ONE of the applicable brain images with each article they read.

Graphs are a common and effective way to display results in science and other areas, but most people are so used to seeing graphs that (according to McCabe and Castel) they should be less impressed by them than by brain images. The figures below show the graphs that accompanied the three articles in the bar graph condition. The results shown in the graphs were made up by the experimenters, but what they show is consistent with the information in the article.

Figure 2. Participants in the bar graph condition were shown ONE of the bar graphs with each article they read.

Finally, in the control condition, the article was presented without any accompanying figure or picture. The control condition tells us how the subjects rate the articles without any extraneous, but potentially biasing, illustrations.

Procedure

Each participant read all three articles: one with a brain image, one with a bar graph, and one without any illustration (the control condition). Across all the participants, each article was presented approximately the same number of times in each condition, and the order in which the articles were presented was randomized.
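To make that counterbalancing concrete, here is a minimal sketch (my own illustration under assumed details, not McCabe and Castel's materials) of how each participant could be assigned one article per condition, with article-condition pairings rotated across participants and presentation order shuffled:

```python
import random

ARTICLES = ["TV & math", "Meditation & creativity", "Video games & attention"]
CONDITIONS = ["brain image", "bar graph", "control"]

def assignment(participant_id: int) -> dict:
    """Pair each article with a condition, rotating the pairing by participant."""
    shift = participant_id % 3
    rotated = CONDITIONS[shift:] + CONDITIONS[:shift]
    pairs = list(zip(ARTICLES, rotated))
    random.shuffle(pairs)  # randomize the order in which the articles are presented
    return dict(pairs)

for pid in range(3):
    print(pid, assignment(pid))
```

Rotating the pairings this way means that, across many participants, each article ends up in each condition about equally often, which is what the procedure above requires.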

Ratings

Immediately after reading each article, the participants rated their agreement with three statements: (a) The article was well written, (b) The title was a good description of the results, and (c) The scientific reasoning in the article made sense. Each rating was on a 4-point scale: (score=1) strongly disagree, (score=2) disagree, (score=3) agree, and (score=4) strongly agree. Remember that the written part of the articles was exactly the same in all three conditions, so the ratings should have been the same if people were not using the illustrations to influence their conclusions.

Before going on, let’s make sure you know the basic design of this experiment. In other words, can you identify the critical variables used in the study according to their function?


Results for (a) Accuracy of the Title and (b) Quality of the Writing

The first two questions for the participants were about (a) the accuracy of the title and (b) the quality of the writing. These questions were included to ensure that the participants had read the articles closely. The experimenters expected that there would be no differences in the ratings across the three conditions for these questions. For the question about the title, their prediction was correct. Subjects gave about the same rating to the titles in all three conditions, agreeing that they were accurate.

For question (b) about the quality of the writing, the experimenters found that the two conditions with illustrations (the brain images and the bar graphs) were rated higher than the control condition. Apparently just the presence of an illustration made the writing seem better. This result was not predicted.

Results for (c) Scientific Reasoning Assessment

The main hypothesis behind this study was that subjects would rate the quality of the scientific reasoning in the article higher when it was accompanied by a brain image than when there was a bar graph or there was no illustration at all. If the ratings differed among conditions, then the illustrations—which added nothing substantial that was not in the writing—had to be the cause.

Try It

Use the graph below to show your predicted results of the experiment. Move the bars to the point where you think people generally agreed or disagreed with the statement that “the scientific reasoning in the article made sense.” Higher bars mean that the person believes the reasoning in the article is better, and a lower bar means that they judge the reasoning as worse. Click on “Show Results” when you are done to compare your prediction with the actual results.

RESULTS: The results supported the experimenters’ prediction. The scientific reasoning in the Brain Image condition was rated as significantly higher than in either of the other conditions. There was no significant difference between the Bar Graph condition and the Control condition.

Conclusions

McCabe and Castel conducted two more experiments, changing the stories, the images, and the wording of the questions in each. Across the three experiments, they tested almost 400 college students and their results were consistent: participants rated the quality of scientific reasoning higher when the writing was accompanied by a brain image than in other conditions.

The implications of this study go beyond brain images. The deeper idea is that any information that symbolizes something we believe is important can influence our thinking, sometimes making us less thoughtful than we might otherwise be. This other information could be a brain image or some statistical jargon that sounds impressive or a mathematical formula that we don’t understand or a statement that the author teaches at Harvard University rather than Littletown State College.

In a study also published in 2008, Deena Weisberg and her colleagues at Yale University conducted an experiment similar to the one you just read about. [2] Weisberg had people read brief descriptions of psychological phenomena (involving memory, attention, reasoning, emotion, and other similar topics) and rate the scientific quality of the explanations. Instead of images, some of the explanations included entirely superfluous and useless brain information (e.g., “people feel strong emotion because the amygdala processes emotion”), while others contained no such brain information. Weisberg found that a good explanation was rated as even better when it included a brain reference (which was completely irrelevant). When the explanation was flawed, students were fairly good at catching the reasoning problems UNLESS the explanation contained the irrelevant brain reference. In that case, the students rated the flawed explanations as being good. Weisberg and her colleagues call the problem “the seductive allure of neuroscience explanations.”


#4 Record and Celebrate Your Small Victories

We learned earlier that making progress is a great way to enhance our mood and motivation. The problem is, our brain isn’t built to detect progress, but to detect anything that could threaten our lives (which makes sense from a survival standpoint).

This is called the brain’s negativity bias — the brain’s mechanism of fading out all the positive stuff and just showing us the negative. You could do 99 out of 100 things right, and your brain would still focus on the one thing you did wrong.

That’s a bit of a problem from a mood perspective, because we tend to overlook the progress we’re making and instead focus on the setbacks — which will lower our mood and make us think we’re total losers.

Thankfully, there’s a way to mitigate the effects of our brain’s negativity bias by making a list of our small daily accomplishments. Every time you catch yourself doing something right, you register it and celebrate it as making progress. This keeps you focused on the positive and gives your mood and productivity a steady boost.


Since the mid 1990s, the intriguing dynamics of the brain at rest have been attracting a growing body of research in neuroscience. Neuroimaging studies have revealed distinct functional networks that slowly activate and deactivate, pointing to the existence of an underlying network dynamics emerging spontaneously during rest, with specific spatial, temporal and spectral characteristics. Several theoretical scenarios have been proposed and tested with the use of large-scale computational models of coupled brain areas. However, a mechanistic explanation that encompasses all the phenomena observed in the brain during rest is still to come.

In this review, we provide an overview of the key findings of resting-state activity covering a range of neuroimaging modalities including fMRI, EEG and MEG. We describe how to best define and analyze anatomical and functional brain networks and how unbalancing these networks may lead to problems with mental health. Finally, we review existing large-scale models of resting-state dynamics in health and disease.

An important common feature of resting-state models is that the emergence of resting-state functional networks is obtained when the model parameters are such that the system operates at the edge of a bifurcation. At this critical working point, the global network dynamics reveals correlation patterns that are spatially shaped by the underlying anatomical structure, leading to an optimal fit with the empirical BOLD functional connectivity. However, new insights coming from recent studies, including faster oscillatory dynamics and non-stationary functional connectivity, must be taken into account in future models to fully understand the network mechanisms leading to the resting-state activity.
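As a rough illustration of what "operating at the edge of a bifurcation" can mean in such models, here is a minimal sketch assuming one Stuart-Landau (Hopf normal form) oscillator per brain area, coupled through an invented connectivity matrix; the equations and parameter values are illustrative only, not those of any specific published model:

```python
import numpy as np

# N toy "brain areas", each a Stuart-Landau oscillator dz/dt = (a + i*omega)z - |z|^2 z,
# diffusively coupled through a random matrix C. With the bifurcation parameter a just
# below zero, every node sits at the edge of its Hopf bifurcation, and noise-driven
# fluctuations become shaped by C, mimicking structured "functional connectivity".
rng = np.random.default_rng(0)
N = 10
C = rng.random((N, N))
np.fill_diagonal(C, 0)
C /= C.sum(axis=1, keepdims=True)                 # normalize coupling strengths
a = -0.02                                         # bifurcation parameter, near critical
omega = 2 * np.pi * rng.uniform(0.03, 0.07, N)    # intrinsic frequencies (~0.05 Hz)
g, sigma, dt, steps = 0.5, 0.02, 0.1, 20000

z = np.zeros(N, dtype=complex)
traj = np.empty((steps, N))
for t in range(steps):
    coupling = g * (C @ z - z)                    # diffusive coupling term
    dz = (a + 1j * omega) * z - np.abs(z) ** 2 * z + coupling
    noise = sigma * np.sqrt(dt) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    z = z + dt * dz + noise
    traj[t] = z.real                              # real part ~ simulated regional signal

fc = np.corrcoef(traj.T)                          # model "functional connectivity"
print(np.round(fc[:3, :3], 2))
```

In this kind of model, tuning the bifurcation parameter toward the critical point is what produces noise-driven correlation patterns shaped by the coupling matrix, which is the "optimal working point" idea described above.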


Recording Electrical Activity in the Brain

In addition to lesion approaches, it is also possible to learn about the brain by studying the electrical activity created by the firing of its neurons. One approach, primarily used with animals, is to place detectors in the brain to study the responses of specific neurons. Research using these techniques has found, for instance, that there are specific neurons, known as feature detectors, in the visual cortex that detect movement, lines and edges, and even faces (Kanwisher, 2000).

Figure 4.14 EEG Study. A participant in an EEG study with a number of electrodes placed around his head.

A less invasive approach, and one that can be used on living humans, is electroencephalography (EEG), as shown in Figure 4.14. The EEG is a technique that records the electrical activity produced by the brain’s neurons through the use of electrodes that are placed around the research participant’s head. An EEG can show if a person is asleep, awake, or anesthetized because the brainwave patterns are known to differ during each state. EEGs can also track the waves that are produced when a person is reading, writing, and speaking, and are useful for understanding brain abnormalities, such as epilepsy. A particular advantage of EEG is that the participant can move around while the recordings are being taken, which is useful when measuring brain activity in children, who often have difficulty keeping still. Furthermore, by following electrical impulses across the surface of the brain, researchers can observe changes over very fast time periods.
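As a rough illustration of how "brainwave patterns" are typically quantified, the sketch below estimates power in the classic EEG frequency bands from a synthetic single-channel trace (the sampling rate and signal are invented; real recordings need artifact rejection and many channels):

```python
import numpy as np
from scipy.signal import welch

# Synthetic 30 s single-channel "EEG": a 10 Hz alpha-like rhythm plus noise.
fs = 250                                           # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)         # power spectral density

def band_power(f, psd, lo, hi):
    """Integrate the PSD over a frequency band."""
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(f, psd, lo, hi) for name, (lo, hi) in bands.items()}
print({k: f"{v:.2e}" for k, v in powers.items()})  # alpha dominates in this toy signal
```

A person in deep sleep shows relatively more delta and theta power, while an awake, alert person shows more alpha and beta, which is the sense in which the patterns "differ during each state."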


Lesions Provide a Picture of What Is Missing

An advantage of the cadaver approach is that the brains can be fully studied, but an obvious disadvantage is that the brains are no longer active. In other cases, however, we can study living brains. The brains of living human beings may be damaged, for instance, as a result of strokes, falls, automobile accidents, gunshots, or tumors. These areas of damage are called lesions. On rare occasions, brain lesions may be created intentionally through surgery, such as that designed to remove brain tumors or (as in split-brain patients) to reduce the effects of epilepsy. Psychologists also sometimes intentionally create lesions in animals to study the effects on their behavior. In so doing, they hope to be able to draw inferences about the likely functions of human brains from the effects of the lesions in animals.

Lesions allow the scientist to observe any loss of brain function that may occur. For instance, when an individual suffers a stroke, a blood clot deprives part of the brain of oxygen, killing the neurons in the area and rendering that area unable to process information. In some cases, the result of the stroke is a specific lack of ability. For instance, if the stroke affects the occipital lobe, then vision may suffer, and if the stroke affects the areas associated with language or speech, these functions will suffer. In fact, our earliest understanding of the specific areas involved in speech and language was gained by studying patients who had experienced strokes.


John M. Harlow – Phineas Gage – public domain.

Areas in the frontal lobe of Phineas Gage were damaged when a metal rod was blasted through his skull. Although Gage lived through the accident, his personality, emotions, and moral reasoning were affected. The accident helped scientists understand the role of the frontal lobe in these processes.

It is now known that a good part of our moral reasoning abilities are located in the frontal lobe, and at least some of this understanding comes from lesion studies. For instance, consider the well-known case of Phineas Gage, a 25-year-old railroad worker who, as a result of an explosion, had an iron rod driven into his cheek and out through the top of his skull, causing major damage to his frontal lobe (Macmillan, 2000). Although remarkably Gage was able to return to work after the wounds healed, he no longer seemed to be the same person to those who knew him. The amiable, soft-spoken Gage had become irritable, rude, irresponsible, and dishonest. Although there are questions about the interpretation of this case study (Kotowicz, 2007), it did provide early evidence that the frontal lobe is involved in emotion and morality (Damasio et al., 2005).

More recent and more controlled research has also used patients with lesions to investigate the source of moral reasoning. Michael Koenigs and his colleagues (Koenigs et al., 2007) asked groups of normal persons, individuals with lesions in the frontal lobes, and individuals with lesions in other places in the brain to respond to scenarios that involved doing harm to a person, even though the harm ultimately saved the lives of other people (Miller, 2008).

In one of the scenarios the participants were asked if they would be willing to kill one person in order to prevent five other people from being killed. As you can see in Figure 3.14 “The Frontal Lobe and Moral Judgment”, they found that the individuals with lesions in the frontal lobe were significantly more likely to agree to do the harm than were individuals from the two other groups.

Figure 3.14 The Frontal Lobe and Moral Judgment

Koenigs and his colleagues (2007) found that the frontal lobe is important in moral judgment. Persons with lesions in the frontal lobe were more likely to be willing to harm one person in order to save the lives of five others than were control participants or those with lesions in other parts of the brain.


1.3 Conducting Research in Social Psychology

Social psychologists are not the only people interested in understanding and predicting social behavior or the only people who study it. Social behavior is also considered by religious leaders, philosophers, politicians, novelists, and others, and it is a common topic on TV shows. But the social psychological approach to understanding social behavior goes beyond the mere observation of human actions. Social psychologists believe that a true understanding of the causes of social behavior can only be obtained through a systematic scientific approach, and that is why they conduct scientific research. Social psychologists believe that the study of social behavior should be empirical — that is, based on the collection and systematic analysis of observable data.


Algorithms used to study brain activity may be exaggerating results

Interpretation of functional MRI data called into question.

Interesting. Is this software that runs with/on the machine, or software that is sold separately to analyze fMRI data after it has been exported from the scanner? Is it the device manufacturer (ie, GE, Siemens, Toshiba, Philips) that is at fault, or some other software companies?

I am not surprised that someone found bugs in complex software that potentially invalidates a lot of work. One should tread very carefully when complex software models are used as they likely contain some well hidden bugs and flaws that were never picked up during testing.

So, the whole bullshit that talking on a phone, with a Bluetooth, while driving is as dangerous as driving drunk (because fMRI told us so) is debunked.

Now NHTSA has to walk back their recommendation that talking on the phone while driving be banned.

The software in the study is post-processing (i.e., after the scan is over) software. It's been made by different academic centers (or collaborations thereof).

This is a massive win for advocates of open data.

Reproducibility has so often been something that is "nice in theory, impossible in practice" - particularly with the drive towards shorter and shorter publications, with more hype and less detail. The recent push towards open data is a critical change in this trend.

Out of curiosity, does anyone know how much of the MRI hardware and software is open source?

I hope you're not implying the same with long term global climate models. I'll get my pitchfork ready just in case.

Is there more hype than what's been actually demonstrated?

The particular 15 year old bug they found was in 3dClustSim.
https://afni.nimh.nih.gov/pub/dist/doc/ . stSim.html


You know, the other way to look at this is to consider that a dead salmon might just be smarter than you think.

Or at least smarter than my congress critter or other elected official.

(Which indicates the people who put said critter in Congress must not be very bright.)

MRIs are used for other reasons. I get an MRI annually because of a disease I have that is not related to blood flow.

The staff told me that I can bring a CD to listen to while being scanned and to drown out that aggravating noise the MRI makes. I thought about stripping the audio from a porno movie and burning it to CD. I would like to see how that scan compares to others.

Out of curiosity, does anyone know how much of the MRI hardware and software is open source?

MRI hardware and software from the major vendors (Siemens, GE, Philips, Toshiba, Bruker, Hitachi) are not open source, and even with a collaboration agreement it's not always easy to get access to the source code of MRI sequences and reconstruction code. Post-processing software is often developed by academic institutions and is thus mostly open-source or at least free-to-use.

However, for image reconstruction, there is an NIH project that focuses on developing an open-source framework that works across MRIs from different vendors ( https://gadgetron.github.io/ ). The idea is that only a bare-bones image reconstruction code runs on the vendor side, which sends the raw data to the open-source software. The reconstructed images are then sent back to the vendor software and displayed on the scanner console. This is still a pretty young project, but it already works fairly well and gets better every day. It's a great project that is really important for scientists who want to make cutting-edge sampling & reconstruction (e.g. compressed sensing) techniques available for clinical studies.

But the good news for the week is enough helium has been found to continue doing these experiments and more.

MRIs are used for other reasons. I get an MRI annually because of a disease I have that is not related to blood flow.

The staff told me that I can bring a CD to listen to while being scanned and to drown out that aggravating noise the MRI makes. I thought about stripping the audio from a porno movie and burning it to CD. I would like to see how that scan compares to others.

fMRI and MRI are different (though related) things. Hospitals generally don't have fMRI machines, only regular MRIs. fMRIs are pretty much only used by research institutions.


MRI scanners use different applications and settings for those applications to acquire images. For example, if your physician is concerned about a stroke, your MR exam will include acquisitions that are sensitive to the physiological changes that occur in your brain due to a stroke. Different settings would be used if you are attempting to determine if you have a brain tumor. A typical MRI exam includes acquiring several different image volumes, each with different settings. A reasonable analogy is a camera. With a single camera, you can change settings such as aperture, shutter speed, focal length, etc. in order to get images which look different. MRI is a relatively young imaging technique (~40 years), so there is a ton of research developing new techniques to use MRIs.

Here and here are some introductions to the image contrasts in MRIs.

fMRI rapidly acquires a time series of images (e.g. a 3D movie) of your brain using an acquisition that is sensitive to changes in blood oxygenation (BOLD contrast). These data are acquired synchronized to some task that the patient is asked to do. For example the patient may be asked to alternately tap the fingers on their right hand and then not tap fingers to determine what part of the brain is used to control their fingers. The post-processing software then looks at the signal in each voxel over time to look for changes in signal intensity correlated to the task that the subject was performing.
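A toy version of that last step, correlating each voxel's time series with a block-design task regressor, might look like this (invented numbers, not any vendor's or package's actual pipeline):

```python
import numpy as np

# 120 volumes at TR = 2 s, alternating 20 s "tap" / 20 s "rest" blocks (10 volumes each).
rng = np.random.default_rng(1)
n_vols = 120
block = np.tile(np.r_[np.ones(10), np.zeros(10)], n_vols // 20)   # task regressor

n_voxels = 1000
bold = rng.standard_normal((n_voxels, n_vols))    # noise everywhere
bold[:50] += 0.8 * block                          # pretend the first 50 voxels are "active"

# Pearson correlation of every voxel's time series with the task regressor.
bc = block - block.mean()
xc = bold - bold.mean(axis=1, keepdims=True)
r = (xc @ bc) / (np.linalg.norm(xc, axis=1) * np.linalg.norm(bc))
print("mean r, 'active' voxels:  ", round(float(r[:50].mean()), 2))
print("mean r, 'inactive' voxels:", round(float(r[50:].mean()), 2))
```

Real pipelines convolve the task with a hemodynamic response function and fit a general linear model rather than a plain correlation, but the underlying idea is the same.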

Many MRI scanners are capable of acquiring fMRI images. However, many institutions do not pay for or use the capability to do this because it is not part of their clinical practice and they do not do fMRI research. Most fMRI scanning is done as part of research studies either on a system dedicated to research or on a system shared with a clinical practice.

As the first author of the dead salmon work I can honestly say there was no evidence the dead salmon might be smarter than we thought. : ) For the curious, here are links to our papers that argued for improved multiple comparisons correction in fMRI studies:
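To get an intuition for why that correction matters, here is a toy simulation (mine, not from those papers): run a t-test at tens of thousands of noise-only "voxels" and count how many pass an uncorrected threshold versus a simple Bonferroni-corrected one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_voxels, n_subjects = 50_000, 16
data = rng.standard_normal((n_voxels, n_subjects))     # pure noise, no real effect anywhere

t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)
print("uncorrected p < 0.05:  ", int((p < 0.05).sum()), "false positives")
print("Bonferroni p < 0.05/N: ", int((p < 0.05 / n_voxels).sum()), "false positives")
```

Cluster-extent methods such as the one implemented in 3dClustSim sit between these two extremes in stringency, which is why a bug in their assumptions can shift false-positive rates across a large body of studies.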

It is academic software that's generally freely downloadable:

Note that the software supplied with the machines is generally used for anatomical and clinical imaging, working with the images as they are acquired.

This kind of software is used to track levels of activation in groups of brain cells, usually while a subject performs a specific task, which typically isn't a requirement of clinical imaging. It's reasonably computationally intensive, so it is used post-acquisition, often requiring many hours of processing.

"Their focus was on scans of resting brains, typically used as a control in studies of specific activity. While these might show specific activities in some of the subjects (like moving a leg or thinking of dinner), there should be no consistent, systemic signal across a population of people being scanned."

Human brains have common structures. Is it so hard to believe that almost all of us would have one part of the brain that's active while the brain is in a resting state?

Maybe the part that controls resting?

This study seems deeply flawed to me unless somebody can give me a rationale for this assumption.

This goes to prove my hypothesis that actually well-educated people don't automatically believe studies, even ones from prestigious institutions. Non-educated don't really believe studies, either. That leaves only the semi-educated to believe studies. Basically, the TED crowd.

I'm a student of the history of science and technology, and there is a long-standing pattern of someone making a significant breakthrough in some area, one that makes real progress, but then it rapidly mutates into some kind of unquestioned panacea.

Electricity, serums, clean water, evolutionary theory, radium/x-rays/nuclear-medicine, antibiotics: anything new and highly successful in one focused area gets turned into either a panacea or an oracle. And I'm not talking about the quacks and cons that always crop up, but actual mainstream medical and scientific researchers who just get carried away.

I knew that fMRI had gone over the cliff when I read a bunch of studies about emotions: anger, jealousy, and even love. Then they started talking about being able to examine lying. My BS detector went off immediately, because how can you actually create a real emotion under laboratory conditions? How can you establish a baseline? How can you create reproducible intensity?

None of the foundations of basic scientific method were possible in these types of studies. Moreover, owing to the expense, such "studies" usually had fewer than a dozen participants. So, most of the studies based on brain scanning are in the "that seems interesting, let's collect more data" stage of science at best.

But we can't seem to control the temptation.

I think that eventually, brain scanning combined with insights from evolutionary psychology will revolutionize humanity by eventually giving us a real understanding of how we perceive, think and make decisions. What has been the province of philosophy for 2,500 years will become a sound reproducible science.

Some historical examples for the interested:

The whole low-fat, low cholesterol
In the 1950s, cardio disease suddenly increased, especially among upper-class white males. Something had to be done!

The whole low-fat, low cholesterol concept turned out to be overwrought and at times fraudulent. For the first 15 years they told people to get their cholesterol down, but then it turned out that triglycerides were almost as important, and then it took another 15 years to figure out that with cholesterol it was the ratio of high-density to low-density that mattered, not the absolute numbers.

We tried Statins but the predicted benefits of brute force lowering of cholesterol never appeared.

Oh, and then they found out there wasn't any actual correlation between fat intake and heart disease in many populations, and in fact some people get great cardiac chemistry from living on butter wrapped in bacon.

The best guess today about the uptick in cardiovascular disease is: the great switchover from manual labor to office work that tipped over in 1955, possibly combined with the significant increase in consumption of seed oils.

Computer "Oracles"
The problem appeared in computers big time in the 60s and 70s, when serious processing spread into large government organizations and corporations. Systems analysts had to fight what they called the "oracle" delusion, in which decision makers would believe the output of a program simply because it came from a computer. (I wonder if that's where Larry got the idea for his company's name?)

The problem arose because computers allowed very large organizations to really correlate the masses of data they piled up, and allowed for very intensive and accurate statistical modeling within a short enough time frame that the models could be used to make good decisions in real time. For decision makers drowning in data, the CORRECT outputs of the computers seemed almost magical. It became all too easy to slip into the lazy idea that if "the computer said so, it must be right."

Driving home the concept of Garbage In, Garbage Out took a decade, not to mention teaching decision-makers about bugs. I heard some tall tales from the old-school pen-and-slide-rule guys who taught me CompSci in the early 80s.

Supposedly, IBM fought the problem with practical jokes, putting in gibberish data that produced outputs so bad that decision-makers just couldn't accept them. I've been told tales, which will never be substantiated, that at some point in the 60s RAND simply stopped providing computer analysis and instead went back to masses of people with slide rules, because the decision-makers would be more critical of manual calculations. Since RAND did the game theory for nuclear deterrence, that was likely a wise decision. It's probably apocryphal, but I imagine it was something someone seriously thought about. Rear Admiral Grace Murray Hopper did allude, in a lecture I attended, to putting nonsense into some of her reports to make sure the brass was paying attention. Gutsy for a military officer; of course, her command flag was the Skull and Crossbones, and they used to steal copper wire right out of the walls if red tape stopped her.

And yep, Climatology is going to take a serious hit as well. I expect it will follow nearly the exact trajectory of Eugenics. Both have the same political and social demographics, e.g. upper-income, college-educated, white, Protestant, politically trusting in the state to engineer the behavior of the people (which wasn't really a Left-Right thing back then but more linked to education and income).

Both were based on the best scientific theories and data of their day. From 1890-1935, no mainstream scientist questioned the long-term danger of genetic degeneration. The only real debate among scientists was whether government could actually carry out a Eugenics program or whether the attempt would cause more damage, e.g. basically a debate on political theory instead of biology. The only non-scientific opponents were the Catholic Church and, in America, marginal evangelical sects like the Southern Baptists, who opposed eugenics not on grounds of science but on theology. Score one for redneck creationists, albeit by accident.

In the end, the Nazis did the world a favor by jumping on a bandwagon that was 50 years old at the time. Eugenics was founded on just a couple of mistakes: Lord Kelvin underestimated the age of the earth, making natural selection seem an impossible evolutionary mechanism; instead, it was assumed that evolution must be driven by some rapid progressive force. And genes were modeled as mixing pigments instead of being quantum (discrete chunks), even though Mendel had shown otherwise in the mid-1800s, and his work was rediscovered in the early 1900s.

It took the discovery of radiation and its effect on measuring the age of the earth, the spread of the idea of discrete genes (1930s), and the development of population genetics to undo those mistakes. Even so, it wheezed on into the 1960s, and IIRC Sweden (which went for Eugenics big time for some reason) still had laws on the books.

Climatology will eventually suffer the same fate. It has all the earmarks of scientific overreach while at the same time being of keen interest to elites who like to control ordinary people. While scientifically plausible (as parts of Eugenics still are), if you look in detail at the actual predictions you see that the different climate models do not all produce nearly the same PATTERN of warming, which they should if all the independent models were accurately simulating the same phenomena.

I think the intensity of the climate change issue will gradually downgrade. Weather-dependent energy sources will eventually bring a major power grid down, and technological change will make carbon emissions fall off a cliff.

I could toss onto the pile the "Energy Crisis" of '73-'83, a bizarre episode in which wage and price controls and bad tax policies triggered monopolistic conditions that created a shortage of oil in markets, but which almost the entire planet convinced itself actually meant a physical depletion of oil in the earth itself. The hysteria caused famines, wars, and the SUV; CAFE laws mandating small vehicles still kill hundreds every year while having perverse effects like adding to suburban sprawl. If you'd told someone in 1980 that in 1986 a gallon of gas would be cheaper than a gallon of distilled water, or that in 2000 we'd all be driving big SUVs, you'd have been locked up as nuts; but in the end, it just took the signing of one executive order and the whole phantom hysteria collapsed.

I suppose the best we can say is that the more a concept provides a rationale to control other humans, the more it's likely to be oversold and enjoy runaway credibility. In this respect, brain scanning presents special dangers because it allows us to invent pseudo-scientific pathologies for people we want to control.

During the 20th century, it was common for leftists to use Marxism and Freudianism, both completely unscientific gibberish, to pathologize those who didn't like Stalin or didn't believe that toddlers were sexually attracted to their parents.

I already read a paper some time back that accused "working class" Philadelphians of suffering from irrational, pathological judgments because, I'm completely serious, these supposedly ignorant people thought that a person having sex with chicken carcasses right out of the refrigerator indicated something amiss with that person. Not kidding. Peer reviewed and everything.

One does wonder what comes next. I really haven't tracked the issue, but I bet there are all kinds of "studies" of brain scans out there showing how rational and logical leftists are and how irrational and illogical non-leftists are, all "proven" by brain scans with bad software. I imagine googling "fMRI" and "Social Justice" will bring up some interesting "studies."


Study found that MRI scans can detect ADHD and its subtypes! In this essay I will -

I'm taking a biological psychology course in school right now, and some of the content covered in this last week's lectures is in regards to the medulla oblongata in the brain. My prof mentioned the ventral tegmental area (VTA) and the substantia nigra and how they process rewards, motivation-related behavior, and addiction, in addition to other cognitive functions. As a member of this sub, I of course related it to my own diagnosis and stopped watching my lecture to try and find research that may indicate how these areas may be affected in individuals with ADHD. As a result, we now have an informal report on my research instead of me actually doing my work. This is exactly the reason why I am behind in my material.

Studies that combined resting-state functional connectivity (RSFC), a technique based on spontaneous brain activity captured during brief (3–6 min) magnetic resonance imaging (MRI) scanning that is extensively used to evaluate functional coupling between brain regions, with positron emission tomography (PET), a technique that allows to measure molecular targets involved in DA signaling, have corroborated a role of DA in brain functional connectivity across striatocortical pathways

… recent functional MRI studies have shown increases in functional connectivity in VTA and in its targets … Disruption of these DA pathways are implicated in several psychiatric disorders including schizophrenia, ADHD, and addiction.

Translation: Using fMRI, RSFC, and PET scans, researchers were able to observe functional differences between individuals with and without ADHD. The study included children, adolescents, and adults with a sample size of over 1300 people.
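To make "functional connectivity" concrete, here is a rough sketch of the underlying computation (invented region names and signals, not the cited study's pipeline): pairwise correlation of regional time series, where high correlations are read as functional coupling.

```python
import numpy as np

rng = np.random.default_rng(3)
n_timepoints = 180                                  # e.g. ~6 min at TR = 2 s
shared = rng.standard_normal(n_timepoints)          # common fluctuation driving a "network"

regions = {
    "VTA": shared + 0.7 * rng.standard_normal(n_timepoints),
    "ventral striatum": shared + 0.7 * rng.standard_normal(n_timepoints),
    "visual cortex": rng.standard_normal(n_timepoints),   # unrelated region
}

names = list(regions)
fc = np.corrcoef(np.array([regions[n] for n in names]))   # functional connectivity matrix
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} - {names[j]}: r = {fc[i, j]:.2f}")
```

Group differences in these correlation values, rather than raw "activation", are what RSFC studies like the one quoted above compare between people with and without ADHD.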

I also found this article 2 which states:

Alterations in the shape of the left temporal lobe, bilateral cuneus, and areas around the left central sulcus distinguish ADHD from typically developing patients.

… identify patients with ADHD with an average accuracy of 73.7% and to discriminate between ADHD inattentive and ADHD combined subtypes with 80% accuracy.

Translation: There are physical differences in the brain that can differentiate brains with or without ADHD. You can see those differences on MRI scans and further conclude which subtype the subject may have with a relatively high degree of accuracy.

Note: The second study has a much smaller sample size (170 boys) and the research does not state that the physical differences are exclusive to these areas of the brain.
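For a sense of what an accuracy figure like 73.7% involves computationally, here is a toy sketch (simulated features and labels, not the study's data or method) of cross-validated classification of ADHD versus typically developing subjects from brain-derived feature vectors; the 170-subject size just echoes the study's sample, everything else is made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_group, n_features = 85, 40                    # 170 "subjects", 40 shape/volume measures
controls = rng.normal(0.0, 1.0, (n_per_group, n_features))
adhd = rng.normal(0.3, 1.0, (n_per_group, n_features))   # small simulated group difference
X = np.vstack([controls, adhd])
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]   # 0 = control, 1 = ADHD

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```

Reported accuracies describe how well a classifier separates the two groups on held-out subjects under cross-validation.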

So this means I was right in thinking that the ventral tegmental area and the substantia nigra were heavily involved in this diagnosis, but I am also bestowed with the knowledge that ADHD has the potential to be diagnosed with physical scans!

Considering that ADHD is diagnosed through subjective testing and the diagnoses from psychiatrists and doctors can vary wildly, this is another incredible step in heading towards getting proper acknowledgement for those that need it most. I know many of us have been faced with criticism for suggesting a possible ADHD diagnosis and for some, it's really hard to find a professional that will actually listen and get you what you need. Accusations for "chasing drugs", being told "everyone loses motivation, learn to get through it like the rest of us", psychiatrists/therapists/doctors rolling their eyes when you suggest the possibility of ADHD, being misdiagnosed and getting improper treatment, the list goes on.

It has been known for a while now that ADHD is due to abnormal function of certain neurological pathways (neuroreceptors, dopamine, serotonin, you know the drill). But it's so hard to educate people that have prejudices against neurological disorders when you have to give an entire biology lecture just to set the table. Unless you "have physical proof", skeptics won't believe it or even take you seriously. It's different from issues, like a broken bone or cancer, where treatments like surgery can fix or remove visible abnormalities. You can't zip open your brain and point out where the necrotic area is, it's (largely) chemical imbalances.

I was only just able to convince my dad literally last week that depression isn't "just in your head." My sister had confessed that she got diagnosed with depression and wanted an opinion on taking meds with therapy or just the therapy. He shot down my sister's idea of even considering taking medication and said therapy is useless anyway. Despite my dad being a highly educated engineer who prides himself on only believing cold hard facts, the fact that he instantly shot down a professional diagnosis was pretty surprising. Naturally, I began to debate with him (a common occurrence between us, and I'm pretty sure I get my ADHD from him) and I was trying to explain the chemical imbalances that cause mental illnesses, but he was adamant in his belief. That is, until I pulled up literal brain scans from my college textbook. Here are some similar scans for context 3. When I showed him this picture and explained that these showed the activity levels of the different neuronal and chemical activity in these regions, that's when he finally believed it. He is now more open to pharmaceutical treatment for mental illnesses, but we've still got a long road ahead.

Just in my own interaction with my dad, it's pretty obvious how effective visual examples are in justifying treatment and diagnoses for mental illnesses. There are people that have supposedly been diagnosed with ADHD themselves yet they are also critical and demeaning about this illness. 4. There are also others that will state that there have been studies that show there are no discernible differences in the white and gray matter volume between controls and subjects with ADHD 5. However, delving deeper into the contents of this argument will result in another long winded statement so I will skip this. Feel free to research gray and white matter and read the paper yourself though. I've attached all the linked articles and the relevant papers below.

If this becomes a widely adopted form of ADHD diagnosis, we can reach new levels of acceptance as a society. The scans would allow for objective results, and when used in tandem with psychiatric consultations, the chances of bad results would be considerably lower than with a sole subjective consultation. In an ideal world, this method being implemented into our current system could allow for better acceptance from society, make it easier to convert skeptics, and get people to take this illness more seriously.

Research paper the PET scan article is based on (depression scans): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5514428/

Paper on gray and white matter volume between controls and those with ADHD: https://pubs.rsna.org/doi/10.1148/radiol.2017170226

TLDR: There are physical differences between normal brains and those of us with ADHD, and it can be observed on MRI (and other) scans. There is a high degree of accuracy and scans can even distinguish the subtypes. ADHD diagnosis has the potential to be less subjective and be taken more seriously!


Psychology paper 1

Intro: define neuroplasticity
Brain plasticity refers to the brain's ability to change and adapt because of experience. Research has demonstrated that the brain continues to create new neural pathways and alter existing ones in response to changing experiences.

Background information: According to Maguire, the role of the hippocampus is to facilitate spatial memory, in the form of navigation. From previous studies (pre-Maguire) it was impossible to know whether differences in brain anatomy are predetermined, or whether the brain is susceptible to plastic changes, in response to environmental stimulation - in this case driving a taxi.
Taxi drivers undergo extensive training, known as 'The Knowledge' and therefore make an ideal group for the study of spatial navigation.
Aim: To examine whether structural changes could be detected in the brain of people with extensive experience of spatial navigation.
Method: Structural MRI scans were obtained. 16 right-handed male London taxi drivers participated; all had been driving for more than 1.5 years. Scans of 50 healthy right-handed males who did not drive taxis were included for comparison. The mean age did not differ between the two groups.
Results: 1) Increased grey matter was found in the brains of taxi drivers compared with controls in two brain regions, the right and left hippocampi. The increased volume was found in the posterior (rear) hippocampus.
2) Changes with navigation experience - A correlation was found between the amount of time spent as a taxi driver and volume in the right posterior hippocampus.

The sample was made up of only males. It was a double blind study and the participants were either injected with scopolamine - a drug that blocks acetylcholine receptor sites - or a placebo. They were then asked to play a virtual reality game in which they had to remember how to get to a certain place in the game. Once they found where the "pole" was that they were looking for, they would be "put" at a new starting point and asked to find the pole again. They were in an fMRI while carrying out the task so that brain activity could be observed.

The biological approach has some special problems with regard to informed consent. First, the biological approach uses animals which cannot actually give consent. In addition, biological researchers often do studies of people who have mental illness or brain damage. It could be argued that these participants may not be able to understand what they are agreeing to. Finally, often biological research is rather complex and may not be understood by the average person, making "informed consent" difficult.

One study that raises questions about informed consent is the study of HM by Milner. HM had severe amnesia as a result of an operation which was done to stop epileptic seizures. HM had both retrograde amnesia (he couldn't remember what happened before the operation) and he had anterograde amnesia (he couldn't create new memories). Milner carried out a case study and found that the hippocampus plays a key role in the transfer of episodic and semantic memories from short-term to long-term memory.

As HM could not remember giving consent, this study is ethically problematic. HM was asked to give consent throughout the experiment, but it is not clear that he really understood what was happening or who Milner actually was. Originally consent was given by HM's mother and then later by his caretakers. However, there is a concern that HM may not have been able to take advantage of his right to withdraw either because he did not understand or he forgot.

Since we often create strong memories of things that have frightened us, McGaugh & Cahill wanted to study the effect of adrenaline on the creation of emotional memories. They had participants watch a series of slides while listening to a story. In one group, the story was uninteresting. The second group heard a very traumatic story about a young boy who was in an accident and had his feet severed. After two weeks, the participants came back and were asked to answer a series of questions about the slides. Those in the more emotionally arousing condition remembered more than those in the boring condition.

To test the role of adrenaline, they repeated this procedure but gave the participants beta-blockers that interfere with the release of adrenaline. It was hypothesized that if adrenaline is blocked, then the amygdala would not be able to produce emotional memories. It appears that this was the case. The group that took beta-blockers remembered no more detail about the slides than the group that heard the boring story.

One potential human pheromone is androstadienone, found in male semen and sweat. Zhou et al (2014) wanted to see if androstadienone influenced human sexual behaviour. To do this, they carried out an experiment with a sample of heterosexual men and women and gay men and lesbian women.

An example of this was a study by Newcomer, who wanted to see the effect of stress on verbal declarative memory. When we are stressed we secrete a hormone called cortisol. Newcomer's hypothesis was that high levels of cortisol would prevent memory formation. To test the hypothesis, participants were randomly allocated to one of three conditions: a low dose of cortisol, a high dose of cortisol or a placebo - a pill that they thought was cortisol but was not. The experiment took place over 10 days with four different measures of the participants' ability to immediately recall a piece of prose that was read to them. The experiment was a double-blind study - the participants did not know which group they were in, and the researcher also did not know which participants had been assigned to each group.

As a result of the Human Genome Project, psychologists have moved beyond simple twin studies and can now look at the role of a specific gene in a behaviour. They often look at how different genetic mutations may play a role in a behaviour. Caspi et al (2003) examined the role of the 5-HTT gene in depression; known as a "serotonin transporter" gene, it regulates the level of serotonin in the synapse. Psychologists believe that serotonin plays a role in mood and therefore plays a role in human depression. The long allele is the "normal" allele; the short allele is the mutation. Caspi wanted to test if people who inherit two short versions of the 5-HTT gene are more likely to develop major depression after a stressful life event than people with two long alleles.

Caspi used a sample of over 800 New Zealand 26-year-olds. The study was a prospective, longitudinal study. Participants were divided into three groups: Group 1 had two short alleles, Group 2 had one short and one long allele, and Group 3 had two long alleles. The participants were asked to fill in a "Stressful life events" questionnaire. They were also assessed for depression.

One study of the role of genetics in depression is the study by Kendler of monozygotic (MZ) and dizygotic (DZ) twins. Researchers study MZ twins because they have identical DNA, having come from a single fertilized egg (zygote). DZ twins are from two different fertilized eggs. They are born at the same time but their DNA is as different as any other set of siblings. Psychologists argue that if the concordance rate of MZ twins for a behaviour is significantly higher than the concordance rate for DZ twins, then there is a genetic component to the behaviour. Psychologists also know that although one may have a certain genetic makeup (genotype), not all of the genes that are inherited may be expressed. The idea of gene expression is that sometimes an individual may have a predisposition to a behaviour as a result of inheriting the gene from a parent, but until a stressor from the environment causes the gene to be expressed, the person will not show that behaviour. Hence, genes alone cannot cause a behaviour - but it is the interaction of genes and the environment that leads to behaviour.

Kendler carried out a study of 42000 MZ and DZ twins to find out whether depression might be inherited. He predicted that the MZ twins would have a greater concordance rate for depression than DZ twins. The researchers found that the MZ had a concordance rate of 0.44, whereas the concordance rate for the DZ twins was only about 0.17. However, what was interesting to the researchers was that even though MZ twins shared the same genotype, their concordance rate was not 100%.

Intro:
- Evolutionary arguments have been used to explain human mating behaviour. Evolutionary psychologists argue that our behaviours are the result of natural selection - that means that the behaviours that most improve our chances of handing down our genes and producing healthy offspring have an evolutionary advantage.
P1: Aim
- Wedekind (1995) carried out a study to see to what extent MHC alleles play a role in mating behaviour. MHC alleles are responsible for our immune systems. They are inherited from both of our parents - and they are codominant. That means we end up with both immune systems. He argued that our "smell" is based on our MHC and it is best for a woman to choose a mating partner who has a different smell in order to maximize the immune system of her child.
P2: Procedure
- Students were used in the study. The men were asked to wear a t-shirt for two nights. They were also told not to wear any perfume or perfumed soap, to avoid spicy food, smoking and alcohol. They were told not to do anything that would change their natural smell.
- Two days later the women were asked to rank the smell of the t-shirts. They were tested in the second week after the beginning of menstruation when they have a better sense of smell. T-shirts were placed into boxes with a "smelling hole." 3 boxes contained t-shirts from men with the same MHC as the woman, three were different and one was unworn. Every woman rated the shirts for their "pleasantness."

Brewer and Treyens did a study on the impact of schemas on memory. For this experiment, they had 86 university psychology students as participants. They asked each participant individually to wait in an office for a short time while the researcher went to finish the experiment with another participant. Then, after 35 seconds, the researcher came to get the participant and brought them into another room, where they were asked to recall objects in the office. The objects in the office were either congruent or incongruent with an office schema - that is, a mental representation of an office. For example, the office had pencils and a stapler, but there were also objects like a brick and a screwdriver.

The students were asked to remember these objects under three different conditions: a recall condition, a drawing condition and a recognition condition. The researchers found that in the recall and drawing conditions, the participants remembered objects congruent with their schema of an office but did not recall objects that were incongruent. However, in the recognition condition, where they were asked to choose objects from a list, participants were also able to recognize objects that were incongruent with their schema, as they were prompted by the researcher.

Although we may feel as if we retrieve memories intact from our long-term memory store, there is a lot of evidence that memory is an active process of reconstruction. Loftus and Palmer's (1974) two-experiment study into eyewitness testimony illustrates the reconstructive nature of memory, most specifically, the effects of post-event information on memory for a particular event.

In the first experiment 45 participants were shown seven short film clips of traffic accidents. After each film they filled in a questionnaire about what they had seen. The critical question contained the IV: 'About how fast were the cars going when they hit each other?' Different conditions were used, where the verb was changed to 'smashed', 'collided', 'bumped', 'hit' and 'contacted'. Participants had to estimate the speed in miles per hour.

In the follow-up experiment 150 participants were divided into three groups. All watched a short film of a multiple-car accident, and the critical question was, 'How fast were the cars going when they hit each other?' The verb was changed to 'smashed' for the comparison group, and the control group was not asked to estimate the speed.
The participants were asked to return a week later. They were not shown the film again but were asked several questions about the accident in the film. The critical question was, 'Did you see any broken glass?' They had to answer 'yes' or 'no'. The film did not in fact contain any broken glass.

Because availability is affected by factors such as familiarity, media reports and stereotyping, reliance on availability leads to cognitive bias. Tversky and Kahneman (1973) tested the availability heuristic by presenting four lists aurally to two different groups of participants. One list had 19 famous male and 20 less famous female entertainers, and a second had 20 less famous male and 19 famous female entertainers. Lists 3 and 4 followed the same pattern, but this time the names were those of politicians. Participants in group 1 were asked to write down as many names as they could recall after listening to each list, and group 2 was asked whether each list contained more names of males or of females.

One example of an experiment is Loftus & Palmer's study on how leading questions may affect one's memory of an automobile crash. Participants watched a film in which two cars hit one another. The participants were then given a questionnaire with several questions about the accident, but only one question was actually important: it asked the participants how fast the cars were going when the accident occurred. For some participants, the question ended with "when the two cars smashed into each other." For other participants, the word smashed was replaced with bumped, hit, collided or contacted. The IV was the intensity of the verb in the leading question. The DV was the speed that the participants estimated. The researchers used an independent samples design, so each participant experienced only one condition. Otherwise, the participants would have figured out the actual goal of the study and the experiment could not have been carried out. Deception is therefore sometimes used in experiments to avoid participants demonstrating demand characteristics, where they do what they think the researcher wants them to do. As part of the experiment, when the task is completed, the researcher must debrief the participants and reveal any deception.
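As a purely illustrative sketch of how data from an independent samples design like this could be compared (the speed estimates below are invented placeholders, not Loftus & Palmer's data), two verb conditions might be analysed with an unpaired t-test:

```python
# Hypothetical analysis sketch for an independent samples design.
# The mph estimates are made-up placeholder values, not the original data.
from scipy import stats

smashed   = [41, 39, 44, 40, 42, 38, 45]   # one group of participants
contacted = [31, 30, 33, 32, 29, 34, 30]   # a different group of participants

# Each participant experienced only one verb condition, so the groups
# are compared with an independent (unpaired) t-test.
t, p = stats.ttest_ind(smashed, contacted)
print(f"t = {t:.2f}, p = {p:.4f}")
```

The design choice matters here: because the groups are made up of different people, an independent-samples test is used rather than a paired one.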

A classic study that used deception was Loftus & Pickrell's "Lost in the Mall" study. The aim of the study was to see if participants would "create memories" of an autobiographical event that never happened to them. Participants were given four short stories describing childhood events, all supposedly provided by family members, and were asked to try to recall them. Relatives had indeed provided three of the stories; the fourth, describing a time when the participant was lost in a mall as a child, was false. In the study, 25% of the participants said that they remembered this event even though it never actually occurred, and they often described the event in great detail. Loftus concluded that being asked to recall something that didn't happen, but that they thought their family said had happened, can lead to the creation of false memories.

Researchers now know that when adrenaline reaches the brain it activates the amygdala in the limbic system to send a message that something important or dangerous has happened. The amygdala plays a key role in creating emotional memories.

McGaugh & Cahill (1995) did an experiment to study the role of emotional arousal in memory. The participants were divided into groups. Each group saw the same 12 slides but heard a different story. In the first condition, the participants heard a boring story about a woman and her son who paid a visit to the son's father in a hospital, where they watched the staff in a disaster preparation drill. In the second condition, the participants heard a story in which the boy was involved in a car accident where his feet were severed. He was quickly brought to the hospital, where the surgeons reattached the injured limbs; he then stayed in the hospital for a few weeks before going home with his mother. A third group heard the same story as the second group, but they were given beta-blockers, which block the receptor sites for adrenaline in the amygdala. Two weeks later the participants were asked to come back and have their memory tested.

One study that demonstrates the role of one's social identity on behaviour was done by Abrams et al. Abrams wanted to see if being made aware of one's social identity would increase the level of conformity to a group. To do this, he had participants take part in the Asch paradigm. In this test, there is a group of confederates and one naive participant. The group is shown a line and then asked to match it to the line of the same length in a set of three lines. In half of the trials the confederates gave the correct answer; in the other half they did not.

To test the role of social identity, one group of naive participants was told that the other participants were "fellow psychology students from the university." In the other condition, they were told that they were "ancient history students from the competitor university." When they thought the others were their in-group, participants conformed to the incorrect answer almost 50% of the time; when they thought the others were an out-group, they conformed only 5% of the time.

The sample was made up of 3- to 5-year-old children. They were first evaluated to determine their level of aggression. Bandura then used a matched-pairs design to make sure that the different levels of aggression were evenly distributed across the groups. There were three independent variables in this study: whether the children were exposed to an aggressive model or not, the gender of the child, and the gender of the model. The children then watched either a male or a female model act aggressively (bashing the Bobo doll with a baseball bat and yelling at it) or act passively (assembling toys), or they saw no model. This last condition served as the control group, to see what children would do when simply left with the Bobo doll.

The children were then individually invited into a room full of toys. After they saw all the toys, they were told that they were not allowed to play with them since they were for other children. This caused all of the children to feel frustrated. This was important because Bandura wanted to make sure that they all had the same level of arousal.

The results were that all of the children showed some level of aggression towards the Bobo doll. However, the group that saw the aggressive model was the most aggressive. Those in the no-model control condition were second, and those who saw the passive model showed the least aggression. In addition, the boys were the most violent. They tended to imitate both the male and the female models, though they commented that the woman's behaviour was not acceptable, saying "Ladies should not behave that way." Girls tended to imitate the verbal aggression of the male model and imitated the female model more directly. This shows that each gender identified more with the same-sex model.

This study demonstrates SCT. First, the children appear to have learned the behaviour by watching the models. Secondly, since there was no punishment for the models' actions (and it actually looked like they enjoyed it), the children imitated them - they had been vicariously reinforced. Lastly, the fact that they imitated the same-gender model makes sense: since the children identified with the same gender, they would feel that "if he (or she) can do it, so can I" (self-efficacy), and so they were more likely to imitate that model.

Participants were given five minutes to recall as many memories as they could of public events in their lifetime. They were then given a memory questionnaire similar to the one used by Brown & Kulik (1977). The questions included where they had learned of the event, what time of day it was and what they were doing when they heard about it. They were then asked questions about the importance of the event - including how personally important it was, how surprised they were and how often they had spoken about it since it happened. All questionnaires were provided in the native language of the participants.

One study into enculturation is Fagot (1978). This study looked at how parents directly influence gender identity, which, although dependent on biological sex, is also shaped by cultural norms. The aim of this study was to observe parental reactions to behaviour that wasn't deemed appropriate for the child's gender, at least according to American culture at the time. Fagot carried out naturalistic observations of 24 families, 12 with boys and 12 with girls. They found that parents acted more favourably towards their child when the child acted according to gender norms and expectations. Boys were encouraged to play with toys that build strength, while girls were encouraged to play with dolls or dress up.

Psychologists recognize that the process of acculturation can be stressful for people. This is referred to as acculturative stress - the psychological, physiological, and social difficulties of acculturation, often resulting in anxiety or depression. The result is a decrease in one's mental health. This is often experienced by immigrants when they move to a new country and they try to balance the culture in which they were enculturated and the new culture into which they are trying to acculturate.

P1: MRI description
- older method: MRI
- a powerful magnetic field placed around the brain temporarily holds the nuclei of the brain's atoms in one direction. When released, the atoms "wobble" back to their original positions and emit a weak radio frequency signal that can be picked up by a sensitive receiving device.
- shows structural changes in brain matter and is used to investigate tumours or any other possible brain damage.
- However, MRI scans are limited to only showing structural changes and need careful interpretation to prevent false positives.
- cautious when interpreting MRI scans to ensure validity
- claustrophobic participants have issues due to the nature of the MRI scanner ---> affects the number of people that can participate = less representative sample.
- limits the effectiveness of this technique for measuring the relationship between biological factors and behaviour in people with claustrophobia.

P2: Maguire describe
- compared the volume of grey matter in the brains of London taxi drivers with that of a pre-existing sample of matched controls.
- Maguire hypothesized that taxi drivers would show significantly higher volumes of grey matter in their hippocampus, a structure associated with navigational skills.
- MRI: increased grey matter in the brains of taxi drivers compared to controls in both the left and right hippocampi.
- increased volume was found in the posterior hippocampi.
LINK: The results of the MRI allowed Maguire to investigate relationships between biological factors (the size of the hippocampi) and the behaviour (spatial navigation)
- lacking ecological validity. However, until a procedure is developed whereby brain scanning can take place during an everyday activity in the real world, it must essentially remain lab-bound.
- MRI image can take several minutes to form, and slightest movement can affect the validity of the findings.
- Consequently, while an MRI is useful for establishing a relationship between biological factors and behaviour, its inability to show causation, low ecological validity and susceptibility to distortion via movement, make this scanning technique limited in demonstrating a causal relationship with any certainty.

P3: PET scans description
- type of nuclear medicine imaging, use a small amount of radioactive material to diagnose and determine the severity of brain diseases, including cancers and neurological disorders.
- PET scans involve the injection of a radioactive tracer, which appears as a bright colour on the scan in the areas of the brain that are most active in metabolizing glucose during a task.
- the brighter the colour, the more activity.
- can't be used on everyone: some people are allergic to the tracers, and they are not used on children or pregnant women -----> therefore unable to measure relationships between brain and behaviour in these groups -- limitation

P4: study that made use of PET - Raine 1997
- to demonstrate a biological correlation between impulsive behaviour and lack of pre-frontal cortex (PFC) activity.
- sample of 41 murderers (39 men and 2 women) who had pleaded NGRI (not guilty by reason of insanity) and 41 age and sex-matched controls.
- Raine found that the NGRI participants had lower glucose metabolism in their PFC in comparison to the controls.
- It might be inferred from the findings that NGRI murderers do not use their PFC to interpret and respond to non-emotional stimuli (in this case the continuous cognitive task), reacting instead in an emotional manner.
- PET scans: allowed Raine to investigate the link between a biological factor (lack of pre-frontal cortex activity) and behaviour (impulsive behaviour).
- findings wouldn't be possible without PET scans, which could be used in clinical and forensic settings to inform rehabilitation programmes and to prevent crimes to some extent
- by establishing a biological correlate for behaviour, it becomes easier for researchers to understand the reasons behind specific behaviours e.g. unpremeditated murder.

P2.5: Limitations of PET scans
-does not provide a full explanation of all possible influences on the behaviour in question.
- correlational --> researchers are only able to conclude that the two factors (pre-frontal cortex activity and impulsive behaviour) are linked.
= no real evidence to show conclusively that NGRI murderers' crimes are caused by a lack of PFC activity; there could be a huge range of other influences that produced the behaviour e.g. alcohol abuse, upbringing, etc.
- Therefore, PET scans are limited in that, when highlighting a link between biological factors and behaviour, they can only offer a reduced account of a complex behaviour.

1. Evaluate research on localization of function
- Intro: Explain the theory of localization
- P2. Support for theory of localization - Maguire
- P3. Reductionism/ methodological issues of supporting research
Point Research into the strict localization of brain function has been argued to be biologically reductionist.
Explain This refers to the way that biological psychologists attempt to explain complex human behaviour by reducing it down to its basic physical level and explain it in terms of brain structure.
Evidence For example, Maguire suggesting that the posterior hippocampus is responsible for spatial memory is a reductionist approach to explaining memory.
Elaborate Whilst the practice may be considered useful as it can aid understanding of the complex through more simplistic explanations, biological psychology has been criticized for over-simplifying many complex human behaviours by attributing it to specific areas of the brain alone whilst ignoring other contributory factors.
Link This matters because such focus may neglect to consider complexities of the behaviour and result in a very simple, basic explanation at the expense of a more holistic account
- P4. Contrasting theory - law of equipotentiality
- Not all researchers agree with the view that cognitive functions are localized in the brain.
An influential, conflicting view is the law of equipotentiality proposed by Karl Lashley (1950). Lashley claimed that intact areas of the cortex could take over responsibility for specific cognitive functions following injury to the area normally responsible for that function.
Lashley believed that whilst basic motor and sensory functions may be localized in the brain, higher cognitive functions are not.
This suggests that the effects of damage to the brain are better determined by the extent rather than the location of the damage, thus challenging the theory of strict localization.

P5. Research to support contrasting theory - Karl Lashley
However, despite the evidence supporting localization of brain function, there are classical studies that lend support to the opposing theory - that of distribution of brain function. The psychologist Karl Lashley conducted research in the 1920s and onwards attempting to find the part or parts of the brain where learning and memory were localized, a structure he and others called the engram. He trained rats to perform specific maze‐running tasks, and then lesioned varying portions of the rat cortex, either before or after the animals received the training, depending upon the experiment. The amount of cortical tissue removed had specific effects on acquisition and retention of knowledge, but the actual location of the removed brain tissue had no significant effect. This led him to conclude that memory and learning are not localized but are widely distributed across the cortex.

- P6. Issues with distribution of function.
- Weak localization nevertheless exists
Point Some functions are nevertheless localized weakly that is, several brain areas may be responsible for a function but some areas are dominant.
Evidence There were many examples of weakly localized (lateralized) functions in Sperry and Gazzaniga's research.
Explanation Although the left hemisphere was consistently shown to be dominant for language, the right hemisphere was also shown to be capable of understanding some simple language.
Link Scientists have been generally more successful in establishing strict localization for sensory and motor functions than for higher-order cognitive functions such as memory, thinking and learning.

- Conclusion: while some human behaviour can be localised to one area, which supports the theory, it does not necessarily have to be.

2. Discuss limitations of the theory of localization of function.
- Define theory of localization
- There are individual differences in language areas
- Language production may not be confined to Broca's area alone
- Support for theory of localization
- Challenges to the theory of localization
- Reductionism
- Determinism
Point A vital criticism of the biological explanation involved in the theory of localization is its attempt to explain human behaviour purely in terms of physiological factors.
Explain This is known as biological determinism, the belief that all human behaviour is caused and controlled by internal, biological mechanisms such as brain structure.
Evidence For example, Maguire et al (2000) suggesting that the posterior hippocampus is associated with spatial memory is an example of determinism.
Elaborate Such a viewpoint suggests that one's behaviour is solely determined by internal physiology, and, as such, factors beyond one's control. This standpoint neglects to account for the idea of free will.
Link This matters because it suggests that an individual has no capacity to alter their behaviour or any control as an active agent of their actions. This will then have implications for how a person is then viewed and treated by society.

How neuroplasticity works
- Information takes a pathway through the brain, travelling from one neuron to the next via synapses
- When we're presented with new information, new neural pathways begin to form
- Using a neural pathway strengthens it - the more a pathway is used, the stronger the connections between the neurons become.
- If a neural pathway is not used, it becomes weaker (a toy sketch of this "use it or lose it" idea follows this list)
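The sketch below is a toy illustration of the "use it or lose it" principle described in the list above; it is not a model taken from the source or from any specific neuroscience paper, just a way of making the idea concrete.

```python
# Toy "use it or lose it" model: a connection strengthens a little each
# time the pathway is used and decays slightly when it is not.
def update_strength(strength, used, gain=0.1, decay=0.02):
    if used:
        return strength + gain * (1.0 - strength)   # strengthen, saturating at 1
    return strength * (1.0 - decay)                 # weaken gradually when unused

connection = 0.2
for step in range(60):
    connection = update_strength(connection, used=(step < 40))  # practised, then neglected
    if step in (39, 59):
        print(f"after step {step + 1}: strength = {connection:.2f}")
```

The strength climbs while the pathway is practised and then drifts back down once it is neglected, mirroring the two bullet points above.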

The constant rewiring and reorganisation of the brain is the basis of how we learn and adapt to changes in our environment. Plasticity was previously thought to occur only in babies and children. Although plasticity is greatest in the developing brain, it is now widely accepted that it occurs throughout adult life too.

P1 maguire
Maguire et al (2000) studied the brains of London taxi drivers and found a significantly greater volume of grey matter in the posterior hippocampus than in a matched control group. This part of the brain is associated with the development of spatial and navigational skills in humans and other animals. As part of their training, the taxi drivers must pass a complex test called 'the Knowledge', which assesses their recall of the city's streets and possible routes. It appears that the result of this learning experience is to alter the structure of the taxi drivers' brains. It is also noteworthy that the longer they had been in the job, the more pronounced the structural difference was (a positive correlation).

P2: strengths and limitations
A similar finding was observed by Draganski et al (2006), who imaged the brains of medical students 3 months before and after their final exams. Learning-induced changes were seen to have occurred in the posterior hippocampus and the parietal cortex, presumably as a result of studying for the exams. Finally, Mechelli et al (2004) also found a larger parietal cortex in the brains of people who were bilingual compared to matched monolingual controls. However, some psychologists suggest that research investigating the plasticity of the brain is limited. For example, Maguire's research is biologically reductionist and only examines a single biological factor (the size of the hippocampus) in relation to spatial memory. This approach is limited and fails to take into account all of the different biological/cognitive processes involved in spatial navigation, which may limit our understanding. Therefore, while Maguire's research shows that the brain can change in response to frequent exposure to a particular task, some psychologists suggest that a holistic approach to understanding complex human behaviour may be more appropriate.

p2.5 linking paragraph
- besides memory, another area of interest is cognitive development in childhood
- researchers have observed cognitive differences between children from poor families and those from a wealthier background
- this showed that the brain is strongly affected by the environment from a very early age. The results also show that changes in the brain in response to poverty can be mediated by nurturing.

p3 luby et al description
- whether poverty impacted brain dev in early childhood and mediators of this effect
- 145 children were cognitively and socially assessed annually on psychosocial, behavioural and other developmental dimensions for 3-6 years.
- support or hostility of caregivers and stressful life events
- 2 MRI scans: one measuring volumes of white and grey matter of the whole brain, and one of the amygdala and hippocampus.
- summary of findings and applications of these findings to policy
- poverty was associated with less white and grey brain matter and with smaller volumes of the hippocampus and amygdala
- caregiver support or hostility mediated the effect of poverty on both hippocampi, but stressful life events affected the left hippocampus only.
- the finding that exposure to poverty in early childhood impacts brain development at school age shows that neuroplasticity takes place in response to physical and social deprivation as well as learning
- real-life application: the fact that these effects on the hippocampus can be mediated should be the focus

p4 + & -
- Luby's study extends earlier animal studies into the role of nurturing in hippocampal development, and earlier child studies showing that this nurturing effect is independent of income.
- gives a complex picture of the mediating effects of stress and nurturing that needs to be further investigated
- limitation (validity of the findings): the changes in the hippocampus could actually have been a response of the child to the caregiver, not purely the effect of the nurturing itself.
- the study merely shows a positive correlation, not uni-directional cause and effect
- however, the researchers make it clear that an understanding of neuroplasticity is important for the prevention of the long-term effects of childhood poverty on cognition

p5 applications
- social policy
- can be used to counteract the now outdated argument that intelligence is purely genetic, as well as arguments stating a simplistic and unmediated effect of poverty on the brain in childhood.
- knowledge of neuroplasticity could be applied to environmentally-triggered disorders like depression and anxiety, to examine how these brain changes can be interrupted
- given Luby's findings, maybe social support of those with mental disorders would be enough to mediate the brain changes associated with them, as an alternative to medication.

The functional recovery that may occur in the brain after trauma is an example of neural plasticity. This happens when brain function is rewired around a damaged area, usually following rehabilitation. Unaffected areas are usually able to compensate for those areas that are damaged and take over the function of the damaged area. Neuroscientists suggest that this process can occur quickly after trauma (spontaneous recovery) and then slow down after several weeks or months. At this point the individual may require rehabilitative therapy to further their recovery. Understanding the processes involved in plasticity has contributed to the field of neurorehabilitation. Following illness or injury to the brain, spontaneous recovery tends to slow down after a number of weeks, so forms of physical therapy may be required to maintain improvements in functioning. Techniques may include movement therapy and electrical stimulation of the brain to counter the deficits in motor and/or cognitive functioning that may be experienced following a stroke, for instance. This shows that, although the brain may have the capacity to 'fix itself' to a point, this process requires further intervention if it is to be completely successful.

Rogers & Kesner's study aimed to determine the role of acetylcholine in the formation of spatial memory. There is a high concentration of acetylcholine receptors in the hippocampus, an area of the brain responsible for the formation of memories. The study looked at the formation of spatial memory in 30 rats. The rats were randomly allocated to 2 groups. The treatment group was injected with scopolamine, which blocks acetylcholine receptor sites. The control group was injected with a placebo saline solution, as there was a possibility that the injection itself changed the rats' behaviour through adrenaline or stress. They were then put into a maze. The results showed that the treatment group made more mistakes, which suggested they took longer to learn the maze.

1: Troster - ach and encoding of memories
- acetylcholine plays a role in the encoding of memories, but not the retrieval of LTM.
- there were three conditions. Each subject was injected with either a saline solution, a .5 or a .8 mg solution of scopolamine, an acetylcholine antagonist.
-carried out three tests. In the first test, they were asked to recall a list of 14 words. Recall was tested immediately after reading the list and then after 45 minutes.
- The high scopolamine group recalled the least in both conditions. In the second test, participants were given a map of a fake state and asked to memorize the location of the cities. After one minute they were given a blank map and a list of cities and asked to place them on the map.
- Once again, the high scopolamine group did poorly.
- Finally, participants were given a test of memories of famous people and events.
- They found no significant difference in the scores of the three conditions.
- acetylcholine may play a role in the encoding of memory, but not its retrieval.
+ highly standardized which allows other researchers to replicate the findings.
(-)scopolamine has strong side effects, so the researcher and the participant would know whether it was the placebo or not.
- task artificial, may not reflect how memories are usually created.

P2 : correlational
- There is a great deal of controversy over whether biological factors are the cause of a behaviour or whether they are the result of it.
- For example, is the encoding of memories affected by an imbalance of the neurotransmitter, or does the process of encoding memories cause acetylcholine levels to become depleted?
Since a lot of research in biological psychology deals with people who are already exhibiting particular behaviours, it is almost impossible to know whether the neurotransmitter causes the behaviour or if the behaviour causes a change in the levels of the neurotransmitter.
- it is extremely difficult to establish a cause and effect relationship, so researchers are less scientifically sure as to the causes of certain behaviours.

P3: Antonova
- Antonova wanted to see if scopolamine affected activity in the hippocampus, particularly in the creation of spatial memories. - sample: 20 healthy adult males.
- double-blind procedure, with participants randomly allocated to one of two conditions, one in which participants would receive a scopolamine injection, and the other group would receive a placebo.
- At the beginning of the experiment, participants were put into an fMRI scanner while playing a virtual reality game designed to test their ability to create spatial memories. The objective of the game was to navigate through an arena to reach a pole. Once they reached the pole, the screen would go blank for 30 seconds and the participants were told to rehearse how they got to the pole; they would then appear in a different location in the arena and have to find the pole again.
- fMRI: brain activity.
- ppts injected with scopolamine demonstrated a significant reduction in the activation of the hippocampus when compared with the placebo group.
- This suggests that acetylcholine plays an important role in the encoding of spatial memories in humans.

P4: evaluation
- Although there was a higher rate of error in the scopolamine group, the difference between the two groups was not significant. However, there was a significant difference in the activity of the hippocampus between the two groups. This implies that the design of the task itself was not ideal for showing a performance difference, and without the use of scanning technology there would be no way of knowing about the biological differences between the two groups.
- The study was a repeated measures design, which allowed the researchers to eliminate participant variability. The study was also counterbalanced, with some participants doing the scopolamine condition first and some doing the placebo condition first to control for practice effects (a small counterbalancing sketch follows this list).
+ Double-Blind Experiment, preventing researcher bias in the results.
- In order for the results to be considered reliable, the study would need to be replicated, due to the small sample size.
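The following is a minimal counterbalancing sketch for a repeated-measures design like the one above; it is illustrative only, and the participant labels and allocation code are assumptions rather than Antonova's actual procedure.

```python
# Illustrative counterbalancing: half the participants do the scopolamine
# session first, the other half do the placebo session first, so practice
# effects are spread evenly across the two conditions.
import random

participants = [f"p{i:02d}" for i in range(1, 21)]   # 20 participants, as in the study
random.shuffle(participants)

session_orders = {}
for i, person in enumerate(participants):
    if i < len(participants) // 2:
        session_orders[person] = ("scopolamine", "placebo")
    else:
        session_orders[person] = ("placebo", "scopolamine")

print(session_orders["p01"])   # e.g. ('placebo', 'scopolamine')
```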

P5: reductionism
Research into the influence of neurotransmitters has been argued to be biologically reductionist. This refers to the way that biological psychologists attempt to explain complex human behaviour by reducing it down to its basic physical level. Whilst the research may be considered useful as it can aid understanding of the complex through more simplistic explanations, biological psychology has been criticized for over-simplifying many complex human behaviours and ignoring other contributory factors. An advantage of reductionism is that it allows for the operationalisation of variables. This makes it possible to conduct experiments in a way that is meaningful and reliable. Therefore, the reductionist approach gives research greater credibility. However, reductionist approaches have been accused of oversimplifying complex phenomena, leading to a loss of validity. Explanations that operate at the level of a neurotransmitter do not include an analysis of the social context within which the behaviour occurs - and this is where the behaviour in question may derive its meaning. This means that reductionist explanations can only ever form part of an explanation.

P1: lab experiment
The purpose of using laboratory experiments in the biological approach is for researchers to establish a causal relationship between two variables - the independent and the dependent variable. Experiments are based on hypothesis testing - that is, making a measurable and testable hypothesis and then seeing if the results of the study are statistically significant so that the null hypothesis can be rejected. In addition, an experiment must contain at least one group that receives a treatment (the manipulation of an independent variable) and a control group that does not receive the treatment. In a true experiment, participants are randomly allocated to conditions, as in the sketch below.
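A minimal sketch of random allocation to conditions (illustrative only; the participant labels and group sizes are invented, not taken from any particular study):

```python
# Illustrative random allocation for a true experiment: each participant
# has an equal chance of ending up in the treatment or the control group.
import random

participants = [f"participant_{i}" for i in range(1, 31)]
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]    # will receive the manipulation of the IV
control_group   = participants[half:]    # will not receive the treatment

print(len(treatment_group), len(control_group))   # 15 15
```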
Rogers & Kesner conducted a laboratory experiment with the aim of determining the role of the neurotransmitter acetylcholine in spatial memory formation; there are multiple acetylcholine receptors in the hippocampus that play a role in the consolidation of memory. Firstly, the researchers had rats run a simple maze to find food that was placed in one of two corners. After having run the maze, but before memory could be consolidated, the rats were injected with one of two chemicals into their hippocampal region. The first group was injected with scopolamine, which blocks the acetylcholine receptors and thus inhibits the response. The second group was a control group, given a placebo injection of saline solution to make sure that getting an injection does not in itself cause any change in memory. Thereafter, the two groups were placed again into the maze to see how long it would take them to find the food they had previously located. The results show that the scopolamine group took longer and made more mistakes in finding the food, whereas the control group learned faster and made fewer mistakes.
- the neurotransmitter acetylcholine may play an important role in the consolidation and retrieval of spatial memory.

P2 eval labs
+ highly standardized procedure so the study can be replicated by other researchers, and the reliability of the results can be further tested.
+ try to control extraneous variables and randomly allocate participants to conditions, which increases the internal validity, allowing for a cause and effect relationship to be established.
- low ecological validity due to the highly controlled environments, meaning that the results may not reflect behaviour under normal conditions. It is also not always clear to what extent the results of animal research may be applicable to human beings such as from rats to humans in the Rogers & Kesner study.

Another research method in the biological approach is the case study. Case studies are comprehensive investigations of one individual with a particular brain abnormality or damage; they provide situations that cannot be ethically reproduced by researchers in a laboratory under controlled conditions. They are also often carried out longitudinally to observe short-term and long-term effects, where the same variables are investigated in repeated and different types of observations over long periods of time. An independent variable is not manipulated in this type of research and hence no causal relationship can be established. In addition, psychologists study the brain-damaged patient by using triangulation - for example, more than one method, more than one researcher, and different sources of data.
Milner carried out a classic case study on HM on the role of the hippocampus in memory formation. HM sustained a serious head injury when he fell off his bicycle at the age of 7 and, beginning three years after his accident, suffered from repeated epileptic seizures. With the approval of HM and his family, tissue from the medial temporal lobe, including the hippocampus, was removed in an experimental surgery on both sides of his brain. Although HM remembered his childhood very well and his personality seemed relatively unchanged after the surgery, he suffered from anterograde amnesia - not being able to transfer new information from short-term memory to long-term memory. Milner studied HM longitudinally through different methods such as psychometric testing, direct observation, interviews, MRI scans, and cognitive testing. The researchers found that HM could not acquire new episodic memories [memories of autobiographical events] or semantic knowledge [general world knowledge]; however, procedural memories were not impacted. They concluded that the hippocampal region plays a significant role in memory formation.

One of the strengths of case studies is that they collect rich data. Case studies collect data over a long period of time, accounting for both short-term and long-term effects on the patient's behaviour. They also use a more holistic approach than experiments, looking at a range of behaviours rather than measuring a single dependent variable. In addition, the use of method triangulation increases the validity of the results. Nevertheless, generalizability is one of the most critical limitations of this type of research method: case studies often examine brain abnormality or damage that is unique to an individual, and for that reason the observed results cannot be generalized to the behaviour of all human beings. In addition, a causal relationship cannot be established as an independent variable is not manipulated in this type of research. Lastly, it might also be difficult for the researcher to acquire and verify information about the patient prior to his/her accident that may otherwise be of some use when drawing conclusions.

Obtaining informed consent is an ethical consideration in many studies of the brain and behaviour. Informed consent means that before participants agree to participate in a study, the researcher must explain the purpose and procedure of the study. Additionally, the researcher must explain the participants' rights - including the right to withdraw and that all data will remain anonymous. Any potential negative effects of participation must be fully explained. The biological approach has some special problems with regard to informed consent. Biological researchers often do studies of people who have a mental illness or brain damage, and it could be argued that these participants may not be able to understand what they are agreeing to. Biological research is also rather complex and may not be understood by the average person, making "informed consent" difficult. For example, in the study of retrograde and anterograde amnesia in HM by Milner, not only was consent presumptive, but HM may not have been able to take advantage of his right to withdraw because he may not have understood it, or may have forgotten it, due to the brain damage he had sustained.

Informed consent is important so that researchers don't take advantage of participants. However, because much research into the brain and behaviour involves brain damage, obtaining informed consent can be difficult. Additionally, obtaining informed consent may result in demand characteristics, which would invalidate the research; this can be overcome through debriefing. However, in many cases in the biological approach, such as HM, the participants suffer from brain damage and are unlikely to exhibit demand characteristics, so informed consent should be obtained. This may also be difficult because the participants may not understand what they are consenting to. This can be overcome using presumptive consent - in HM's case, from his mother.

Research into brain and behaviour should be ethical in terms of confidentiality and anonymity in order to minimise the consequences of labelling individuals and to reduce psychological harm. Maintaining anonymity is crucial when researching health problems because the data is socially sensitive, and if the individual can be identified by the public this can lead to discrimination and therefore psychological harm. The findings of the study by Raine (1997) suggest a biological correlation between impulsive behaviour and a lack of pre-frontal cortex activity. This labelling can cause psychological harm because it suggests that it is the individuals' own biology that is causing the impulsive behaviour. However, when conducting a cost-benefit analysis, it becomes clear that this labelling is for the greater good, because if the research is valid then the findings can be generalised, allowing for greater intervention through education programmes to reduce behaviours that may lead to violence. Therefore, while research into brain and behaviour may result in psychological harm, the ends justify the means if appropriate intervention techniques are implemented for both the participants in the study and the population to which the findings apply.

Additionally, labelling can be self-fulfilling, causing an individual to believe that their behaviour is out of their control, and therefore labelling individuals in the study of brain and behaviour can be deterministic. However, this labelling is important as it allows for medical intervention because individuals with biological predispositions can be identified. This means that, on a cost-benefit analysis, confidentiality may be breached: if the researcher holds the participants' data but does not share it publicly, then the participant can be helped by the researcher without discrimination by society. Raine can ensure that anonymity and confidentiality are maintained through careful reporting of the results after the research has been conducted, as well as through reflexivity. This research demonstrates the importance of a cost-benefit analysis when making ethical considerations in research concerning the brain and behaviour, as it was concluded that confidentiality may be breached but anonymity needs to be maintained, ensuring that the research is deemed ethical in terms of the socially sensitive data it is reporting.

Zhou wanted to see the impact of androstadienone (AND), a potential human pheromone found in male sweat, on human sexual behaviour. In the study, four groups of participants - heterosexual males, heterosexual females, homosexual males and homosexual females - were shown images of stick figures walking on a screen. Each group did one trial in which they were exposed to the smell of cloves while they watched the stick figures moving, and another trial in which the cloves were mixed with a high dose of AND. The researchers found that in the trial with AND, heterosexual females and homosexual males rated the stick figures as more masculine. The researchers also carried out the same study with the female equivalent of AND (estratetraenol) and found similar results with heterosexual males and homosexual females. The researchers concluded that AND could be a human pheromone that plays a role in sexual attraction.

The fact that the effect of Androstadienone was only seen with heterosexual females and homosexual males does suggest that it impacts us on the basis of sexual orientation. However, even though the study is highly standardized, there are a number of issues present in the study. Firstly, the study is very artificial and therefore has a problem with ecological validity. The level of AND present in the study was at a far higher concentration than what is seen in human males. It could be argued that the levels of AND in the real world are too low to be detected. Finally, this study does nothing to show that AND or estratetraenol are used to signal mating behaviour or associated with attraction. Signalling pheromones are used to cause rapid behavioural changes leading to mating behaviour and there is no evidence for this.

Additionally, Hare et al (2017) did a study attempting to replicate the findings of Zhou et al (2014) and failed to do so. For these reasons, this study cannot be said to prove the existence of a human pheromone.
Although the study showed a significant difference in behaviour, there are some concerns with it. First, the participants were exposed to very high levels of the pheromones; it is unclear that this response would happen in a naturalistic setting. Secondly, although participants identified the figures as masculine or feminine, this is not a clear study of sexual attraction but rather of whether participants perceived a person's walk as feminine or masculine. It can be debated whether this is a reliable measure of sexual behaviour. Finally, the study was done on a relatively small sample. The study would need to be replicated on a much larger sample in order to determine whether the results are reliable.

A more promising study was carried out by Doucet et al (2009) on the role of the secretions of the areolar glands in the suckling behaviour of 3-day-old infants. The areolar glands are located near the nipple. The researchers administered different secretions to the infants nasally and then measured their behaviour and breathing rate. The researchers compared the infants' reactions to seven different stimuli - including secretions of the areolar glands, human milk, cow milk, formula milk, and vanilla. They found that the infants began sucking only when exposed to the secretions of the areolar glands. In addition, there was a significant increase in their breathing rate. The researchers argue that the stimulus of the areolar odour may initiate a chain of behavioural and physiological events that lead to the progressive establishment of attachment between the mother and the infant. However, more research is necessary to draw these conclusions definitively.

Those who argue that human pheromones do not exist, and therefore cannot affect human behaviour, point to the fact that the detection of pheromones in mammals usually relies on the vomeronasal organ (VNO), a collection of sensory neurons deep in the nose that transmits signals via the accessory olfactory bulb to the hypothalamus. So far, neither the VNO nor the accessory olfactory bulb has been shown to be functional in adult humans, though both exist in the foetus up to about 18 weeks. Trotier et al used endoscopic observation of 2031 adults, followed up with CT scans of the nasal cavity of 7 adults, to conclude that there are only pits which are remnants of where the VNO used to be. The VNO is not connected to any receptor neurons and does not operate as a sensory organ, though it may function in the foetus. The question remains as to whether the VNO is the only means of detecting pheromones. If it isn't, and humans are able to detect them in a different way, then the possibility of their existence may resurface.

In an experiment conducted by Newcomer et al, the effect of cortisol levels on verbal declarative memory was tested. The participants were matched on age and gender and assigned to one of three conditions. In the first condition, the participants were given a 160 mg tablet of cortisol daily during the four-day experiment; these tablets produced a level of cortisol that one would experience during a major stress event. In the second condition, the participants took a 40 mg tablet of cortisol per day, replicating the level of cortisol experienced during a low-stress event. In the last condition, participants took placebo tablets; the function of this was to provide a control group that eliminated the effect of taking a pill in itself. Each participant had to listen to a prose paragraph and then recall it, over a period of four days, in order to test their verbal declarative memory. The participants in the high-cortisol condition performed the worst on the verbal declarative memory task, suggesting that high levels of cortisol have a significant negative effect on verbal declarative memory. On the other hand, the low-level cortisol condition showed better recall than the placebo group. These findings suggest that low levels of cortisol may actually enhance verbal declarative memory.
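As a rough illustration of matching participants to conditions (the procedure, names and ages below are invented, not Newcomer's actual allocation method; matching on gender could be handled the same way within gender strata):

```python
# Illustrative matching: sort participants by age and deal them out across
# the three conditions so the groups end up with similar age profiles.
participants = [("A", 24), ("B", 31), ("C", 27), ("D", 35), ("E", 29),
                ("F", 22), ("G", 33), ("H", 26), ("I", 30)]

conditions = {"high_cortisol": [], "low_cortisol": [], "placebo": []}
names = list(conditions)
for i, person in enumerate(sorted(participants, key=lambda p: p[1])):
    conditions[names[i % 3]].append(person)

for condition, group in conditions.items():
    print(condition, group)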

As the researchers manipulated the independent variable, a cause and effect relationship could be established - specifically, the effect of cortisol levels on verbal declarative memory. The experiment was highly standardised and therefore easily replicable; replication would allow the reliability of the findings to be tested. This was a double-blind laboratory experiment, meaning that neither the experimenter nor the participant knew which group was receiving the cortisol pills and which the placebo pills. This reduced demand characteristics such as the expectancy effect, in which the participant acts the way they think they should act in order to produce data that aligns with the hypothesis. The participants were not aware of the dose of cortisol they were given and therefore could not try to produce desirable results. However, given that the experiment was conducted over a period of four days, extraneous variables from the participants' daily lives were not controlled, suggesting lower internal validity: depending on the events of the participants' day, their cortisol levels could have fluctuated and influenced the findings.

Ackermann et al. (2013) investigated whether individual differences in cortisol levels could predict picture encoding (learning) and recall.
Over a thousand healthy young male and female participants between 18 and 35 years old viewed different sets of pictures on two consecutive days. The pictures comprised two sets (Set 1 and Set 2) of 24 positive, 24 negative, and 24 neutral pictures interleaved with 24 scrambled pictures. Both sets were recalled after a short delay (10 min). On Day 2, the pictures seen on Day 1 were additionally recalled, resulting in a long-delay (20 hr) recall condition. Cortisol levels were measured three times on Days 1 and 2 via saliva samples: before encoding (to get a basal measurement), between encoding and recall, and after recall testing. Stronger decreases in cortisol levels during retrieval testing were associated with better recall of the pictures, regardless of the emotional value of the pictures or the length of the retention interval (i.e., 10 min vs. 20 hr). The results support previous findings indicating that higher cortisol levels during retrieval testing lower the recall of episodic memories. This study demonstrates how the hormone cortisol affects the human behaviour of memory.

Psychologists believe that the stress hormone adrenaline plays a role in the creation of emotional memories. Adrenaline is responsible for stimulating the sympathetic nervous system. Cahill and McGaugh conducted a lab experiment in which there were two groups of participants. The first group was shown 12 slides accompanied by a rather boring story about a boy visiting his father in a hospital. The second group was shown the same 12 slides but told a traumatic story about a boy who was in a car accident in which his feet were severed. After the story, the participants were asked to rate their level of emotion. Two weeks later, the participants were asked to come back to answer a set of questions regarding the stories they were told; they had three options to choose from for each question. The researchers then did a follow-up study in which they repeated the procedure but injected the group exposed to the "traumatic" story with a beta-blocker, propranolol. Propranolol blocks the receptor sites for adrenaline and prevents the activation of the amygdala, thereby preventing the formation of emotional memories.
In the first study, the group exposed to the "traumatic" story remembered details more accurately and recalled more details from the slides than the group exposed to the uninteresting story. In the follow-up study, the group that was injected with beta-blockers and exposed to the "traumatic" story performed no better than the group exposed to the non-emotional story. This suggests that blocking the effect of adrenaline caused the inability to recall the "emotional" story.

By carrying out a highly controlled lab experiment, Cahill and McGaugh were able to establish a cause and effect relationship between adrenaline's interaction with the amygdala and the formation of emotional memories. The experiment was also highly standardised and therefore easy to replicate; similar results on replication would increase the reliability of the findings. But as the study was well controlled and rather simplistic, can we apply the findings to the "real world"? The research has been applied to the treatment of accident victims with the goal of preventing PTSD. Pitman et al (2002) carried out an experimental study in which patients coming into emergency rooms after a traumatic injury were given either beta-blockers (propranolol) or a placebo. One month after the traumatic event, people who had received the beta-blockers showed fewer symptoms of PTSD than those who had received the placebo. It appears that Cahill & McGaugh's findings may prove helpful in preventing the onset of PTSD in some patients following a trauma. One limitation of this experiment was its artificiality: it was highly controlled and took place in a lab setting, which raises questions about its ecological validity. Instead of experiencing an event themselves, participants were told a story accompanied by 12 images in a lab environment. In addition, the participants self-reported their emotional state; there was no objective measure.

reductionism - explain and link to eg
Elaborate Whilst this practice may be considered useful as it can aid understanding of the complex through more simplistic explanation, it can likewise be criticized for over-simplifying many complex human behaviours by attributing it to levels of specific hormones alone whilst ignoring other contributory factors.
Link This matters because such focus may neglect to consider the complexities of the behaviour and result in a very simple, basic explanation at the expense of a more holistic account.

Experiments are often used by researchers within the biological approach to establish a cause and effect relationship. Experiments start with a hypothesis. To test the hypothesis, researchers manipulate an independent variable to measure the effect on a dependent variable, while attempting to keep all other variables constant. Participants are randomly allocated to either a treatment group (where the IV is manipulated) or a control group (where the IV is not manipulated).
One example of an experiment was done by McGaugh and Cahill. They wanted to see the effect of the hormone adrenaline on the creation of emotional memories. They hypothesized that adrenaline interacts with the amygdala to create emotional memories. Participants were randomly allocated to one of three groups. Each group saw the same series of slides, but the first group (the control) heard a boring story. The treatment group heard a very emotional story about a boy who was in a car accident and had his feet severed. The third group heard the traumatic story but were given beta-blockers, which block the effect of adrenaline. Two weeks later, the participants were asked to answer a series of questions about the slides.
The researchers found that the participants who had heard the traumatic story remembered more details than those who had heard the unemotional story. They also found that those who had heard the traumatic story but had taken beta-blockers remembered no more than those who heard the boring story. The experiment indicates that adrenaline may play a significant role in the creation of emotional memories.
Strengths of experiments include that they attempt to control extraneous variables. By doing so, they have high internal validity - that is, you can say that the IV most likely caused the change in the DV. In addition, because they are highly standardized, they can be replicated. This allows other psychologists to test the reliability of the results.
However, experiments suffer from low ecological validity due to the highly controlled environment in which the behaviour is observed. The procedures are often highly artificial - such as the one by McGaugh and Cahill. It could be argued that the results do not show us how adrenaline functions under normal conditions. Experiments often have the problem of demand characteristics, where the participants figure out the goal of the experiment and then act in a way to "help out" the researcher; however, in biological research this is often not an issue. Blocking adrenaline seems to make it impossible for the participants to create strong memories of the story, regardless of whether the participants know the aim of the study. Finally, experiments in the biological approach often take a reductionist approach, looking at the effect of a single IV on a DV.

Correlational research can be used to study hormones and pheromones in humans more easily. Correlational studies differ from experiments in that no variable is manipulated by the researcher, which means that causation cannot be inferred. In correlational studies, two or more variables are measured and the relationship between them is mathematically quantified. Hormones such as testosterone have been correlated with antisocial behaviour and aggression. One example of such correlational research was conducted by Ehrenkranz et al: plasma testosterone was measured in 36 male prisoners - 12 with chronically aggressive behaviour, 12 who were socially dominant without physical aggressiveness, and 12 who were neither physically aggressive nor socially dominant. Here the two variables were testosterone levels and aggressive behaviour, and a positive correlation was found between them. Conducting correlational research is useful because it is more ethically practical for the participants and has higher ecological validity than lab experiments, as the subjects are investigated in their natural environments. Therefore, conducting correlational research to study hormones has advantages in terms of its practicality and the generalizability of the findings.
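To make "mathematically quantified" concrete, here is a minimal Python sketch that computes a Pearson correlation coefficient between testosterone levels and aggression ratings. The numbers are invented purely for illustration and are not Ehrenkranz's data:

```python
import numpy as np

# Invented values for illustration only (not Ehrenkranz's actual data):
# plasma testosterone (ng/dL) and an aggression rating for ten prisoners.
testosterone = np.array([420, 510, 610, 380, 700, 450, 560, 640, 390, 580])
aggression   = np.array([ 12,  18,  25,  10,  30,  14,  20,  27,  11,  22])

# Pearson's r quantifies the strength and direction of the linear relationship.
r = np.corrcoef(testosterone, aggression)[0, 1]
print(f"Pearson r = {r:.2f}")   # close to +1 => strong positive correlation
```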

One significant advantage of correlational research is that it can usually be carried out on humans rather than animals (unlike many laboratory experiments), providing psychologists with human data. This has practical value, as the findings can be more easily extrapolated to the target population. Correlations are very useful as a preliminary research technique, allowing researchers to identify a link that can then be investigated through more controlled research (such as a lab experiment). Additionally, correlations can be used to research sensitive topics, such as explanations for health problems; in this sense, correlational research is considered more ethical because no deliberate manipulation of variables is required. However, correlations only identify a link, such as that between plasma testosterone levels and aggression. They do not establish which variable influences which, and there may even be a third variable, not being considered, that is influencing one of the co-variables.

Informed consent is important so that researchers do not take advantage of participants. However, because much research into the brain and behaviour involves brain damage, obtaining informed consent can be difficult. Additionally, obtaining informed consent may result in demand characteristics, which would invalidate the research; this can be overcome through debriefing. In many cases in the biological approach, such as the case study of HM, the participants suffer from brain damage and are unlikely to exhibit demand characteristics, so informed consent should be obtained. However, this may also be difficult, as the participants may not understand what they are consenting to. This can be overcome using presumptive consent - in HM's case, consent was obtained from his mother.

Research into the brain and behaviour should be ethical in terms of confidentiality and anonymity in order to minimise the consequences of labelling individuals and to reduce psychological harm. Maintaining anonymity is crucial when researching health problems, as the data is socially sensitive; if an individual can be identified by the public, this can lead to discrimination and therefore psychological harm. The findings of the study by Ehrenkranz et al, for example, suggest a biological correlation between aggression and testosterone. Such labelling can cause psychological harm because it suggests that the individual's behaviour is driven by their biology. However, when conducting a cost-benefit analysis, it becomes clear that this labelling may serve the greater good: if the research is valid, then the findings can be generalised, allowing for intervention through education programmes to reduce behaviours that may lead to violence. Therefore, while research into brain and behaviour may result in psychological harm, the ends can justify the means if appropriate intervention techniques are implemented for both the participants in the study and the population to which the findings apply.

Additionally, labelling can be self-fulfilling, causing an individual to believe that their behaviour is out of their control; in this sense, labelling individuals in the study of brain and behaviour can be deterministic. However, such labelling can also be important because it allows for medical intervention: individuals with biological predispositions can be identified. This means that, upon a cost-benefit analysis, confidentiality may sometimes be breached, because if the researcher holds the participants' data but does not share it publicly, the participant can be helped by the researcher without being discriminated against by society. Researchers such as Raine can ensure that anonymity and confidentiality are maintained through careful reporting of the results after the research has been conducted, as well as through reflexivity. This demonstrates the importance of a cost-benefit analysis when making ethical considerations in research concerning the brain and behaviour: confidentiality may be breached in limited ways, but anonymity needs to be maintained, ensuring that the research is ethical in terms of the socially sensitive data it reports.

Family studies look at trends in behaviour over several generations in order to see if the behaviour "runs in the family." Weissman carried out a 20-year study to see if major depression might be genetic. The study collected data on depressed patients and non-depressed participants, all of whom had children at the beginning of the study. The original participants, their children and then their grandchildren were all assessed for major depressive disorder by a clinician who was blind to each child's family history. The study found that by 12 years old, almost 60% of the grandchildren in families with the disorder were showing signs of a psychiatric disorder. Children had an increased risk of any disorder if depression was observed in both the grandparents and the parents, compared with children whose parents were not depressed. This study seems to indicate a potential genetic link to depression.

Weissman's study is longitudinal, demonstrating change over time. They were able to gather prospective data, rather than relying on family history. Family studies are limited in that they can only look at around three living generations. In order to go further back, they are reliant on family memory. In the study of mental illness, often there are stories about older generations, but they lack an official diagnosis - or the diagnostic criteria have changed. Family stories are also anecdotal in nature and may be open to memory distortions. Although family studies indicate a potential genetic link to behaviour, there is no genotype studied - so it is not possible to determine if a specific gene might be responsible. In addition, family studies do not control for environmental factors.

Twin studies attempt to solve the problem of not identifying the genotype by using identical (MZ) and fraternal (DZ) twins. Although the genotype is still not identified, MZ twins have the same DNA and DZ twins do not. Kendler carried out a study of over 15,000 twins. If MZ twins had a higher concordance rate for depression than DZ twins, it could be argued that depression might be genetic. In addition to filling in questionnaires about their mental health, the twins were also asked questions about their personal life experiences. Kendler found that MZ twins had a significantly higher concordance rate than DZ twins. Life experiences had no significant effect on the data, meaning that environmental factors did not play a significant role.

Like family studies, twin studies are correlational and do not establish a cause-and-effect relationship. In addition, no specific genes were identified in this study. Many twin studies have produced similar results, so the findings are reliable. However, there are some limitations. As no physiological measurement is taken, all data is self-reported. In addition, the interviewers did not officially diagnose the twins but instead accepted previous diagnoses; this assumes that those diagnoses were valid and that those who were not diagnosed actually do not have the disorder. However, the large sample size helps to increase the reliability of the data. Although this study attempted to rule out environmental factors, these were also self-reported. Adoption studies are a natural experiment that allows researchers to investigate the role of environment versus genetics more directly.

Another weakness of twin studies is that the 'equal environments assumption' may lack validity. It assumes that MZ and DZ twins are treated with equal degrees of similarity by parents, friends, teachers and so on, so that concordance rates can be compared. Views differ, but many psychologists continue to question the equal environments assumption (e.g. Joseph 2002). It is argued that, because they are identical, MZ twins are treated more similarly than DZ twins, and therefore twin studies may overestimate the extent of genetic influence and underestimate the extent of environmental influence. In addition, DZ twins can be different sexes. This is relevant to anorexia nervosa (AN), especially because of the greater prevalence of AN in females. Although AN is not an inevitable outcome of genetic risk, because environmental and cognitive factors play an essential role (as indicated by the diathesis-stress model), the contribution of genes to AN is clear.

Adoption studies compare the behaviour of a child to that of both the biological and the adoptive parents. The idea is that if the behaviour is genetic, then the child's behaviour should be more similar to the biological parents than to the adoptive parents. This assumes that the adoptive environment is different from the environment of the birth parents. If the behaviour is more similar to that of the adoptive parents, who are not genetically related to the child, then the assumption is that the behaviour is the result of environmental factors. Sorensen carried out a study on Danish adoptees to see if obesity might have a genetic origin. This was a longitudinal study that tracked children's height and weight over a six-year period, as well as their adult weight. BMI was calculated and compared to the BMI of both the birth family (parents and siblings) and the adoptive family. Sorensen found a significantly higher correlation between the BMI of the adopted participants and their birth family than between their BMI and that of the adoptive family.
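The logic of Sorensen's comparison can be sketched in the same way as the earlier correlation example: correlate the adoptees' BMI with the birth family's BMI and with the adoptive family's BMI, then compare the two coefficients. The values below are invented for illustration and are not Sorensen's data:

```python
import numpy as np

# Invented BMI values for illustration only (not Sorensen's data).
adoptee_bmi         = np.array([22.1, 27.4, 30.2, 24.8, 21.5, 29.0, 26.3, 23.7])
birth_family_bmi    = np.array([23.0, 28.1, 31.5, 25.2, 22.0, 30.4, 27.0, 24.1])
adoptive_family_bmi = np.array([26.5, 22.3, 24.9, 27.8, 25.1, 23.6, 28.2, 21.9])

r_birth    = np.corrcoef(adoptee_bmi, birth_family_bmi)[0, 1]
r_adoptive = np.corrcoef(adoptee_bmi, adoptive_family_bmi)[0, 1]

# A clearly higher correlation with the birth family than with the adoptive
# family is the pattern consistent with a genetic contribution to BMI.
print(f"r with birth family:    {r_birth:.2f}")
print(f"r with adoptive family: {r_adoptive:.2f}")
```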

This study relied on school records for childhood data. In addition, parents and adoptive parents were contacted by questionnaire for information about their weight, which means the data may be open to inaccuracies and demand characteristics. However, a large sample was used, which increases the reliability of the data. Adoption studies also assume that the environment in the adoptive home will be different, but there is often a policy of selective placement in which a family is chosen that is similar to the birth family. This may mean that the environment is not as well controlled as believed. Finally, there is the problem that adopted children are not representative of the larger population: knowing that one is adopted may have an effect on one's sense of self. This means that it may be difficult to generalize the findings.

A twin study was carried out by Kendler to investigate the rate at which both identical (MZ) and fraternal (DZ) twins develop depression. Using the Swedish Twin Registry, Kendler looked at over 40,000 twins and found that the concordance rate for female MZ twins was 44% compared with only 16% for DZ twins; in males, the rates were 30% and 10%. Looking first at the results for identical twins, the percentage is not 100%. This indicates that, if depression is genetic, having the genes for depression is not enough to make someone depressed; depression may occur only through interaction with the environment, resulting in gene expression. The fact that both MZ twins may not develop depression may have less to do with genes and more to do with the stressors that each has personally experienced. Inheriting the genes does not mean that a person will automatically develop depression. The lower rates for DZ twins compared with MZ twins support the theory of genetic inheritance, because fraternal twins are much less likely to have the same genetic make-up.
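One rough way to turn such figures into a heritability estimate is Falconer's formula, h2 = 2 x (rMZ - rDZ). Applying it directly to the concordance rates quoted above treats them as if they were twin correlations, which is a simplification, but the short Python sketch below illustrates the logic:

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
# Treating concordance rates as stand-ins for twin correlations is a
# simplification, but it illustrates how the MZ/DZ gap is interpreted.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Rough heritability estimate from MZ and DZ twin similarity."""
    return 2 * (r_mz - r_dz)

print(f"Females: h^2 is roughly {falconer_h2(0.44, 0.16):.2f}")  # about 0.56
print(f"Males:   h^2 is roughly {falconer_h2(0.30, 0.10):.2f}")  # about 0.40
```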

One issue with genetic research using clinical samples is that it often uses small numbers of participants, which can cause generalisability issues for the research used to support the theory. However, the concordance rates for AN are relatively consistent across studies. This consistency strengthens the validity of the genetic component and helps to address the extraneous variable of shared environment that is so often a methodological flaw in twin research. Concordance rates are still not 100%, however, implying that genetics is not the only factor involved. A further limitation is that twins are a very small part of the global population; although this study seems to support the theory of genetic inheritance, we cannot know for sure whether the results can be generalized to everybody. There is also the issue of a self-fulfilling prophecy: if identical twins believe that they are more likely to develop depression, they might start to exhibit more symptoms. This also raises issues of undue stress or harm, both because the study may contribute to the onset of depression and because it could leave participants in fear that they, too, will develop the illness. Finally, as noted above, the 'equal environments assumption' behind twin studies may lack validity, which means that studies like this one may overestimate the extent of genetic influence and underestimate the extent of environmental influence.

Modern research does not only use twin designs but also looks at specific genes. Caspi carried out a prospective longitudinal study of the effect of a mutation of the 5-HTT serotonin transporter gene. Caspi argued that people with two short alleles (the mutation) of the 5-HTT gene would be more likely to develop depression. Participants were allocated to groups based on the length of the alleles of their 5-HTT gene: the first group had one short and one long allele, the second had two short alleles, and the third had two long alleles. Participants were evaluated for depression and asked to fill out a questionnaire detailing major life events. Those with the mutation who had experienced major life stressors were more likely to exhibit symptoms of depression and suicidal ideation; participants with the mutation who had three or more stressful life events were the most likely to show depressive symptoms. The study shows that genes are not destiny, but that a combination of a genetic predisposition and environmental stressors may be. Modern genetic research therefore takes a more holistic approach, recognizing that a gene-environment interaction, rather than genes alone, often leads to behaviour. There have been replications of the study, so the research has been shown to be reliable. However, when dealing with depression, there are complications with the construct: it is difficult to know whether depression is the same across all cultures and whether serotonin plays a role in the origins of the disorder.
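Caspi's central claim - that depression risk depends on the combination of genotype and life stress rather than either alone - is the kind of pattern typically tested with an interaction term in a regression model. The sketch below runs such a model on synthetic data using statsmodels; the variable names and values are invented for illustration and are not Caspi's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic data, invented for illustration only (not Caspi's dataset).
# short_alleles: number of short 5-HTT alleles (0, 1 or 2)
# life_events:   count of stressful life events
short_alleles = rng.integers(0, 3, size=n)
life_events = rng.poisson(2, size=n)

# Build a gene-environment interaction into the simulated risk of depression.
logit_p = -2.5 + 0.1 * life_events + 0.1 * short_alleles + 0.4 * short_alleles * life_events
depressed = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

df = pd.DataFrame({
    "depressed": depressed,
    "short_alleles": short_alleles,
    "life_events": life_events,
})

# 'short_alleles * life_events' expands to both main effects plus their
# interaction; a positive interaction coefficient mirrors Caspi's pattern
# of risk depending on the combination of genotype and stress.
model = smf.logit("depressed ~ short_alleles * life_events", data=df).fit(disp=0)
print(model.summary())
```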

Wilhelm et al (2009) carried out a study to determine the effect of genetic testing for the 5-HTT gene, which is believed to play a role in depression. In this study, the researchers followed up with participants who had received genetic testing by asking them to fill out questionnaires. When asked about the most important benefits of genetic testing, participants said that it allowed for early intervention, provided the potential to prevent the onset of depression, and helped people with the gene variation to avoid stressors that might lead to the onset of depression.

When asked to identify the most important limitations of receiving such information, participants said that it could lead to insurance discrimination, lead to discrimination from employers, and make people with the gene variation feel more stressed or depressed.

Regardless of which variation of the 5-HTT gene was found, all participants reported more positive feelings than negative feelings. However, the participants with two short alleles demonstrated significantly higher distress levels after learning their result compared with the other participants.

The study gives us some insight into the ethical considerations of genetic testing, but it has some limitations. First, the sample had a mean age of 50 years; 42% of the participants had suffered from depression during their lifetime, and those who had not were unlikely to first develop the disorder at such a late age. In addition, the sample was highly educated and was made up only of people who had agreed to have the testing in the first place. Obviously, it is not possible to know the effect on those who refused to be tested.

An issue with the genetic explanation is that prevalence rates vary over time and across cultures. For example, depression prevalence rates are increasing in developing countries and vary between Western and non-Western countries. The gene pool has remained essentially constant over the same period, so genetics alone cannot explain the increase in rates of depression. Furthermore, rates of disorders such as AN vary within a single culture. These differences across time and location cannot be explained by genetic factors alone and are likely to be the result of psychological and environmental factors, such as economic changes leading to lifestyle change. This suggests that the genetic explanation may be considered reductionist, as it neglects other possible contributory factors.